Real-time Systems D0003E (3/10/2009)


Real-time Systems D0003E, Lecture 14: Real-time languages (Burns & Wellings ch. 8 & 9)

Overview
- A little lab advice
- Generic parking
- Synchronization: semaphores, the readers-writers problem, monitors, guards (CCRs)
- Concurrency: the Java concurrency model

Lab advice

Open the com port:

```c
int COM1;
COM1 = open("/dev/ttyS0", O_RDWR);   /* open com port */
```

Have a look at struct termios and tcgetattr(), cfsetispeed(), cfsetospeed() for configuring the port.

Blocked wait for both the keyboard and COM1:

```c
fd_set rfds;
FD_ZERO(&rfds);                              /* empty set */
FD_SET(0, &rfds);                            /* include keyboard (stdin) */
FD_SET(COM1, &rfds);                         /* include COM1 */
select(COM1 + 1, &rfds, NULL, NULL, NULL);   /* wait (first arg: highest fd + 1) */
if (FD_ISSET(0, &rfds)) {
    /* handle keypress */
}
```

The readers-writers problem

A very common problem in concurrent programming: a shared resource should allow several concurrent readers, but only one writer (and no readers while writing). Each reader/writer is a thread. How do we solve/implement this?

A readers-writers solution

```
global: readcount = 0
        sem m = 1, w = 1

reader entry:                          writer:
    wait(m)                                wait(w)
    readcount++                            // perform writing
    if readcount == 1 then wait(w)         signal(w)
    signal(m)
    // perform reading

reader exit:
    wait(m)
    readcount--
    if readcount == 0 then signal(w)
    signal(m)
```

Problems with this?

Readers-writers problem: problems with that solution?

- If a reader is reading, another reader is allowed in, but a writer is not.
- Given a long queue of readers and writers, all readers are let in and writers never get a turn: there is always a reader reading.
- Hmm, not very good: we have starvation of writers.

Readers-writers solution with writers preference

```
global: rcount = 0, wcount = 0
        sem m1 = 1, m2 = 1, m3 = 1, w = 1, r = 1

reader:                                  writer:
    wait(m3)                                 wait(m2)
    wait(r)                                  wcount++
    wait(m1)        // mutex over rcount     if wcount == 1 then wait(r)
    rcount++                                 signal(m2)
    if rcount == 1 then wait(w)              wait(w)
    signal(m1)                               // perform writing
    signal(r)                                signal(w)
    signal(m3)                               wait(m2)
    // perform reading                       wcount--
    wait(m1)                                 if wcount == 0 then signal(r)
    rcount--                                 signal(m2)
    if rcount == 0 then signal(w)
    signal(m1)
```

Holding r guarantees writers preference: once a writer is waiting, no new reader gets past wait(r).

Re: generic parking

Instead of encoding reactive objects, where each reaction REACT0 .. REACTn is triggered directly by its event, we are often forced to resort to a pattern of threads parked on specific events:

- Reactive objects: can safely share thread-local variables, as reactions are automatically serialized; fully generic, can wait for any combination of events.
- Parked threads: not generic, will only wait for some specific events; can only share global variables, and reactions must be manually serialized via a mutex or semaphore.

Criticism of semaphores

- Elegant, but an error-prone, low-level primitive.
- The two uses of semaphores, mutual exclusion and conditional synchronization, are not distinguished.
- There is nothing that binds a mutual-exclusion semaphore to the shared variables it protects.
- A misplaced or forgotten semaphore call can lead to total program corruption or deadlock.
- Historically important, but mostly inadequate for larger-scale concurrent programming; can be used to support other primitives.

Monitors

Idea: objects and modules successfully control the visibility of shared variables, so why not make mutual exclusion automatic for such a construct?
This is the idea behind monitors, a synchronization construct found in many early concurrent languages (Modula-1, Concurrent Pascal, Mesa). Monitors elegantly solve the mutual exclusion problem; for conditional synchronization, a mechanism of condition variables is used. We give an example in a C-like syntax; note, though, that neither C nor C++ supports monitors directly.

A monitor BB

```
monitor BoundedBuffer {
    condition notempty, notfull;
    int count = 0, head = 0, tail = 0;   /* only visible within put and get */
    T buf[SIZE];

    void put(T item) {
        if (count == SIZE) wait(notfull);
        buf[head] = item;
        head = (head + 1) % SIZE;
        count++;
        signal(notempty);
    }

    void get(T *item) {
        if (count == 0) wait(notempty);
        *item = buf[tail];
        tail = (tail + 1) % SIZE;
        count--;
        signal(notfull);
    }
}
```

Only one thread can be running put or get at any time (the others are queued). However, a thread can temporarily give up control of the monitor by calling wait.

Monitors: things to note

- While mutual exclusion is implicit, conditional synchronization is still handled explicitly.
- Condition variables are local to a monitor, so the risk of improper use is reduced.
- Observe: condition variables are not counting; every wait call blocks.
- While a thread blocks on wait, the monitor is temporarily opened up to some other thread, so that someone may eventually be able to call signal.

A concern: who gets to run after a signal? The threads waiting outside the monitor (let new threads in), or the ones that have called wait (release threads blocked in the monitor; this is often what we want)? Different monitor variants take different routes.

Monitors and POSIX

Although C doesn't offer any explicit monitor construct, the POSIX library does provide the building blocks for constructing a monitor. We have already seen one part of this support: the mutex variables. The other part is a mechanism for building condition variables:

- A datatype: pthread_cond_t
- An init operation: pthread_cond_init(&cv, &attr)
- A signal operation: pthread_cond_signal(&cv)
- A wait operation: pthread_cond_wait(&cv, &mut)

Note how the wait call specifies the mutex to be temporarily released.

A monitor BB in POSIX/C

```c
pthread_mutex_t m;
pthread_cond_t notempty, notfull;
int count = 0, head = 0, tail = 0;
T buf[SIZE];

/* at startup: */
pthread_mutex_init(&m, NULL);
pthread_cond_init(&notempty, NULL);
pthread_cond_init(&notfull, NULL);

void put(T item) {
    pthread_mutex_lock(&m);
    if (count == SIZE)
        pthread_cond_wait(&notfull, &m);
    buf[head] = item;
    head = (head + 1) % SIZE;
    count++;
    pthread_cond_signal(&notempty);
    pthread_mutex_unlock(&m);
}

void get(T *item) {
    pthread_mutex_lock(&m);
    if (count == 0)
        pthread_cond_wait(&notempty, &m);
    *item = buf[tail];
    tail = (tail + 1) % SIZE;
    count--;
    pthread_cond_signal(&notfull);
    pthread_mutex_unlock(&m);
}
```

Java concurrency

The Java threading model views threads and objects as orthogonal concepts: objects are not concurrent, and a thread does not encapsulate state. However, Java uses objects to represent threads (just like POSIX/C uses variables of type pthread_t):

```java
class MyThread extends Thread {
    public void run() {
        // your code
    }
}

MyThread t1 = new MyThread();
t1.start();   // continues in parallel with the code in t1.run()
```

Java and monitors

Note that methods in object t1 can still be called from the main thread in the normal fashion; likewise, thread t1 can freely call methods in any object. However, to solve the inevitable synchronization problems, Java takes the route that every object may also double as a monitor:

- Restriction 1: to achieve mutual exclusion, each method of the object must be prepended with the keyword synchronized.
- Restriction 2: only one condition variable per object (monitor) is supported, identified with the object itself.

A Java BB

```java
public class BoundedBuffer {
    private int count = 0, head = 0, tail = 0;   // must be private to be truly encapsulated
    private T[] buf = new T[SIZE];

    public synchronized void put(T item) {
        if (count == SIZE) wait();
        buf[head] = item;
        head = (head + 1) % SIZE;
        count++;
        notify();
    }

    public synchronized T get() {
        if (count == 0) wait();
        T item = buf[tail];
        tail = (tail + 1) % SIZE;
        count--;
        notify();
        return item;
    }
}
```

wait may only be called if the monitor lock is held, i.e., within synchronized methods.

Criticism of monitors

- Monitors elegantly solve the mutual exclusion problem, but the conditional synchronization problem is still dealt with using low-level condition variables.
- All criticisms surrounding semaphores apply to condition variables as well.
- The use of nested monitor calls (monitors that call other monitors) is inherently problematic: only the innermost monitor is opened up while the caller waits.
- Can a higher-level alternative to condition variables be devised?
Guards

Assume that the condition that avoids the wait call is lifted out to guard the whole monitor method instead. That is, instead of letting threads in only to find that some condition doesn't hold, we keep the threads out until the condition does hold. This requires a new language construct that allows state-dependent boolean expressions outside the actual methods, yet still protected from concurrent access: a thread is blocked waiting on a boolean expression.

Guards are also called conditional critical regions (CCRs): a critical region with an extra condition, where we wait (blocked) until the condition is true. Very powerful.

Ada

Such a construct can be found in Ada, in the form of guards. Ada provides two relevant constructs for interprocess communication: protected objects (a monitor-like construct) and a general message-passing mechanism.

- Threads in Ada are called tasks, and are introduced by declaration (imagine a keyword in C indicating that a function body should start executing as a thread, without any call to pthread_create).
- Monitor methods as well as message-passing endpoints are called entries in Ada.
- Ada makes a clear distinction between the interface of a protected object or a task, and its implementation body.

An Ada protected object BB

```ada
protected BoundedBuffer is
   entry put (item : in T);
   entry get (item : out T);
private
   count : Integer := 0;
   head  : Integer := 0;
   tail  : Integer := 0;
   buf   : array (0 .. SIZE-1) of T;
end BoundedBuffer;

protected body BoundedBuffer is
   -- guard expressions follow "when"
   entry put (item : in T) when count /= SIZE is
   begin
      buf(head) := item;
      head := (head + 1) mod SIZE;
      count := count + 1;
   end put;

   entry get (item : out T) when count /= 0 is
   begin
      item := buf(tail);
      tail := (tail + 1) mod SIZE;
      count := count - 1;
   end get;
end BoundedBuffer;
```

Things to note

- The Ada type system limits the guards to simple boolean expressions.
- In principle the guards should be constantly re-evaluated; in practice a good compiler will only insert evaluation code when an entry is called (if the guard depends on an argument) or when an entry exits (if the guard depends on the local state).
- Note that the guards must be run under mutual exclusion if they depend on local state (the locks are handled by the compiler).
- The good thing is that the separation between real code and entry conditions is made explicit.

Message-passing in Ada

Ada's message-passing mechanism is characterized by:

- Full type safety (no need for type casts)
- Selective parking (blocking for several messages)
- Message guards (like for the protected objects)

Sending a message just amounts to calling an entry in a task's exported interface: BoundedBuffer.put(x). Receiving a message is done by an accept statement that mentions the name of an exported entry: accept put (item : in T) do ...

An Ada task-based BB

```ada
task BoundedBuffer is            -- interface
   entry put (item : in T);
   entry get (item : out T);
end BoundedBuffer;

task body BoundedBuffer is       -- an infinite-loop thread
   count : Integer := 0;
   head  : Integer := 0;
   tail  : Integer := 0;
   buf   : array (0 .. SIZE-1) of T;
begin
   loop
      select                     -- event alternatives, with additional guards
         when count < SIZE =>
            accept put (item : in T) do      -- message reception
               buf(head) := item;
               head := (head + 1) mod SIZE;
               count := count + 1;
            end;
      or
         when count > 0 =>
            accept get (item : out T) do
               item := buf(tail);
               tail := (tail + 1) mod SIZE;
               count := count - 1;
            end;
      end select;
   end loop;
end BoundedBuffer;
```

Remarks

Ada's tasks closely resemble our reactive objects:

- Entries are sequential, thus local task variables are automatically protected from concurrent access.
- Any number of unique entries (= methods) can be defined for a task.
- The select statement can also contain delays and interrupt handlers in parallel with ordinary entries.
- Asynchronous methods can be mimicked by inserting arbitrary code after an (empty) entry.

It seems like select is the generic parking operation we've been looking for!

Criticism of select

However, notice that there's nothing that guarantees that a task will respond to its entries just because it is parked in a select; the programmer may just as well select not to include an entry! As a result, an entry call might freeze indefinitely, until the event occurs that causes the receiving task to do a select that accepts it; i.e., every entry call becomes a potentially parking operation! A similar effect is caused by guards, and (in other systems) by condition variables and semaphores.

Excluding entries (or putting callers on hold using condition variables or semaphores) makes it temptingly easy to write software services like the bounded buffer, since calls occurring at the wrong time can essentially be ignored. However, the downside is that writing clients becomes correspondingly harder: if a client must react within a deadline, it cannot risk becoming parked on a put() in the middle of a reaction. In fact, the whole notion of a reaction becomes blurred if parking may silently occur anywhere: where does one reaction start and where does another one stop?

Our notion of a reaction: an event moves the reaction from waiting in the ready queue to running until the reaction is done (possibly blocked along the way because somebody else is using a claimed resource). Note the one-to-one correspondence between events and reactions!

A reaction that parks internally, hmm: triggered by one event, it runs, then parks on an entry/monitor/semaphore call that is not handled until some second event occurs, and only then runs to completion. Which reaction is that?

Avoiding indefinite blocking inside reactions is essential to obtaining predictable reaction times, hence programs that use Ada select, condition variables, or semaphores must be written very restrictively. The model enforced by our reactive objects directly captures the desired program structure! See the lecture 9 examples of reactive objects for a bounded buffer implementation that does not rely on silent parking.