
Locks Page 1 Using locks Monday, October 6, 2014 9:49 AM So far, we've seen situations in which locking can improve reliability of access to critical sections. In general, how can one use locks?

Locks Page 2 Some important concepts we will emphasize Monday, October 6, 2014 12:33 PM An atomic operation is one that is not interruptible and occurs in isolation from other operations. A thread-safe operation (or set of operations) behaves correctly even when used concurrently by multiple threads. An async-safe operation (also called reentrant in some circles) is an operation that can safely be invoked in a signal handler.

Locks Page 3 A few quandaries to ponder Monday, October 6, 2014 12:36 PM Pipes and raw I/O are thread-safe. Raw I/O (write) is async-safe, but formatted I/O (printf) is not. Certain operations are atomic, e.g., locking an unlocked mutex. In most cases, being atomic ensures both thread safety and async safety, but not vice versa.
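To make the write-versus-printf distinction concrete, here is a minimal sketch (mine, not one of the course examples; the handler name is made up) of a signal handler that sticks to the async-safe call:

#include <signal.h>
#include <unistd.h>

/* write() is async-signal-safe, so it may be called from a handler.
 * printf() is not: it may take an internal lock that the interrupted
 * code already holds, so calling it here could deadlock. */
void handle_sigint(int sig) {
    (void)sig;
    const char msg[] = "caught SIGINT\n";
    write(STDERR_FILENO, msg, sizeof(msg) - 1);    /* async-safe */
    /* printf("caught SIGINT\n"); */               /* NOT async-safe */
}

int main(void) {
    signal(SIGINT, handle_sigint);
    for (;;) pause();                              /* wait for signals */
}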

Locks Page 4 The deep semantics of process interaction Tuesday, October 04, 2011 1:29 PM What is a process? What is a thread? What is a pipe? What is a file? What is a file buffer?

Locks Page 5 What is a pipe? Monday, October 12, 2015 2:21 PM Many of the anomalies we can experience show up in pipes. To understand these anomalies, it helps to understand what pipes are. The pipe is a very basic building block of operating system processes.

Locks Page 6 What is a pipe (cont'd)? Tuesday, October 04, 2011 1:43 PM What is a pipe? Semantically, a ring buffer of characters (an array queue). write enqueues to the buffer. read dequeues from the buffer. An over-simplification (that we will refine later): http://www.cs.tufts.edu/comp/111/examples/semantics/ring.c

Locks Page 7 Looking deeper Monday, October 6, 2014 10:32 AM What exactly is a pipe? As a first approximation, it is the ring buffer we demonstrated above, with read and write implemented as multiple dequeues/enqueues. But a pipe has to have more properties, including:
thread-safety and async-safety.
simultaneous usability by multiple processes.
blocking on read from an empty buffer and blocking on write to a full buffer.
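These blocking properties are easy to observe with a real pipe. Here is a small sketch (mine, not one of the course examples) in which the parent's read() blocks on the empty pipe until the child writes a second later:

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    char buf[64];
    pipe(fd);                          /* fd[0] = read end, fd[1] = write end */
    if (fork() == 0) {                 /* child: the writer */
        close(fd[0]);
        sleep(1);                      /* make the parent wait */
        write(fd[1], "hello", 5);
        close(fd[1]);
        return 0;
    }
    close(fd[1]);                      /* parent: the reader */
    ssize_t n = read(fd[0], buf, sizeof(buf));   /* blocks until the child writes */
    printf("parent read %zd bytes: %.*s\n", n, (int)n, buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}

The kernel does for the pipe exactly what the next pages do for the ring buffer by hand: the reader is taken off the run queue until data arrives.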

A simple and profound example Monday, October 6, 2014 10:34 AM

We'll take the simple ring buffer code and make it thread-safe. This is a metaphor for what the OS does to make pipes usable across multiple processes.

Step 1: identify critical sections. These are the sections that multiple threads should not enter at the same time.

// simple example of a ring buffer to explain a pipe
#include <stdio.h>
#include <string.h>
#define PSIZE 8192
char ring[PSIZE];
int begin = 0;
int end = 0;
int empty() { return begin == end; }
int full()  { return (end+1) % PSIZE == begin; }
int used()  { return (end - begin + PSIZE) % PSIZE; }
int avail() { return PSIZE - 1 - used(); }
void enqueue(char c) {                   // put one character into the buffer
    if (!full()) {
        ring[end] = c;
        end = (end+1) % PSIZE;
    }
}
char dequeue() {                         // remove one character from the buffer
    char out;
    if (!empty()) {
        out = ring[begin];
        begin = (begin+1) % PSIZE;
        return out;
    } else {
        return '\0';
    }
}
void read(char *buffer, int size) {      // make read atomic
    while (used() < size) sleep(1);      // parody of blocking
    while (!empty() && size) {
        *(buffer++) = dequeue();
        size--;
    }
}
Locks Page 8

void write(const char *buffer, int size) {   // make write atomic
    while (avail() < size) sleep(1);         // parody of blocking
    while (!full() && size) {
        enqueue(*buffer++);
        size--;
    }
}
char buffer[128];
main() {
    printf("begin=%d\n", begin);
    printf("end=%d\n", end);
    printf("used=%d\n", used());
    printf("avail=%d\n", avail());
    write("hello", strlen("hello")+1);
    printf("begin=%d\n", begin);
    printf("end=%d\n", end);
    printf("used=%d\n", used());
    printf("avail=%d\n", avail());
    read(buffer, 6);
    printf("begin=%d\n", begin);
    printf("end=%d\n", end);
    printf("used=%d\n", used());
    printf("avail=%d\n", avail());
    printf("got %s\n", buffer);
}
From <http://www.cs.tufts.edu/comp/111/examples/semantics/ring.c>
For an idea of what might go wrong, see http://www.cs.tufts.edu/comp/111/examples/semantics/race.c
Locks Page 9

Ring queues in action Tuesday, October 12, 2010 5:19 PM Locks Page 10

Locks Page 11 Locking and schedules Tuesday, October 11, 2011 3:20 PM A schedule is a sequence of "what happens when" in a concurrent program. Locks preclude some possible schedules. The game of locking: limit the possible schedules so that only desirable things can happen, WITHOUT LIMITING YOURSELF INTO DEADLOCK.

A very bad schedule Monday, October 6, 2014 6:46 PM

A schedule is a matrix of what happened when. Columns are threads; rows are statements. Here two threads both call enqueue on a queue with exactly one free slot:

Thread 1          Thread 2          queue state
if (!full())                        full-1
                  if (!full())      full-1
update                              full-1
                  update            full-1
increment                           full
                  increment         empty

Both updates write the same slot, and after the second increment end wraps onto begin, so the queue (which still holds data) suddenly tests as empty.
Locks Page 12

Locks Page 13 The problem Tuesday, October 12, 2010 1:29 PM It turns out that this code executes perfectly 99.9999% of the time. But there is a latent problem that is extremely rare. It might happen that an enqueue gets interrupted by another as predicted by the schedule above. Then all havoc breaks loose!

Locks Page 14 Designing a threaded P/C program Tuesday, October 12, 2010 12:39 PM Designing a threaded program:
Separate parts of the program into producer and consumer.
Identify critical sections of shared code that should be atomic.
Surround critical sections (that you define) with mutex (mutual exclusion) locks.

Locks Page 15 Sharing the ring buffer Tuesday, October 12, 2010 1:27 PM

void *threaded_routine_1(void *v) {
    int i;
    printf("hello from the thread!\n");
    for (i = 0; i < 200; i++) {
        while (full()) ;            // busy-wait for room
        enqueue('a' + (i % 26));
    }
    sleep(1);
    enqueue(0);
    printf("bye from the thread!\n");
    return NULL;
}
void *threaded_routine_2(void *v) {
    int i;
    printf("hello from the thread!\n");
    for (i = 0; i < 200; i++) {
        while (full()) ;            // busy-wait for room
        enqueue('0' + (i % 10));
    }
    sleep(1);
    enqueue(0);
    printf("bye from the thread!\n");
    return NULL;
}
main() {
    pthread_t thread;
    void *retptr;
    printf("hello from the parent... creating thread\n");
    pthread_create(&thread, NULL, threaded_routine_1, NULL);
    pthread_create(&thread, NULL, threaded_routine_2, NULL);
    while (1) {
        while (empty()) ;           // busy-wait for data
        char c = dequeue();
        printf("parent got %c\n", c);
        if (c == 0) break;
    }
    pthread_join(thread, (void **)&retptr);
    printf("bye from the parent\n");
}

Locks Page 16 Pasted from <http://www.cs.tufts.edu/comp/111/examples/threads/race.c>

Locks Page 17 Steps in solving the problem Tuesday, October 12, 2010 1:30 PM
Identify critical sections that should be atomic.
Surround these with mutex locks.

Locks Page 18 What are the critical sections? Tuesday, October 12, 2010 1:30 PM

#define SIZE 50
int begin = 0, end = 0;
char queue[SIZE];
int empty() { return begin == end; }
int full()  { return ((end+1) % SIZE) == begin; }
void enqueue(char c) {
    // BEGIN CRITICAL SECTION
    if (!full()) {
        queue[end] = c;
        end = (end+1) % SIZE;
    } else {
        fprintf(stderr, "queue full\n");
    }
    // END CRITICAL SECTION
}
char dequeue() {
    // BEGIN CRITICAL SECTION
    if (!empty()) {
        char out = queue[begin];
        begin = (begin+1) % SIZE;
        // END CRITICAL SECTION
        return out;
    } else {
        // END CRITICAL SECTION
        fprintf(stderr, "queue empty\n");
        return 0;
    }
}
Pasted from <http://www.cs.tufts.edu/comp/111/examples/threads/race.c>

Locks Page 19 Problem solved Tuesday, October 12, 2010 1:33 PM

pthread_mutex_t locker = PTHREAD_MUTEX_INITIALIZER;   // mutex protecting the queue
#define SIZE 50
int begin = 0, end = 0;
char queue[SIZE];
int empty() { return begin == end; }
int full()  { return ((end+1) % SIZE) == begin; }
void enqueue(char c) {
    pthread_mutex_lock(&locker);
    if (!full()) {
        queue[end] = c;
        end = (end+1) % SIZE;
    } else {
        fprintf(stderr, "queue full\n");
    }
    pthread_mutex_unlock(&locker);
}
char dequeue() {
    pthread_mutex_lock(&locker);
    if (!empty()) {
        char out = queue[begin];
        begin = (begin+1) % SIZE;
        pthread_mutex_unlock(&locker);
        return out;
    } else {
        pthread_mutex_unlock(&locker);
        fprintf(stderr, "queue empty\n");
        return 0;
    }
}
Pasted from <http://www.cs.tufts.edu/comp/111/examples/threads/lock.c>

Locks Page 20 The important fact... Wednesday, October 8, 2014 6:13 PM ...is not so much that the process stops until it can acquire a lock. It is that the process is blocked and not runnable (off the run queue) until it gets the lock. => you can use locks to block programs, independent of their critical sections.
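Here is a minimal sketch of that idea (my example, not one of the course files): the main thread locks a mutex called go before starting a worker, and the worker's first act is to lock go, which parks it off the run queue until main unlocks it. No data is being protected; the mutex is used purely as a gate.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_mutex_t go = PTHREAD_MUTEX_INITIALIZER;   /* used purely as a "start" gate */

void *worker(void *v) {
    pthread_mutex_lock(&go);          /* blocked (not runnable) until main unlocks go */
    printf("worker released!\n");
    pthread_mutex_unlock(&go);
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_mutex_lock(&go);          /* close the gate before the worker exists */
    pthread_create(&t, NULL, worker, NULL);
    printf("worker created but held back...\n");
    sleep(2);                         /* the worker stays blocked this whole time */
    pthread_mutex_unlock(&go);        /* open the gate */
    pthread_join(t, NULL);
    return 0;
}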

A better approach Thursday, October 14, 2010 12:08 PM

Locks are not just for protecting critical sections! In fact, one can use mutexes to make I/O block!

pthread_mutex_t notempty;   // locked means: the queue is empty
pthread_mutex_t notfull;    // locked means: the queue is full
pthread_mutex_t modify;     // protects begin, end, and queue
#define SIZE 50
int begin = 0, end = 0;
char queue[SIZE];
// in essence, make these private
inline int empty() { return begin == end; }
inline int full()  { return ((end+1) % SIZE) == begin; }
void enqueue(char c) {
    int e;
    pthread_mutex_lock(&notfull);    // wait until not full!
    pthread_mutex_lock(&modify);     // modify queue
    /// BEGIN CRITICAL SECTION
    e = empty();
    queue[end] = c;
    end = (end+1) % SIZE;
    if (e) pthread_mutex_unlock(&notempty);        // ok for dequeue to run now.
    if (!full()) pthread_mutex_unlock(&notfull);   // ok to do another enqueue
    /// END CRITICAL SECTION
    pthread_mutex_unlock(&modify);
}
char dequeue() {
    int f;
    char out;
    pthread_mutex_lock(&notempty);   // wait until not empty
    pthread_mutex_lock(&modify);     // modify queue
    /// BEGIN CRITICAL SECTION
    f = full();
    out = queue[begin];
    begin = (begin+1) % SIZE;
    if (f) pthread_mutex_unlock(&notfull);            // ok for enqueue to work
    if (!empty()) pthread_mutex_unlock(&notempty);    // ok for another dequeue
    /// END CRITICAL SECTION
    pthread_mutex_unlock(&modify);
    return out;
}
Locks Page 21

See http://www.cs.tufts.edu/comp/111/examples/locks/block4.c Locks Page 22

Locks Page 23 Some notes on this solution Wednesday, October 11, 2017 5:11 PM When you lock notfull, you know that the state is not full and that no one else can do anything with that state but you. When you unlock notfull, you allow someone else to lock it. When you lock notempty, you know that the state is not empty and that you are the only one who can act on that state.

Locks Page 24 A bit of tuning Monday, October 12, 2015 3:45 PM

The critical sections in the preceding code are too long. Instead of

pthread_mutex_lock(&modify);   // modify queue
/// BEGIN CRITICAL SECTION
f = full();
out = queue[begin];
begin = (begin+1) % SIZE;
if (f) pthread_mutex_unlock(&notfull);            // ok for enqueue to work
if (!empty()) pthread_mutex_unlock(&notempty);    // ok for another dequeue
/// END CRITICAL SECTION
pthread_mutex_unlock(&modify);

we can write:

pthread_mutex_lock(&modify);   // modify queue
/// BEGIN CRITICAL SECTION
f = full();
out = queue[begin];
begin = (begin+1) % SIZE;
e = empty();   // remember the state while still inside the lock
/// END CRITICAL SECTION
pthread_mutex_unlock(&modify);
if (f) pthread_mutex_unlock(&notfull);     // ok for enqueue to work
if (!e) pthread_mutex_unlock(&notempty);   // ok for another dequeue

See http://www.cs.tufts.edu/comp/111/examples/locks/block5.c

Locks Page 25 The bounded buffer queue Wednesday, October 8, 2014 10:02 AM The previous example/pattern is called the bounded buffer queue. Dequeueing thread blocks on dequeue from empty queue until input present. Enqueueing thread blocks on enqueue of full queue until dequeue creates room. This is a basic building block for all producer/consumer programs.
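As a sketch of how the pattern gets used (my wiring, not from the course files, assuming the blocking enqueue()/dequeue() and the notempty/notfull/modify mutexes from the pages above): the producer and consumer never test empty() or full() themselves; all of the blocking lives inside the queue operations.

#include <pthread.h>
#include <stdio.h>

extern pthread_mutex_t notempty;      /* from the blocking-queue code above */
void enqueue(char c);                 /* blocks while the queue is full */
char dequeue(void);                   /* blocks while the queue is empty */

void *producer(void *v) {
    int i;
    for (i = 0; i < 200; i++)
        enqueue('a' + (i % 26));
    enqueue(0);                       /* terminator */
    return NULL;
}

void *consumer(void *v) {
    char c;
    while ((c = dequeue()) != 0)
        printf("consumer got %c\n", c);
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_mutex_lock(&notempty);    /* the queue starts empty, so notempty starts locked */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}

Note that, as in the code above, the thread that unlocks notempty or notfull is often not the thread that locked it. Ordinary (non-error-checking) mutexes do not enforce ownership, which is what this pattern relies on; POSIX formally expects a mutex to be unlocked by the thread that locked it.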

Locks Page 26 Why this works Thursday, October 14, 2010 12:14 PM Why this works:
notfull is unlocked => can enqueue.
notfull is locked => can't enqueue because the queue is full.
notempty is unlocked => can dequeue.
notempty is locked => can't dequeue because the queue is empty.

Locks Page 27 Locks and proof Thursday, October 14, 2010 12:16 PM There is no debugging method that can determine whether the preceding code is "correct". We need: no deadlocks, and no critical-section conflicts. Instead, one must "prove" correctness by careful reasoning:
Express the states of the system.
Show how state transitions occur and why.

Locks Page 28 A "proof" that notfull and notempty work Thursday, October 14, 2010 12:17 PM There are three states of the queue:
empty: nothing there, dequeue impossible.
full: filled up, enqueue impossible.
part-full (or part-empty): entries present, enqueue and dequeue possible.
But also note that:
enqueue only modifies the "end" index.
dequeue only modifies the "begin" index.
only one of enqueue or dequeue can enter the critical section at a time.

A picture of locking state Thursday, October 14, 2010 12:22 PM (Diagram: the four locking states are: neither enqueue nor dequeue running; enqueue running; dequeue running; both enqueue and dequeue running simultaneously.) Locks Page 29

Locks Page 30 End of lecture on 10/11/2017 Wednesday, October 11, 2017 5:55 PM

Locks Page 31 A true horror story Tuesday, October 11, 2011 12:26 PM

When I was working on this example, I had an extremely subtle locking error. I started by testing the example without the modify lock:

void enqueue(char c) {
    int e;
    pthread_mutex_lock(&notfull);    // wait until not full!
    pthread_mutex_lock(&modify);     // modify queue
    /// BEGIN CRITICAL SECTION
    e = empty();
    queue[end] = c;
    end = (end+1) % SIZE;
    if (e) pthread_mutex_unlock(&notempty);
    if (!full()) pthread_mutex_unlock(&notfull);
    /// END CRITICAL SECTION
    pthread_mutex_unlock(&modify);
}
char dequeue() {
    int f;
    char out;
    pthread_mutex_lock(&notempty);   // wait until not empty
    pthread_mutex_lock(&modify);     // modify queue
    /// BEGIN CRITICAL SECTION
    f = full();
    out = queue[begin];
    begin = (begin+1) % SIZE;
    if (f) pthread_mutex_unlock(&notfull);
    if (!empty()) pthread_mutex_unlock(&notempty);
    /// END CRITICAL SECTION
    pthread_mutex_unlock(&modify);
    return out;
}

This worked, but failed about 1/100 of the time. So I started checking for issues, and found that it's OK for enqueue and dequeue to happen at the same time as far as the data structure goes, but it's not OK as far as modifying the lock states!

Locks Page 32 The problem is that the statements that test empty() and full() to accomplish the state transitions may skip an unlock of a critical lock. One scenario: the queue is nearly empty, and dequeue and enqueue are called at the same time. Enqueue samples empty() while the last element is still present, so e is false. Dequeue then removes that element, emptying the queue, and its !empty() test fails, so it leaves notempty locked. Enqueue now adds its element, but since e was false it skips its unlock too. Result: the queue holds data, yet notempty is not unlocked!