CS 31: Intro to Systems Misc. Threading. Kevin Webb Swarthmore College December 6, 2018


Agenda: classic thread patterns; Pthreads primitives and examples of other forms of synchronization (condition variables, barriers, RW locks); message passing as an alternative to shared memory.

Common Thread Patterns: Producer/Consumer (a.k.a. bounded buffer); thread pool (a.k.a. work queue); thread per client connection.

The Producer/Consumer Problem. The producer produces data and places it in a shared buffer (at index in); the consumer removes data from the buffer (at index out) and consumes it. Cooperation: the producer feeds the consumer. How does data get from producer to consumer? How does the consumer wait for the producer?

Producer/Consumer: Shared Memory
shared int buf[N], in = 0, out = 0;
Producer:
    buf[in] = Produce();
    in = (in + 1) % N;
Consumer:
    Consume(buf[out]);
    out = (out + 1) % N;
Data is transferred in the shared memory buffer.

Producer/Consumer: Shared Memory
shared int buf[N], in = 0, out = 0;
Producer:
    buf[in] = Produce();
    in = (in + 1) % N;
Consumer:
    Consume(buf[out]);
    out = (out + 1) % N;
Data is transferred in the shared memory buffer. Is there a problem with this code?
A. Yes, this is broken.
B. No, this ought to be fine.

This producer/consumer scenario requires synchronization to...
shared int buf[N], in = 0, out = 0;
Producer: buf[in] = Produce(); in = (in + 1) % N;
Consumer: Consume(buf[out]); out = (out + 1) % N;
A. Avoid deadlock
B. Avoid double writes or empty consumes of buf[] slots
C. Protect a critical section with mutual exclusion
D. Copy data from producer to consumer

Adding Semaphores
shared int buf[N], in = 0, out = 0;
shared sem filledslots = 0, emptyslots = N;
Producer:
    wait(X);
    buf[in] = Produce();
    in = (in + 1) % N;
    signal(Y);
Consumer:
    wait(Z);
    Consume(buf[out]);
    out = (out + 1) % N;
    signal(W);
Recall semaphores: wait() decrements the semaphore and blocks if the semaphore's value drops below 0; signal() increments the semaphore and unblocks a waiting process (if any).

Suppose we now have two semaphores to protect our array. Where do we use them?
shared int buf[N], in = 0, out = 0;
shared sem filledslots = 0, emptyslots = N;
Producer:
    wait(X);
    buf[in] = Produce();
    in = (in + 1) % N;
    signal(Y);
Consumer:
    wait(Z);
    Consume(buf[out]);
    out = (out + 1) % N;
    signal(W);
Answer choices:
       X            Y            Z            W
A. emptyslots   emptyslots   filledslots  filledslots
B. emptyslots   filledslots  filledslots  emptyslots
C. filledslots  emptyslots   emptyslots   filledslots

Add Semaphores for Synchronization
shared int buf[N], in = 0, out = 0;
shared sem filledslots = 0, emptyslots = N;
Producer:
    wait(emptyslots);     // buffer full: producer waits
    buf[in] = Produce();
    in = (in + 1) % N;
    signal(filledslots);
Consumer:
    wait(filledslots);    // buffer empty: consumer waits
    Consume(buf[out]);
    out = (out + 1) % N;
    signal(emptyslots);
Don't confuse synchronization with mutual exclusion.

Synchronization: More than Mutexes
"I want to block a thread until something specific happens." Condition variable: wait for a condition to be true.

Condition Variables
In the pthreads library:
pthread_cond_init: initialize CV
pthread_cond_wait: wait on CV
pthread_cond_signal: wake up one waiter
pthread_cond_broadcast: wake up all waiters
A condition variable is associated with a mutex:
1. Lock mutex, realize conditions aren't ready yet
2. Temporarily give up mutex until CV signaled
3. Reacquire mutex and wake up when ready

Condition Variable Pattern
// independent code
lock(m);
while (conditions bad)
    wait(cond, m);
// proceed knowing that conditions are now good
signal(other_cond);  // let other thread know
unlock(m);

Condition Variable Example
shared int buf[N], in = 0, out = 0;
shared int count = 0;  // # of items in buffer
shared mutex m;
shared cond notempty, notfull;
Producer:
    item = Produce();
    lock(m);
    while (count == N)
        wait(m, notfull);
    buf[in] = item;
    in = (in + 1) % N;
    count += 1;
    signal(notempty);
    unlock(m);
Consumer:
    lock(m);
    while (count == 0)
        wait(m, notempty);
    item = buf[out];
    out = (out + 1) % N;
    count -= 1;
    signal(notfull);
    unlock(m);
    Consume(item);

Synchronization: More than Mutexes
"I want to block a thread until something specific happens." Condition variable: wait for a condition to be true.
"I want all my threads to sync up at the same point." Barrier: wait for everyone to catch up.

Barriers
Used to coordinate threads, but also other forms of concurrent execution. Often found in simulations that proceed in discrete rounds (e.g., Conway's Game of Life).

Barrier Example, N Threads
shared barrier b;
init_barrier(&b, N);
create_threads(N, func);

void *func(void *arg) {
    while (...) {
        compute_sim_round();
        barrier_wait(&b);
    }
}

[Diagram: threads T0-T4 running toward the barrier (0 waiting); time flows downward]

Barrier Example, N Threads (continued): threads make progress computing the current round at different rates. [Diagram: same code; T0-T4 at different points above the barrier (0 waiting)]

Barrier Example, N Threads (continued): threads that make it to the barrier must wait for all others to get there. [Diagram: same code; T0, T1, T2 at the barrier (3 waiting); T3, T4 still computing]

Barrier Example, N Threads (continued): the barrier allows threads to pass once all N threads reach it. [Diagram: same code; T0-T4 all at the barrier (5 waiting)]

Barrier Example, N Threads (continued): threads compute the next round, wait on the barrier again, and repeat. [Diagram: same code; barrier reset (0 waiting); T0-T4 computing again]

Synchronization: More than Mutexes
"I want to block a thread until something specific happens." Condition variable: wait for a condition to be true.
"I want all my threads to sync up at the same point." Barrier: wait for everyone to catch up.
"I want my threads to share a critical section when they're reading, but still safely write." Readers/writers lock: distinguish how the lock is used.

Readers/Writers
Readers/Writers problem:
An object is shared among several threads.
Some threads only read the object; others only write it.
We can safely allow multiple readers, but only one writer.
pthread_rwlock_t:
pthread_rwlock_init: initialize rwlock
pthread_rwlock_rdlock: lock for reading
pthread_rwlock_wrlock: lock for writing

Common Thread Patterns Producer / Consumer (a.k.a. Bounded buffer) Thread pool (a.k.a. work queue) Thread per client connection

Thread Pool / Work Queue Common way of structuring threaded apps: Thread Pool

Thread Pool / Work Queue Common way of structuring threaded apps: Queue of work to be done: Thread Pool

Thread Pool / Work Queue Common way of structuring threaded apps: Queue of work to be done: Farm out work to threads when they're idle. Thread Pool

Thread Pool / Work Queue Common way of structuring threaded apps: Queue of work to be done: Thread Pool As threads finish work at their own rate, they grab the next item in queue. Common for embarrassingly parallel algorithms. Works across the network too!

Thread Per Client
Consider a web server:
Client connects.
Client asks for a page: http://web.cs.swarthmore.edu/~kwebb/cs31 ("Give me /~kwebb/cs31")
Server looks through the file system to find the path (I/O).
Server sends back HTML for the client browser (I/O).
The web server does this for MANY clients at once.

Thread Per Client
Server main thread:
Wait for new connections.
Upon receiving one, spawn a new client thread.
Continue waiting for new connections; repeat.
Client threads:
Read the client request, find files in the file system.
Send files back to the client.
Nice property: each client is independent.
Nice property: when a thread does I/O, it gets blocked for a while, and the OS can schedule another one.

Message Passing
send(to, buf) / receive(from, buf): an operating system mechanism for IPC. [Diagram: processes P1 and P2 exchange messages through the kernel]
send(destination, message_buffer)
receive(source, message_buffer)
Data transfer: into and out of kernel message buffers.
Synchronization: can't receive until a message is sent.

Suppose we're using message passing. Will this code operate correctly?
/* NO SHARED MEMORY */
Producer:
    int item;
    item = Produce();
    send(Consumer, &item);
Consumer:
    int item;
    receive(Producer, &item);
    Consume(item);
A. No, there is a race condition.
B. No, we need to protect item.
C. Yes, this code is correct.

This code is correct and relatively simple. Why don't we always just use message passing (vs. semaphores, etc.)?
/* NO SHARED MEMORY */
Producer:
    int item;
    item = Produce();
    send(Consumer, &item);
Consumer:
    int item;
    receive(Producer, &item);
    Consume(item);
A. Message passing copies more data.
B. Message passing only works across a network.
C. Message passing is a security risk.
D. We usually do use message passing!

Issues with Message Passing
Who should messages be addressed to? Ports (mailboxes) rather than processes/threads.
What if we want to receive from anyone? pid = receive(*, msg)
Synchronous (blocking) vs. asynchronous (non-blocking).
Kernel buffering: how many sends without receives?
A good paradigm for IPC over networks.

Summary
Many ways to solve the same classic problems. Producer/Consumer: semaphores, CVs, messages.
There's more to synchronization than just mutual exclusion! CVs, barriers, RW locks, and others.
Message passing doesn't require shared memory. Useful for threads on different machines.