Threading and Synchronization. Fahd Albinali


Parallelism
- Parallelism and pseudoparallelism
- Why parallelize? Finding parallelism
- Advantages: better load balancing, better scalability
- Disadvantages: process/thread overhead and communication

Parallelism (cont'd): distribute computation and data
- Assign which processor does which computation
- If memory is distributed, decide which processor stores which data (placement matters because accessing remote data requires communication)
- Data can also be replicated
- Goals: minimize communication and balance the computational workload; these are often conflicting goals

Parallelism (cont'd): synchronize and/or communicate
- On a shared-memory machine, synchronize: both mutual exclusion and sequence control (locks, semaphores, condition variables, barriers, reductions)
- On a distributed-memory machine, communicate: message passing
- Communication usually involves implicit synchronization

Definition of Thread
- Thread: lightweight process (LWP); a thread of instructions or thread of control
- Shares the address space and other global information with its process
- Registers, stack, signal masks, and other thread-specific data are local to each thread
- Threads may be managed by the operating system or by a user application
- Examples: Win32 threads, C threads, Pthreads

Threads have become prominent due to trends in:
- Software design: more naturally expresses inherently parallel tasks
- Performance: scales better to multiprocessor systems
- Cooperation: a shared address space incurs less overhead than IPC

Thread states
- Born state
- Ready state (runnable state)
- Running state
- Dead state
- Blocked state
- Waiting state
- Sleeping state: the sleep interval specifies for how long a thread will sleep

Figure 4.3 User-level threads.

User-level threads
- Perform threading operations in user space: threads are created by runtime libraries that cannot execute privileged instructions or access kernel primitives directly
- Implementation: many-to-one thread mapping; the operating system maps all threads in a multithreaded process to a single execution context
- Advantages:
  - The user-level library can schedule its threads to optimize performance
  - Synchronization is performed outside the kernel, avoiding context switches
  - More portable
- Disadvantage: the kernel views a multithreaded process as a single thread of control
  - Can lead to suboptimal performance if a thread issues I/O (one blocked thread blocks the entire process)
  - Threads cannot be scheduled on multiple processors at once

Thread Signal Delivery
- Two types of signals:
  - Synchronous: occur as a direct result of program execution; should be delivered to the currently executing thread
  - Asynchronous: occur due to an event typically unrelated to the current instruction; the threading library must determine each signal's recipient so that asynchronous signals are delivered properly
- Each thread is usually associated with a set of pending signals that are delivered when it executes
- A thread can mask all signals except those it wishes to receive

Thread Termination
- Thread termination (cancellation) differs between thread implementations
- Prematurely terminating a thread can cause subtle errors in processes, because multiple threads share the same address space
- Some thread implementations allow a thread to determine when it can be terminated, to prevent the process from entering an inconsistent state

Linux task state-transition diagram.

Example Concurrent Program (x is shared, initially 0)

  code for Thread 0:      code for Thread 1:
    foo( )                  bar( )
      x := x + 1              x := x + 2

Assume both threads execute at about the same time. What's the output?

Example Concurrent Program (cont.)
One possible execution order is:

  Thread 0: R1 := x         (R1 == 0)
  Thread 1: R2 := x         (R2 == 0)
  Thread 1: R2 := R2 + 2    (R2 == 2)
  Thread 1: x := R2         (x == 2)
  Thread 0: R1 := R1 + 1    (R1 == 1)
  Thread 0: x := R1         (x == 1)

Final value of x is 1 (!!)
Question: what if Thread 1 also uses R1?

Example Execution
[Figure: a linked-list Insert (elem->next := head; head := elem) interleaved with a Delete (t := head; head := head->next; return t) on a shared list. Depending on which steps interleave, the inserted element can be lost or the wrong element returned.]

Some Definitions
Race condition: when the output depends on the ordering of thread execution. More formally: (1) two or more threads access a shared variable with no synchronization, and (2) at least one of the threads writes to the variable.

More Definitions
Atomic operation: an operation that, once started, runs to completion (more precisely, logically runs to completion); it is indivisible. In this class, loads and stores are atomic: if thread A stores 1 into variable x and thread B stores 2 into variable x at about the same time, the result is either 1 or 2.
<await (B) S>: atomically evaluate B, wait until it is true, then execute S.

Critical Section
A section of code that must be executed by one thread at a time; if more than one thread executes it at a time, we have a race condition.
Example: the linked list from before. The Insert/Delete code forms a critical section. What about just the Insert or just the Delete code: is that enough, or do both procedures belong in a single critical section?

Critical Section (CS) Problem
Provide entry and exit routines:
- All threads must call entry before executing the CS
- All threads must call exit after executing the CS
- A thread must not leave the entry routine until it is safe
CS solution properties:
- Mutual exclusion: at most one thread is executing the CS
- Absence of deadlock: if two or more threads are trying to get into the CS, at least one succeeds
- Absence of unnecessary delay: if only one thread is trying to get into the CS, it succeeds
- Eventual entry: a thread trying to enter eventually gets into the CS

Structure of threads for the Critical Section problem
Threads do the following:

  while (1) {
      do other stuff    /* non-critical section */
      call enter
      execute CS
      call exit
      do other stuff    /* non-critical section */
  }

Critical Section Assumptions
- Threads must call enter and exit
- Threads must not die or quit inside a critical section
- Threads can be context-switched inside a critical section; this does not mean that the newly running thread may enter the critical section

Hardware Support
Provide an instruction that is:
- Atomic
- Fairly easy for the hardware designer to implement
Read/Modify/Write: atomically read a value from memory, modify it in some way, and write it back to memory. Use it to develop a simpler critical section solution for any number of threads.

Test and Set
Many machines have it:

  function TS(var target: bool) returns bool
      var b: bool := target;   /* return old value */
      target := true;
      return b;

Executes atomically.

Basic Idea with Atomic Instructions
- Each thread has a local flag; one variable is shared by all threads
- Use the atomic instruction on the shared variable with the flag: the thread that observes the change gets to go in, and other threads will not see this change
- When done with the CS, set the shared variable back to its initial state

Problems with the busy-waiting CS solution
- Complicated
- Inefficient: consumes CPU cycles while spinning
- Priority inversion problem: a low-priority thread in the CS and a high-priority thread spinning can end up causing deadlock (example: the Mars Pathfinder problem)
- We want to block when waiting for the CS

Locks
Two operations:
- Acquire: get the lock; if you can't, go to sleep
- Release: give the lock up, possibly waking up a waiter
entry( ) is then just Acquire(lock); exit( ) is just Release(lock). The lock is shared among all threads.

Problems with Locks
- Not general: they only solve the simple critical section problem, can't do more general synchronization, and often must enforce strict orderings between threads
- Condition synchronization: the need to wait until some condition is true
  - Example: bounded buffer (next slide)
  - Example: thread join

Semaphores (Dijkstra)
A semaphore is an object containing a (private) value and two operations; the semaphore value must be nonnegative.
- P operation (atomic): if the value is 0, block; else decrement the value by 1
- V operation (atomic): if a thread is blocked, wake one up; else increment the value
Semaphores are resource counters.

Critical Sections with Semaphores

  sem mutex := 1
  entry( ):  P(mutex)
  exit( ):   V(mutex)

Semaphores are more powerful than locks. For mutual exclusion, initialize the semaphore to 1.

Bounded Buffer (1 producer, 1 consumer)

  char buf[n]; int front := 0, rear := 0
  sem empty := n, full := 0

  Producer( )                       Consumer( )
    do forever                        do forever
      produce message m                 P(full)
      P(empty)                          m := buf[front]
      buf[rear] := m                    front := (front + 1) mod n
      rear := (rear + 1) mod n          V(empty)
      V(full)                           consume m

Bounded Buffer (multiple producers and consumers)

  char buf[n]; int front := 0, rear := 0
  sem empty := n, full := 0, mutexP := 1, mutexC := 1

  Producer( )                       Consumer( )
    do forever                        do forever
      produce message m                 P(full); P(mutexC)
      P(empty); P(mutexP)               m := buf[front]
      buf[rear] := m                    front := (front + 1) mod n
      rear := (rear + 1) mod n          V(mutexC); V(empty)
      V(mutexP); V(full)                consume m

Scratching the surface
Topics we have only touched on:
- Readers/Writers
- Barriers
- Monitors
- Fairness / enforcing ordering