Synchronization I. Today: race condition, critical regions and locks. Next time: condition variables, semaphores and monitors.

Synchronization I

Today: race condition, critical regions and locks
Next time: condition variables, semaphores and monitors

Cooperating processes

Cooperating processes need to communicate; they can affect, and be affected by, others. Issues with inter-process communication (IPC):
1. How to pass information to another process?
2. How to avoid getting in each other's way? (Think of two processes trying to get the last seat on a plane.)
3. How to ensure proper sequencing when there are dependencies? (Process A produces data that B prints; B must wait for A before printing.)

How about threads? 1 is easy; 2 and 3 are pretty much the same.

Accessing shared resources

Many times cooperating threads share memory. A common example: a print spooler.
- A thread that wants to print a file enters the file name in a special spooler directory.
- The printer daemon, another thread, periodically checks the directory, prints whatever file is there and removes the name.

    next_slot := in;                      // in = 4
    spooler_dir[next_slot] := file_name;  // insert "abc" in slot 4
    in++;

(Figure: spooler directory slots 3-8, with "zzz" in slot 3 and "abc" inserted in slot 4; Out = 3, In = 4.)

Accessing shared resources

Assumption: preemptive scheduling. Two threads, A and B, trying to print:

    A: next_slot_A := in                        // 7
    A: spooler_dir[next_slot_A] := file_name_A
    A: in := next_slot_A + 1                    // 8
       -- switch --
    B: next_slot_B := in                        // 8
    B: spooler_dir[next_slot_B] := file_name_B
    B: in := next_slot_B + 1                    // 9

(Figure: spooler directory with "abc", "filex" and "taxes" in slots 4-6, A's file name in slot 7 and B's in slot 8; Out = 3.)

Interleaved schedules

Problem: the execution of the two threads/processes can be interleaved. Sometimes the result of the interleaving is OK, other times not:

    A: next_slot_A := in                        // 7
       -- switch --
    B: next_slot_B := in                        // 7
    B: spooler_dir[next_slot_B] := file_name_B
    B: in := next_slot_B + 1                    // 8
       -- switch --
    A: spooler_dir[next_slot_A] := file_name_A  // overwrites B's entry in slot 7!
    A: in := next_slot_A + 1                    // 8

B's printout? B's file name is overwritten and its file is never printed.

(Figure: spooler directory with slot 7 holding fnamb, then overwritten with fnama; Out = 3, In = 8.)

Interleaved schedule: another example

     1 struct list {
     2     int data;
     3     struct list *next;
     4 };
     5
     6 struct list *list = 0;
     7
     8 void
     9 insert(int data)
    10 {
    11     struct list *l;
    12
    13     l = malloc(sizeof *l);
    14     l->data = data;
    15     l->next = list;
    16     list = l;
    17 }

Two threads A and B: what would happen if B executes line 15 before A executes line 16?

Race conditions and critical regions

Problem: the threads operating on the data assume certain conditions (invariants) hold.
- For the linked list: list points to the head of the list, and each element's next points to the next element.
- insert temporarily violates this, but fixes it before finishing.
- That is true for a single thread, not for two concurrent ones.

Race condition: two or more threads/processes access (read/write) shared data, and the final result depends on the order of execution.
Code where a race condition is possible is a critical region.

Race conditions and critical regions

We need mechanisms to prevent race conditions by synchronizing access to shared resources. (Some tools try to detect them, e.g. helgrind.)
We need a way to ensure the invariant conditions hold when a process is going to manipulate the shared item, i.e., to ensure that if a thread is using a shared item, others are excluded from doing so: only one thread at a time in the critical region (CR). Mutual exclusion.

Requirements for a solution

1. No two threads simultaneously in the CR: mutual exclusion, at most one thread in.
2. No assumptions on speeds or numbers of CPUs.
3. No thread outside its CR can block another one: ensure progress; a thread outside the CR cannot prevent another one from entering.
4. No thread should wait forever to enter its CR: bounded waiting, no starvation; threads waiting to enter a CR should eventually be allowed in.

How about taking turns?

Strict alternation: turn keeps track of whose turn it is to enter the CR.

    /* Thread 0 */                /* Thread 1 */
    while (true) {                while (true) {
        while (turn != 0)             while (turn != 1)
            ;                             ;
        critical_region0();           critical_region1();
        turn = 1;                     turn = 0;
        noncritical_region0();        noncritical_region1();
    }                             }

Problems? What if thread 0 leaves its CR, sets turn to 1, races through its noncritical region, and gets back to just before its critical region before thread 1 even tries? turn is 1 and both threads are in their noncritical regions, yet thread 0 is blocked even though no one is in a CR. This violates condition 3.

Locks

Using locks:
- A lock is a variable, so you have to declare it: lock_t mutex;
- Threads acquire the lock when entering a CR, and free it afterwards.
- The lock is either available (free) or acquired; it can hold other information, such as which thread holds it, or a queue of lock requests.
- lock(): if available, go on; else don't return until you have it.
- unlock(): if threads are waiting, they will (eventually) find out (or be told).

    void
    insert(int data)
    {
        struct list *l;

        lock(&mutex);
        l = malloc(sizeof *l);
        l->data = data;
        l->next = list;
        list = l;
        unlock(&mutex);
    }

Pthreads locks

In the POSIX library, a lock is called a mutex:

    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void
    insert(int data)
    {
        struct list *l;

        Pthread_mutex_lock(&lock);  /* a wrapper that checks for errors */
        l = malloc(sizeof *l);
        ...
        Pthread_mutex_unlock(&lock);
    }

Note that the call passes a variable to lock/unlock, to enable fine-grained locking (rather than a single coarse lock). Locks must be initialized before use, either statically as above or dynamically with pthread_mutex_init(&lock, NULL).

Implementing locks

Here is a simple implementation of lock():

    1 void
    2 lock(lock_t *lk)
    3 {
    4     while (lk->locked == 1)
    5         ;  /* spin-wait */
    6     lk->locked = 1;  /* now set it */
    7 }

Are we done? Not yet! Correctness problem: a context switch between lines 4 and 6, and there we go again. Both threads can concurrently test line 4, see the lock free, and grab it; now both are in the CR. Continuously testing a variable for a given value is called busy waiting; a lock that uses it is a spin lock, and spin-waiting is wasteful.

Implementing locks: disabling interrupts

The simplest solution: a thread disables all interrupts when entering the CR and re-enables them at exit.

    void lock()   { DisableInterrupts(); }
    void unlock() { EnableInterrupts(); }

No interrupts: no clock interrupts, no other process getting in your way. Obvious problems:
- Users are in control: a thread can grab the CPU and never give it back.
- What about multiprocessors? Disabling interrupts on one CPU does not stop threads running on the others.
- And yes, it is also inefficient.
It is used in the kernel; still, multi-core means we need something more sophisticated.

TSL (test-and-set)-based solution

Atomically test and modify the contents of a word: TSL. The CPU executing the TSL locks the memory bus to prohibit other CPUs from accessing memory until it is done. On SPARC it is ldstub (load and store); on x86 it is xchg.

    /* done atomically by the hardware */
    int TSL(int *ptr, int new) {
        int old = *ptr;  /* fetch old value at ptr */
        *ptr = new;      /* store new value into ptr */
        return old;
    }

Entering and leaving the CR:

    typedef struct lock_t {
        int flag;
    } lock_t;

    void init(lock_t *lock) {
        lock->flag = 0;
    }

    void lock(lock_t *lock) {
        while (TSL(&lock->flag, 1) == 1)
            ;  /* spin-wait */
    }

    void unlock(lock_t *lock) {
        lock->flag = 0;
    }

Busy waiting and priority inversion

Problems with the TSL-based approach?
- It wastes CPU by busy waiting.
- It can lead to priority inversion. Two processes, H (high priority) and L (low priority):
  - L gets into its CR.
  - H becomes ready to run and starts busy waiting.
  - L is never scheduled while H is running.
  - So L never leaves its critical region, and H loops forever.
Welcome to Mars!

Problems in the Mars Pathfinder*

Mars Pathfinder launched Dec. 4, 1996, and landed July 4, 1997. Periodically, the system reset itself, losing data. VxWorks provides preemptive priority scheduling.

Pathfinder software architecture:
- An information bus, with access controlled by a lock.
- A bus management (B) thread: high priority.
- A meteorological (M) thread: low priority, short-running.
- If the B thread was scheduled while the M thread was holding the lock, the B thread busy-waited on the lock.
- A communication (C) thread running with medium priority.

(Figure: B (high), C (medium) and M (low) threads sharing the information bus.)

* As explained by D. Wilner, CTO of Wind River Systems, and narrated by Mike Jones.

Problems in the Mars Pathfinder

Sometimes, while B was busy-waiting on M, C was scheduled, so the medium-priority C thread ran while the high-priority B thread was blocked on the low-priority M thread. After a bit of waiting, a watchdog timer would reset the system. How would you fix it? Priority inheritance: the M thread inherits the priority of the B thread blocked on it. This was actually supported by VxWorks, but disabled!

Yield rather than spin

Too much spinning: rather than sit in a tight loop, give up the CPU. Imagine two threads: the first one gets the lock and gets interrupted; the second one wants the lock and has to wait, and wait.

An alternative: just yield.

    void lock(lock_t *lock) {
        while (TSL(&lock->flag, 1) == 1)
            yield();  /* give up the CPU */
    }

Better than spinning, but: what about the context-switching cost? And is there a chance of starvation?

Sleep rather than spin

With yield, too much is left to chance: the scheduler determines who runs next and, if it makes a bad choice, the chosen thread may just yield again immediately or go back to sleep. Let's get some control over who gets to acquire the lock next, with two special calls (in Solaris):
- park(): put the calling thread to sleep.
- unpark(threadid): wake up a particular thread.

If you can't get the lock, go to sleep:

    typedef struct lock_t {
        int flag;
        int guard;
        queue_t *q;   /* an explicit queue to control who gets the lock */
    } lock_t;

    void init(lock_t *m) {
        m->flag = 0;
        m->guard = 0;
        queue_init(m->q);
    }

Sleep rather than spin

    void lock(lock_t *m) {
        while (TSL(&m->guard, 1) == 1)
            ;   /* spin to acquire guard */
        if (m->flag == 0) {
            m->flag = 1;   /* got the lock */
            m->guard = 0;
        } else {
            queue_add(m->q, gettid());
            m->guard = 0;
            park();        /* here is where the thread is when woken up */
        }
    }

    void unlock(lock_t *m) {
        while (TSL(&m->guard, 1) == 1)
            ;
        if (queue_empty(m->q))
            m->flag = 0;
        else
            unpark(queue_remove(m->q));
        m->guard = 0;
    }

Notice the use of guard as a spin lock around the flag and queue manipulation, so we are not quite avoiding spinning. Also notice that unlock does not set flag back to zero when waking up a thread: the woken thread does not set it either; the lock is passed directly to it.

Sleep rather than spin

Just curious: what would happen if a thread parked before releasing guard? (No other thread could acquire guard, so unlock could never run: deadlock.) And isn't there a race condition around park? If the thread about to park is interrupted just before calling park(), and the one holding the lock releases it (calling unpark first), the parking thread will never wake up!

One solution uses a third Solaris system call, setpark(): "I am about to park, so beware." After this call, if the thread is interrupted and another thread calls unpark before park is called, park returns immediately.

    } else {
        queue_add(m->q, gettid());
        setpark();   /* announce the intention to park */
        m->guard = 0;
        park();
    }

Lock-based concurrent data structures

Making data structures thread-safe, i.e., usable by multiple threads. Two concerns: correctness, obviously, and performance.

A non-concurrent counter:

    typedef struct counter_t {
        int value;
    } counter_t;

    void init(counter_t *c) {
        c->value = 0;
    }

    void increment(counter_t *c) {
        c->value++;
    }

    void decrement(counter_t *c) {
        c->value--;
    }

Lock-based concurrent data structures

A concurrent version:

    typedef struct counter_t {
        int value;
        pthread_mutex_t lock;
    } counter_t;

    void init(counter_t *c) {
        c->value = 0;
        pthread_mutex_init(&c->lock, NULL);
    }

    void increment(counter_t *c) {
        pthread_mutex_lock(&c->lock);
        c->value++;
        pthread_mutex_unlock(&c->lock);
    }

    void decrement(counter_t *c) {
        pthread_mutex_lock(&c->lock);
        c->value--;
        pthread_mutex_unlock(&c->lock);
    }

Still not very scalable; for a better option with sloppy counters, see S. Boyd-Wickizer et al., "An Analysis of Linux Scalability to Many Cores", OSDI 2010.

Lock-based concurrent data structures

A first concurrent list (part of it, actually):

    typedef struct node_t {
        int key;
        struct node_t *next;
    } node_t;

    typedef struct list_t {
        node_t *head;
        pthread_mutex_t lock;
    } list_t;

    void List_Init(list_t *L) {
        L->head = NULL;
        pthread_mutex_init(&L->lock, NULL);
    }

    int List_Insert(list_t *L, int key) {
        pthread_mutex_lock(&L->lock);
        node_t *new = malloc(sizeof(node_t));
        if (new == NULL) {
            perror("malloc");
            pthread_mutex_unlock(&L->lock);
            return -1;  /* failure */
        }
        new->key = key;
        new->next = L->head;
        L->head = new;
        pthread_mutex_unlock(&L->lock);
        return 0;  /* success */
    }

    int List_Lookup(list_t *L, int key) {
        pthread_mutex_lock(&L->lock);
        node_t *curr = L->head;
        while (curr) {
            if (curr->key == key) {
                pthread_mutex_unlock(&L->lock);
                return 0;  /* success */
            }
            curr = curr->next;
        }
        pthread_mutex_unlock(&L->lock);
        return -1;  /* failure */
    }

Can we simplify this to avoid releasing the lock on the failure path? This is very coarse locking: what's your critical section?


Lock-based concurrent data structures

And some improvements: lock just the critical section in insert, and use a single return path in lookup.

    int List_Insert(list_t *L, int key) {
        node_t *new = malloc(sizeof(node_t));
        if (new == NULL) {
            perror("malloc");
            return -1;  /* failure; no lock to release */
        }
        new->key = key;
        pthread_mutex_lock(&L->lock);   /* just lock the critical section */
        new->next = L->head;
        L->head = new;
        pthread_mutex_unlock(&L->lock);
        return 0;  /* success */
    }

    int List_Lookup(list_t *L, int key) {
        int rv = -1;
        pthread_mutex_lock(&L->lock);
        node_t *curr = L->head;
        while (curr) {
            if (curr->key == key) {
                rv = 0;
                break;  /* success */
            }
            curr = curr->next;
        }
        pthread_mutex_unlock(&L->lock);
        return rv;  /* a single return path, to avoid potential bugs */
    }

Coming up

Other mechanisms for synchronization:
- Condition variables and semaphores: slightly higher abstractions.
- Monitors: much better, but requiring language support.