Concurrency Control

1 Race Condition

A race condition, data race, or just race, occurs when more than one thread of control tries to access a shared data item without using proper concurrency control. The code regions accessing the shared data item are collectively known as the critical section for that data item.

We have seen an example of a race when we looked at how i++ can be unsafe in a concurrent environment. The timing issue that we looked at is an example of a data race. Without proper concurrency control, there is potential for state corruption within a set of assembly instructions that we need to be executed as one atomic unit. This is often caused by inconveniently timed interrupts, the malicious scheduler, or the latency between the time of check and the time of use (TOCTOU) in a critical section.

2 Locks

The most basic concurrency control primitive is the mutex lock, usually just called a lock or mutex. A lock is a common synchronization primitive used in concurrent programming. There are two types of locks, the spin lock and the wait lock; a wait lock is also called a blocking lock but is actually a binary semaphore, which we will discuss in the section on semaphores.

Generally, locks are advisory locks, where each thread agrees to voluntarily acquire the lock before accessing the corresponding data. Some systems implement mandatory locks, where attempting to access a locked resource without holding the corresponding lock forces an exception in the thread of control attempting the access. Locks in this class are advisory.

At any one time, a lock can be in one and only one of two states.

Locked. The lock is exclusively held by one thread of control and may not be acquired by another thread of control until the lock is released.

Unlocked. The lock is not held by any thread of control and may be acquired.

2.1 Hardware Support

We require hardware support to build a lock. The reason is the instruction cycle.

Figure 1: CPU Execution Cycle

We need the ability to acquire a lock without being interrupted, since the interrupt may cause the scheduler to run. The only way to accomplish this is with an atomic assembly instruction. The instruction set architects provide the set of assembly instructions, so we need them to create the instruction for our use. This is one example of the close cooperation between OS developers and processor architects.
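To make the i++ example concrete (this sketch is our illustration, not taken from the notes), the program below has two threads increment a shared counter both with a plain ++ and with C11's atomic_fetch_add(); the latter is exactly the kind of atomic read-modify-write instruction discussed next.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static int plain = 0;                    // incremented with a racy ++
    static atomic_int atomic_counter = 0;    // incremented atomically

    static void *worker(void *arg)
    {
        for (int n = 0; n < 1000000; n++) {
            plain++;                               // data race: separate load, add, store
            atomic_fetch_add(&atomic_counter, 1);  // one atomic read-modify-write
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("plain = %d (often less than 2000000), atomic = %d (always 2000000)\n",
               plain, atomic_load(&atomic_counter));
        return 0;
    }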

But wait, how about disabling interrupts? If our system has only a single CPU, then disabling interrupts will control concurrency. However, single CPU systems are rare nowadays. Further, it is difficult, perhaps impossible, for a thread of control to determine the number of CPUs on the machine on which it is running. The problem is that interrupts can only be disabled on a per-CPU basis. While the thread cannot be forced to give up the CPU it is using, other CPUs will continue to run, and threads on those CPUs could access the shared data item. It is safer and more scalable to use atomic instructions instead of disabling interrupts. Always assume a multi-CPU environment.

Disabling interrupts disables all interrupt processing for that CPU. This can be a Very Bad Thing™, since the clock won't be updated, I/O will not be processed, etc. Only disable interrupts if 1) you have no other solution and the code path is small; or 2) you are modifying processor-local state.

There are several common types of atomic instructions that can be used to build a lock.

1. TSL, Test and Set Lock [1]. The test-and-set instruction writes 1 (set) to a memory location and returns its old value as a single atomic operation.

Software-only pseudo code:

    #define LOCKED 1

    int TestAndSet(int *lockptr)
    {
        int oldvalue;

        // -- Start of atomic segment --
        // This should be interpreted as pseudo code for illustrative purposes only.
        // Traditional compilation of this code will not guarantee atomicity, the
        // use of shared memory (i.e., non-cached values), protection from compiler
        // optimizations, or other required properties.
        oldvalue = *lockptr;
        *lockptr = LOCKED;
        // -- End of atomic segment --

        return oldvalue;
    }

Use as a spin lock:

    volatile int lock = 0;

    void Critical()
    {
        while (TestAndSet(&lock) == 1)
            ;   // spin until the lock is acquired

        // critical section -- only one process can be in this section at a time

        lock = 0;   // release lock when finished with the critical section
    }

[1] See Wikipedia
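A modern, portable counterpart (our addition, not part of the notes) is C11's atomic_flag, whose atomic_flag_test_and_set() is a standardized test-and-set; a minimal spin lock built on it:

    #include <stdatomic.h>

    static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

    void spin_lock(void)
    {
        while (atomic_flag_test_and_set(&lock_flag))
            ;   // spin until the previous value was clear (unlocked)
    }

    void spin_unlock(void)
    {
        atomic_flag_clear(&lock_flag);
    }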

2. XCHG, Exchange Register/Memory with Register

Use as a spin lock:

    // The xchg is atomic.
    while (xchg(&lk->locked, 1) != 0)
        ;

xv6 inline assembly using the xchg atomic instruction [a]:

    static inline uint
    xchg(volatile uint *addr, uint newval)
    {
        uint result;

        // The + in "+m" denotes a read-modify-write operand.
        asm volatile("lock; xchgl %0, %1" :
                     "+m" (*addr), "=a" (result) :
                     "1" (newval) :
                     "cc");
        return result;
    }

[a] From spinlock.c

3. CAS, Compare and Swap [2]

Pseudo code:

    int compare_and_swap(int *reg, int oldval, int newval)
    {
        ATOMIC();
        int old_reg_val = *reg;
        if (old_reg_val == oldval)
            *reg = newval;
        END_ATOMIC();
        return old_reg_val;
    }

[2] See Wikipedia
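For comparison (our illustration, not part of the original notes), C11 exposes the same primitive portably through atomic_compare_exchange_strong(); the wrapper name below is just a stand-in for the pseudo code above.

    #include <stdatomic.h>

    // Returns the value actually read from *reg; the swap took place only if
    // that value was equal to oldval.
    int compare_and_swap(atomic_int *reg, int oldval, int newval)
    {
        int expected = oldval;
        // On failure, atomic_compare_exchange_strong() writes the observed
        // value back into "expected".
        atomic_compare_exchange_strong(reg, &expected, newval);
        return expected;
    }

The add() usage example that follows can be built directly on top of this kind of CAS loop.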

Example usage pseudo code:

    function add(int *p, int a)
        done = false
        while not done
            value = *p
            done = cas(p, value, value + a)   // true if the swap succeeded
        return value + a

2.2 Spinning vs Blocking

Advantages:

Spinning. A spin lock is conceptually simple and easy to implement correctly.

Blocking. Usually has better resource utilization.

Disadvantages:

Spinning. Potentially wastes resources such as CPU time.

Blocking. Complexity of the implementation; managing the wait list.

When would you use spinning instead of blocking?

Single CPU System: Never. This is because, on a single CPU system, the spinning thread will use its entire time slice each time that it is scheduled. When the spinning thread is running, the thread holding the lock is delayed unnecessarily from running. If it cannot run, it cannot complete its critical section and release the lock. This implies that spinning increases the delay until the spinning thread will acquire the lock. If the thread instead blocked, the thread holding the lock would be able to finish the critical section and release the lock in a much shorter period of time.

Multi CPU System: When the anticipated time spent spinning is less than the latency of blocking and waking up. This is because the spinning thread can run on one CPU while the thread holding the lock runs on a separate CPU, and the holder can therefore free the lock without being delayed by the spinning thread. In other words, if the time spent spinning will be less than the time spent waiting, use a spin lock. Once the number of threads exceeds the number of CPUs, the decision is more difficult to make because of interrupts and context switches.
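POSIX offers both flavors directly, which makes the trade-off easy to see in code. The sketch below is illustrative only and assumes a system that provides the optional POSIX spin-lock interface (e.g., Linux).

    #include <pthread.h>

    static pthread_spinlock_t spin;                           // initialized with pthread_spin_init()
    static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
    static long counter;

    void spin_version(void)
    {
        pthread_spin_lock(&spin);    // busy-waits, burning CPU until the lock is free
        counter++;
        pthread_spin_unlock(&spin);
    }

    void blocking_version(void)
    {
        pthread_mutex_lock(&mtx);    // blocks the thread, allowing a context switch
        counter++;
        pthread_mutex_unlock(&mtx);
    }

    // Somewhere during startup:
    //     pthread_spin_init(&spin, PTHREAD_PROCESS_PRIVATE);

If the critical section is as short as the increment above and the holder runs on another CPU, spinning wins; if the holder may be descheduled, blocking wins.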

2.3 Lock Evaluation

1. Correctness. Basically, does the lock work, preventing multiple threads from entering a critical section?

2. Fairness. Does each thread contending for the lock get a fair shot at acquiring it once it is free? Another way to look at this is by examining the more extreme case: does any thread contending for the lock starve while doing so, thus never obtaining it?

3. Performance. What are the costs of using a spin lock? To analyze this more carefully, we suggest thinking about a few different cases. In the first, imagine threads competing for the lock on a single processor; in the second, consider threads spread out across many CPUs.

For spin locks, in the single CPU case, performance overheads can be quite painful; imagine the case where the thread holding the lock is preempted within a critical section. The scheduler might then run every other thread (imagine there are N - 1 others), each of which tries to acquire the lock. In this case, each of those threads will spin for the duration of a time slice before giving up the CPU, a waste of CPU cycles.

However, on multiple CPUs, spin locks work reasonably well (if the number of threads roughly equals the number of CPUs). The thinking goes as follows: imagine Thread A on CPU 1 and Thread B on CPU 2, both contending for a lock. If Thread A (CPU 1) grabs the lock, and then Thread B tries to, B will spin (on CPU 2). However, presumably the critical section is short, and thus soon the lock becomes available and is acquired by Thread B. Spinning to wait for a lock held on another processor doesn't waste many cycles in this case, and thus can be effective.

We evaluate in this order. Why?

2.4 Evaluating spin locks

Example code:

    typedef struct lock_t {
        int flag;
    } lock_t;

    void init(lock_t *lock)
    {
        // 0 indicates that the lock is available, 1 that it is held
        lock->flag = 0;
    }

    void lock(lock_t *lock)
    {
        while (TestAndSet(&lock->flag, 1) == 1)
            ;   // spin-wait (do nothing)
    }

    void unlock(lock_t *lock)
    {
        lock->flag = 0;
    }

Evaluation:

Correctness. The answer here is yes: the spin lock only allows a single thread to enter the critical section at a time. Thus, we have a correct lock.

Fairness. How fair is a spin lock to a waiting thread? Can you guarantee that a waiting thread will ever enter the critical section? The answer here, unfortunately, is bad news: spin locks don't provide any fairness guarantees. Indeed, a thread may spin forever under contention. Simple spin locks are not fair and may lead to starvation.

Performance. It makes no sense to evaluate performance until we get fairness.

2.5 xv6 spin and sleep locks

2.5.1 xv6 holding() helper function

Spin lock:

    // Check whether this cpu is holding the lock.
    int
    holding(struct spinlock *lock)
    {
        int r;
        pushcli();
        r = lock->locked && lock->cpu == mycpu();
        popcli();
        return r;
    }

Why does this routine turn off interrupts? Hint: context switch.

Sleep lock:

    int
    holdingsleep(struct sleeplock *lk)
    {
        int r;

        acquire(&lk->lk);
        r = lk->locked;
        release(&lk->lk);
        return r;
    }
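One common remedy for this fairness problem (an aside on our part; it is not what xv6 uses) is a ticket lock: an atomic fetch-and-add hands out tickets, and threads enter in strict FIFO order, so no waiter can be passed over indefinitely. A minimal C11 sketch:

    #include <stdatomic.h>

    typedef struct {
        atomic_uint next_ticket;   // next ticket number to hand out
        atomic_uint now_serving;   // ticket currently allowed into the critical section
    } ticket_lock_t;

    void ticket_lock(ticket_lock_t *l)
    {
        unsigned my_ticket = atomic_fetch_add(&l->next_ticket, 1);
        while (atomic_load(&l->now_serving) != my_ticket)
            ;   // spin until it is our turn
    }

    void ticket_unlock(ticket_lock_t *l)
    {
        atomic_fetch_add(&l->now_serving, 1);   // pass the turn to the next waiter
    }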

2.5.2 xv6 acquire()

Spin lock:

    // Acquire the lock.
    // Loops (spins) until the lock is acquired.
    // Holding a lock for a long time may cause
    // other CPUs to waste time spinning to acquire it.
    void
    acquire(struct spinlock *lk)
    {
        pushcli();   // disable interrupts to avoid deadlock.
        if (holding(lk))
            panic("acquire");

        // The xchg is atomic.
        while (xchg(&lk->locked, 1) != 0)
            ;

        // Tell the C compiler and the processor to not move loads or stores
        // past this point, to ensure that the critical section's memory
        // references happen after the lock is acquired.
        __sync_synchronize();

        // Record info about lock acquisition for debugging.
        lk->cpu = mycpu();
        getcallerpcs(&lk, lk->pcs);
    }

Note: pushcli()/popcli() are like cli (clear interrupts) and sti (set interrupts), except that they are matched: it takes two popcli() to undo two pushcli(). They count. Also, if interrupts are off, then a pushcli()/popcli() pair leaves them off; the interrupt state is preserved.

Sleep lock:

    void
    acquiresleep(struct sleeplock *lk)
    {
        acquire(&lk->lk);
        while (lk->locked)
            sleep(lk, &lk->lk);
        lk->locked = 1;
        lk->pid = myproc()->pid;
        release(&lk->lk);
    }

2.5.3 xv6 release()

Spin lock:

    // Release the lock.
    void
    release(struct spinlock *lk)
    {
        if (!holding(lk))
            panic("release");

        lk->pcs[0] = 0;
        lk->cpu = 0;

        // Tell the C compiler and the processor to not move loads or stores
        // past this point, to ensure that all the stores in the critical
        // section are visible to other cores before the lock is released.
        // Both the C compiler and the hardware may re-order loads and
        // stores; __sync_synchronize() tells them both not to.
        __sync_synchronize();

        // Release the lock, equivalent to lk->locked = 0.
        // This code can't use a C assignment, since it might
        // not be atomic. A real OS would use C atomics here.
        asm volatile("movl $0, %0" : "+m" (lk->locked) : );

        popcli();
    }

Sleep lock:

    void
    releasesleep(struct sleeplock *lk)
    {
        acquire(&lk->lk);
        lk->locked = 0;
        lk->pid = 0;
        wakeup(lk);
        release(&lk->lk);
    }
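To illustrate the matched counting described in the note above, here is a simplified sketch of how pushcli()/popcli() can be implemented; xv6's actual versions in spinlock.c are similar but keep this state per CPU. readeflags(), cli(), sti(), FL_IF, and panic() are the usual xv6 x86 helpers.

    static int ncli;     // nesting depth of pushcli() calls
    static int intena;   // were interrupts enabled before the outermost pushcli()?

    void pushcli(void)
    {
        int eflags = readeflags();
        cli();                          // disable interrupts on this CPU
        if (ncli == 0)
            intena = eflags & FL_IF;    // remember the prior interrupt state
        ncli += 1;
    }

    void popcli(void)
    {
        if (ncli < 1)
            panic("popcli");
        ncli -= 1;
        if (ncli == 0 && intena)
            sti();                      // re-enable only at the outermost popcli()
    }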

2.6 The Dining Philosophers Problem

Figure 2: The Dining Philosophers Problem

The easy solution prevents deadlock, but the right solution also prevents starvation.

Problem statement

Five silent philosophers sit at a round table with bowls of noodles. Forks are placed between each pair of adjacent philosophers. Each philosopher must alternately think and eat. However, a philosopher can only eat noodles when they have both left and right forks. Each fork can be held by only one philosopher, and so a philosopher can use a fork only if it is not being used by another philosopher. After an individual philosopher finishes eating, they need to put down both forks so that the forks become available to others. A philosopher can take the fork on their right or the one on their left as they become available, but cannot start eating before getting both forks. Eating is not limited by the remaining amounts of noodles or stomach space; an infinite supply and an infinite demand are assumed.

The problem is how to design a [...] concurrent algorithm such that no philosopher will starve; i.e., each can forever continue to alternate between eating and thinking, assuming that no philosopher can know when others may want to eat or think.

The problem was designed to illustrate the challenges of avoiding deadlock, a system state in which no progress is possible. To see that a proper solution to this problem is not obvious, consider a proposal in which each philosopher is instructed to behave as follows [3]:

[3] Assume that each step is atomic.

think until the left fork is available; when it is, pick it up;
think until the right fork is available; when it is, pick it up;
when both forks are held, eat for a fixed amount of time;
then, put the right fork down;
then, put the left fork down;
repeat from the beginning.

This attempted solution fails because it allows the system to reach a deadlock state, in which no progress is possible. This is a state in which each philosopher has picked up the fork to the left and is waiting for the fork to the right to become available. With the given instructions, this state can be reached, and when it is reached, the philosophers will eternally wait for each other to release a fork.

Resource starvation might also occur independently of deadlock if a particular philosopher is unable to acquire both forks because of a timing problem. For example, there might be a rule that the philosophers put down a fork after waiting ten minutes for the other fork to become available and wait a further ten minutes before making their next attempt. This scheme eliminates the possibility of deadlock (the system can always advance to a different state) but still suffers from the problem of livelock. If all five philosophers appear in the dining room at exactly the same time and each picks up the left fork at the same time, the philosophers will wait ten minutes until they all put their forks down and then wait a further ten minutes before they all pick them up again [4].

A lock-based solution

Let us suppose that the table has a lock. All philosophers are required to hold the lock before they can attempt a change in the state of their utensils. No deadlock can result, but individual philosophers can be starved.

The left-handed philosopher

Assume all philosophers always attempt to pick up their utensils in this order: right utensil followed by left utensil (right-handed). This is essentially the original problem statement. Now, pick one philosopher at random to be left-handed, meaning that this philosopher will always attempt to pick up the left utensil before the right utensil. No deadlock can result, but individual philosophers can be starved. (A sketch of the closely related ordered-acquisition approach follows this section.)

Deadlock avoided, but what about starvation?

To eliminate starvation, you must guarantee that no philosopher may be blocked in an unbounded manner. For example, suppose you maintain a queue of philosophers. When a philosopher is hungry, they get put onto the tail of the queue. A philosopher may eat only if they are at the head of the queue and the necessary forks are free. When both forks are not free, the philosopher again waits by moving back onto the tail of the queue. This approach solves the deadlock and starvation problems by imposing an ordering on lock acquisition.

[4] See Wikipedia
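The left-handed philosopher is a special case of ordered acquisition: always lock the lower-numbered fork first, so the circular wait required for deadlock cannot form. The sketch below is an illustrative pthread version (ours, not taken from the notes); it avoids deadlock but, as noted above, does not by itself rule out starvation.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define N 5
    static pthread_mutex_t fork_lock[N];

    static void *philosopher(void *arg)
    {
        long id = (long)arg;
        int left = id, right = (id + 1) % N;
        int first  = left < right ? left : right;   // lower-numbered fork first
        int second = left < right ? right : left;

        for (int round = 0; round < 3; round++) {
            // think ...
            pthread_mutex_lock(&fork_lock[first]);
            pthread_mutex_lock(&fork_lock[second]);
            printf("philosopher %ld eating (round %d)\n", id, round);
            usleep(1000);                           // eat
            pthread_mutex_unlock(&fork_lock[second]);
            pthread_mutex_unlock(&fork_lock[first]);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t[N];
        for (int i = 0; i < N; i++)
            pthread_mutex_init(&fork_lock[i], NULL);
        for (long i = 0; i < N; i++)
            pthread_create(&t[i], NULL, philosopher, (void *)i);
        for (int i = 0; i < N; i++)
            pthread_join(t[i], NULL);
        return 0;
    }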

2.7 Pthread Mutexes

int pthread_mutex_lock(pthread_mutex_t *mutex) acquires a lock on the specified mutex variable. If the mutex is already locked by another thread, this call will block the calling thread until the mutex is unlocked.

int pthread_mutex_trylock(pthread_mutex_t *mutex) attempts to lock a mutex, or returns an error code if it is busy. Useful for preventing deadlock conditions. The main question is how long to wait before retrying. The use of a binary exponential backoff [5] algorithm is common.

int pthread_mutex_unlock(pthread_mutex_t *mutex) unlocks a mutex variable. An error is returned if the mutex is already unlocked or is owned by another thread.

[5] See Wikipedia
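As an illustration of the trylock-plus-backoff idea (the retry policy below is our own choice, not something mandated by POSIX), a helper might look like this:

    #include <pthread.h>
    #include <unistd.h>
    #include <errno.h>

    // Try to take the mutex, backing off for exponentially longer periods
    // (capped) between attempts. Returns 0 on success or an error code.
    int lock_with_backoff(pthread_mutex_t *m)
    {
        useconds_t delay = 1;                    // start with a 1 microsecond pause
        for (;;) {
            int rc = pthread_mutex_trylock(m);
            if (rc == 0)
                return 0;                        // acquired
            if (rc != EBUSY)
                return rc;                       // a real error, not contention
            usleep(delay);                       // back off before retrying
            if (delay < 1024)
                delay *= 2;                      // double the pause, up to a cap
        }
    }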

    #include <stdio.h>
    #include <string.h>
    #include <pthread.h>
    #include <stdlib.h>
    #include <unistd.h>

    pthread_t tid[2];
    int counter;
    pthread_mutex_t lock;

    void *trythis(void *arg)
    {
        pthread_mutex_lock(&lock);

        unsigned long i = 0;
        counter += 1;
        printf("\n Job %d has started\n", counter);
        for (i = 0; i < (0xFFFFFFFF); i++)
            ;                               // simulate some work
        printf("\n Job %d has finished\n", counter);

        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        int i = 0;
        int error;

        if (pthread_mutex_init(&lock, NULL) != 0) {
            printf("\n mutex init has failed\n");
            return 1;
        }

        while (i < 2) {
            error = pthread_create(&(tid[i]), NULL, &trythis, NULL);
            if (error != 0)
                printf("\nThread can't be created :[%s]", strerror(error));
            i++;
        }

        pthread_join(tid[0], NULL);
        pthread_join(tid[1], NULL);
        pthread_mutex_destroy(&lock);

        return 0;
    }

Table 1: POSIX Mutex Example

2.8 Review Questions

1. What is a critical section?
2. What is a race condition?
3. How can we protect against race conditions?
4. Can locks be implemented simply by reading and writing to a binary variable in memory? Why or why not?
5. How can a kernel make synchronization-related system calls atomic on a uniprocessor? Why won't this work on a multiprocessor?
6. Why is hardware support necessary for mutual exclusion?
7. Why is it better to block rather than spin on a uniprocessor?
8. Why is it sometimes better to spin rather than block on a multiprocessor? Describe a scenario in which spinning is superior to blocking.
9. How do we evaluate lock implementations?

3 Condition Variables

A condition variable (CV) is used to wait for some arbitrary condition to become true. The vast majority of the time, the condition check will resolve to TRUE (1) or FALSE (0), a binary check of whether the condition has been met. A lock is always associated with a condition variable. Note that CVs do not have the concept of ownership.

A condition variable represents a condition on which a thread can:

Wait until the condition occurs; or
Signal or notify other waiting threads that the condition has occurred.

Note that while these appear to be paired, they are not paired. This is a very useful primitive for asynchronous communication between threads.

Three operations on condition variables:

wait() Block until another thread calls signal() or broadcast() on the CV
signal() Wake up one thread waiting on the CV
broadcast() Wake up all threads waiting on the CV

Table 2 shows what happens when you roll your own signaling without a condition variable: one thread busy-waits on a shared flag while the other changes it, with no synchronization at all.

    #include <stdio.h>
    #include <pthread.h>
    #include <unistd.h>
    #include <stdlib.h>

    volatile int i = 0;

    static void *f1(void *p)
    {
        while (i == 0)
            ;                     // busy-wait on the flag
        printf("i's value has changed to");
        printf(" %d.\n", i);
        return NULL;
    }

    static void *f2(void *p)
    {
        sleep(60);
        i = 99;                   // wrong way to signal
        printf("t2 has changed the value of");
        printf(" i to %d.\n", i);
        return NULL;
    }

    int main()
    {
        int rc;
        pthread_t t1, t2;

        rc = pthread_create(&t1, NULL, f1, NULL);
        if (rc != 0) {
            fprintf(stderr, "pthread f1 failed\n");
            return EXIT_FAILURE;
        }

        rc = pthread_create(&t2, NULL, f2, NULL);
        if (rc != 0) {
            fprintf(stderr, "pthread f2 failed\n");
            return EXIT_FAILURE;
        }

        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        puts("All pthreads finished.");
        return 0;
    }

Table 2: Rolling Your Own, Broken Edition
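A corrected sketch of Table 2 (ours, using the pthread primitives described in the next section) replaces the busy wait with a mutex and condition variable; the waiter sleeps until it is signaled, and the shared flag is only touched while the lock is held.

    #include <stdio.h>
    #include <pthread.h>
    #include <unistd.h>

    static int i = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t changed = PTHREAD_COND_INITIALIZER;

    static void *f1(void *p)
    {
        pthread_mutex_lock(&lock);
        while (i == 0)
            pthread_cond_wait(&changed, &lock);   // sleeps instead of spinning
        printf("i's value has changed to %d.\n", i);
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    static void *f2(void *p)
    {
        sleep(1);
        pthread_mutex_lock(&lock);
        i = 99;
        pthread_cond_signal(&changed);            // the right way to signal
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, f1, NULL);
        pthread_create(&t2, NULL, f2, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }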

3.1 Pthread Condition Variables

With pthreads, a condition variable is of type pthread_cond_t. Four operations are supported:

pthread_cond_init() to initialize. Different ways exist; see the man page.
pthread_cond_wait(&thecv, &somelock) to wait on the condition variable. Note that the lock is required.
pthread_cond_signal(&thecv) to signal the condition variable.
pthread_cond_broadcast(&thecv) to broadcast a signal to all threads waiting on the condition variable.

    /* globals */
    pthread_mutex_t mylock;
    pthread_cond_t mycv;
    int counter = 0;

    /* Thread A */
    pthread_mutex_lock(&mylock);
    while (counter < 10)
        pthread_cond_wait(&mycv, &mylock);
    pthread_mutex_unlock(&mylock);

    /* Thread B */
    pthread_mutex_lock(&mylock);
    counter++;
    if (counter >= 10)
        pthread_cond_signal(&mycv);
    pthread_mutex_unlock(&mylock);

Table 3: Using a pthread condition variable
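To show broadcast() as well (an illustrative sketch of ours, not from the notes), the fragment below releases every waiter once a shared flag is set; each waiter re-tests the predicate after waking, just as Thread A does above.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t mylock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t mycv = PTHREAD_COND_INITIALIZER;
    static int ready = 0;

    static void *waiter(void *arg)
    {
        pthread_mutex_lock(&mylock);
        while (!ready)                         // always re-test the condition
            pthread_cond_wait(&mycv, &mylock);
        pthread_mutex_unlock(&mylock);
        printf("waiter %ld released\n", (long)arg);
        return NULL;
    }

    int main(void)
    {
        pthread_t t[3];
        for (long i = 0; i < 3; i++)
            pthread_create(&t[i], NULL, waiter, (void *)i);

        pthread_mutex_lock(&mylock);
        ready = 1;
        pthread_cond_broadcast(&mycv);         // wake every thread waiting on mycv
        pthread_mutex_unlock(&mylock);

        for (int i = 0; i < 3; i++)
            pthread_join(t[i], NULL);
        return 0;
    }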

Glossary

atomic Executing as a single unit or block of computation. An atomic section of code is said to have transactional semantics. No intermediate state for the code unit is visible outside of the atomic transaction.

atomic transaction An atomic transaction is an indivisible and irreducible series of operations such that either all occur, or nothing occurs.

binary semaphore A semaphore restricted to the values 0 and 1; used to implement locks. See semaphore.

busy waiting Busy-waiting, busy-looping, or spinning is a technique in which a process repeatedly checks to see if a condition is true, such as whether keyboard input or a lock is available [6].

concurrency The ability of different parts or units of a program, algorithm, or problem to be executed out-of-order or in partial order, without affecting the final outcome.

concurrency control primitive The basic concurrency control primitives in this class are the lock, condition variable, and semaphore. Concurrency control primitives are used to synchronize operations among multiple threads of control.

concurrent Two or more operations are said to be concurrent if they can occur at the same time or appear to occur at the same time. The context switch plays a large role in concurrency.

condition variable (CV) Condition variables allow threads to synchronize based upon the actual value of data. A condition variable is always used in conjunction with a mutex lock [7]. A condition variable represents some condition that a thread can:

Wait on, until the condition occurs; or
Notify other waiting threads that the condition has occurred.

A very useful primitive for signaling between threads. A condition variable indicates an event; you cannot store or retrieve a value from a CV. There are three operations on condition variables:

wait() Block until another thread calls signal() or broadcast() on the CV
signal() Wake up one thread waiting on the CV
broadcast() Wake up all threads waiting on the CV

Compare with semaphore.

context switch The process of storing the state of a process or of a thread, so that it can be restored and execution resumed from the same point later. This allows multiple processes to share a single CPU, and is an essential feature of a multitasking operating system.

[6] See Wikipedia
[7] See this LLNL Tutorial

The precise meaning of the phrase context switch varies significantly in usage. In a multitasking context, it refers to the process of storing the system state for one task, so that task can be paused and another task resumed. A context switch can also occur as the result of an interrupt, such as when a task needs to access disk storage, freeing up CPU time for other tasks. Some operating systems also require a context switch to move between user mode and kernel mode tasks. The process of context switching can have a negative impact on system performance, although the size of this effect depends on the nature of the switch being performed.

critical section A critical section is a piece of code that accesses a shared variable (or more generally, a shared resource) and must not be concurrently executed by more than one thread.

deadlock A state in which each member of a group is waiting for another member, including itself, to take action, such as sending a message or, more commonly, releasing a lock. Deadlock is a common problem in multiprocessing systems, parallel computing, and distributed systems, where software and hardware locks are used to arbitrate shared resources and implement process synchronization. In an operating system, a deadlock occurs when a process or thread enters a waiting state because a requested system resource is held by another waiting process, which in turn is waiting for another resource held by another waiting process. If a process is unable to change its state indefinitely because the resources requested by it are being used by another waiting process, then the system is said to be in a deadlock.

deterministic An algorithm which, given a particular input, will always produce the same output, with the underlying machine always passing through the same sequence of states.

exception In general, an exception breaks the normal flow of execution and executes a preregistered exception handler. The details of how this is done depend on whether it is a hardware or software exception and how the software exception is implemented. Some exceptions, especially hardware ones, may be handled so gracefully that execution can resume where it was interrupted [8].

general semaphore A semaphore whose value may range over an arbitrary non-negative count; also called a counting semaphore. Compare binary semaphore. See semaphore.

invariant Invariants are properties of data structures that are maintained across operations. Typically, the correct behavior of an operation depends on the invariants being true when the operation begins. The operation may temporarily violate the invariants but must reestablish them before finishing.

lock A lock or mutex (from mutual exclusion) is a synchronization mechanism for enforcing limits on access to a resource in an environment where there are many threads of execution. A lock is designed to enforce a mutual exclusion concurrency control policy [9]. See also reentrant mutex.

[8] See Wikipedia
[9] See Wikipedia

mutex A mutex provides multiple threads with access to a shared resource such that a second thread that needs to acquire a mutex already acquired by another thread has to wait until the first thread releases the mutex. Care should be taken to ensure that a thread does not attempt to acquire a mutex that it already holds, as this can result in a deadlock. See also lock.

mutex lock See mutex.

mutual exclusion A property of concurrency control, which is instituted for the purpose of preventing race conditions; it is the requirement that one thread of execution never enters its critical section at the same time that another concurrent thread of execution enters its own critical section.

nondeterministic In computer science, a nondeterministic algorithm is an algorithm that, even for the same input, can exhibit different behaviors on different runs, as opposed to a deterministic algorithm. There are several ways an algorithm may behave differently from run to run. An improperly constructed concurrent algorithm can perform differently on different runs due to a race condition.

parallel Occurring at the same time.

race When multiple threads of execution enter the critical section at roughly the same time; both attempt to update the shared data structure, leading to a surprising (and perhaps undesirable) outcome. See also race condition.

race condition A race condition occurs when two or more threads can access shared data and they try to change it at the same time. Because the thread scheduling algorithm can swap between threads at any time, you don't know the order in which the threads will attempt to access the shared data. Therefore, the result of the change in data is dependent on the thread scheduling algorithm, i.e., both threads are racing to access/change the data [10].

reentrant mutex A reentrant mutex, also called a recursive mutex or recursive lock, is a type of mutual exclusion (mutex) device that may be locked multiple times by the same process or thread without causing deadlock. While any attempt to perform the lock operation on an ordinary mutex (lock) would either fail or block when the mutex is already locked, on a recursive mutex this operation will succeed if and only if the locking thread is the one that already holds the lock, or if the lock is not held by any thread. Typically, a recursive mutex tracks the number of times it has been locked, and requires equally many unlock operations to be performed before other threads may lock it [11].

semaphore A semaphore is a variable or abstract data type used to control access to a common resource by multiple processes in a concurrent system such as a multitasking operating system. A semaphore is simply a variable. This variable is used to solve critical section problems and to achieve process synchronization in the multiprocessing environment. A trivial semaphore is a plain variable that is changed (for example, incremented or decremented, or toggled) depending on programmer-defined conditions.

A useful way to think of a semaphore as used in a real-world system is as a record of how many units of a particular resource are available, coupled with operations to adjust that record safely (i.e., to avoid race conditions) as units are acquired or become free, and, if necessary, to wait until a unit of the resource becomes available. Semaphores are a useful tool in the prevention of race conditions; however, their use is by no means a guarantee that a program is free from these problems. Semaphores which allow an arbitrary resource count are called counting semaphores, while semaphores which are restricted to the values 0 and 1 (or locked/unlocked, unavailable/available) are called binary semaphores and are used to implement locks [12].

A semaphore is a shared counter with two operations:

P() or wait() or down(). From Dutch proberen, meaning to test. Decrement the semaphore value. If Sem.val < 0, enter the wait list.

V() or signal() or up(). From Dutch verhogen, meaning to increase. Atomically increment the semaphore value and wake a thread if necessary. If ++Sem.val <= 0, wake a thread.

semaphores Plural of semaphore.

serialization Executing operations one at a time, in some sequential order, rather than concurrently; mutual exclusion serializes access to a critical section.

sleep lock Another name for a binary semaphore, which can logically be considered a simple lock with a sleep queue. In Linux, this type of lock is called a mutex. Note that, in this class, we instead consider mutex as a synonym for lock.

spin lock A lock which causes a thread trying to acquire it to simply wait in a loop ("spin") while repeatedly checking if the lock is available. Since the thread remains active but is not performing a useful task, the use of such a lock is a kind of busy waiting. Once acquired, spin locks will usually be held until they are explicitly released [13]. Uniprocessor architectures have the option of using uninterruptible sequences of instructions, using special instructions or instruction prefixes to disable interrupts temporarily, but this technique does not work for multiprocessor shared-memory machines. Proper support for locks in a multiprocessor environment can require quite complex hardware or software support, with substantial synchronization issues.

starvation A process is perpetually denied necessary resources to process its work. Starvation may be caused by errors in a scheduling or mutual exclusion algorithm, but can also be caused by resource leaks, and can be intentionally caused via a denial-of-service attack such as a fork bomb.

synchronization primitive See concurrency control primitive.

[10] See this thread from Stack Overflow
[11] See Wikipedia
[12] See Wikipedia
[13] See Wikipedia

time of check to time of use In software development, time of check to time of use (TOCTOU, TOCTTOU, or TOC/TOU) is a class of software bugs caused by changes in a system between the checking of a condition (such as a security credential) and the use of the results of that check [14]. This is one example of a race condition.

TOCTOU See time of check to time of use.

TOCTTOU See time of check to time of use.

wait lock See sleep lock.

[14] See Wikipedia


More information

Chapter 6: Process Synchronization

Chapter 6: Process Synchronization Chapter 6: Process Synchronization Objectives Introduce Concept of Critical-Section Problem Hardware and Software Solutions of Critical-Section Problem Concept of Atomic Transaction Operating Systems CS

More information

Concurrency: a crash course

Concurrency: a crash course Chair of Software Engineering Carlo A. Furia, Marco Piccioni, Bertrand Meyer Concurrency: a crash course Concurrent computing Applications designed as a collection of computational units that may execute

More information

Threads and Synchronization. Kevin Webb Swarthmore College February 15, 2018

Threads and Synchronization. Kevin Webb Swarthmore College February 15, 2018 Threads and Synchronization Kevin Webb Swarthmore College February 15, 2018 Today s Goals Extend processes to allow for multiple execution contexts (threads) Benefits and challenges of concurrency Race

More information

Chapter 5: Process Synchronization

Chapter 5: Process Synchronization Chapter 5: Process Synchronization Silberschatz, Galvin and Gagne 2013 Operating System Concepts 9th Edition Chapter 5: Process Synchronization Background The Critical-Section Problem Peterson s Solution

More information

Chapter 5: Process Synchronization. Operating System Concepts 9 th Edition

Chapter 5: Process Synchronization. Operating System Concepts 9 th Edition Chapter 5: Process Synchronization Silberschatz, Galvin and Gagne 2013 Chapter 5: Process Synchronization Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Mutex Locks

More information

Operating systems. Lecture 12

Operating systems. Lecture 12 Operating systems. Lecture 12 Michał Goliński 2018-12-18 Introduction Recall Critical section problem Peterson s algorithm Synchronization primitives Mutexes Semaphores Plan for today Classical problems

More information

CS370 Operating Systems

CS370 Operating Systems CS370 Operating Systems Colorado State University Yashwant K Malaiya Fall 2017 Lecture 11 Slides based on Text by Silberschatz, Galvin, Gagne Various sources 1 1 FAQ Multilevel Feedback Queue: Q0, Q1,

More information

10/17/ Gribble, Lazowska, Levy, Zahorjan 2. 10/17/ Gribble, Lazowska, Levy, Zahorjan 4

10/17/ Gribble, Lazowska, Levy, Zahorjan 2. 10/17/ Gribble, Lazowska, Levy, Zahorjan 4 Temporal relations CSE 451: Operating Systems Autumn 2010 Module 7 Synchronization Instructions executed by a single thread are totally ordered A < B < C < Absent synchronization, instructions executed

More information

Programming Languages

Programming Languages Programming Languages Tevfik Koşar Lecture - XXVI April 27 th, 2006 1 Roadmap Shared Memory Synchronization Spin Locks Barriers Semaphores Monitors 2 1 Memory Architectures Distributed Memory Shared Memory

More information

Processes Prof. James L. Frankel Harvard University. Version of 6:16 PM 10-Feb-2017 Copyright 2017, 2015 James L. Frankel. All rights reserved.

Processes Prof. James L. Frankel Harvard University. Version of 6:16 PM 10-Feb-2017 Copyright 2017, 2015 James L. Frankel. All rights reserved. Processes Prof. James L. Frankel Harvard University Version of 6:16 PM 10-Feb-2017 Copyright 2017, 2015 James L. Frankel. All rights reserved. Process Model Each process consists of a sequential program

More information

CS 333 Introduction to Operating Systems Class 4 Concurrent Programming and Synchronization Primitives

CS 333 Introduction to Operating Systems Class 4 Concurrent Programming and Synchronization Primitives CS 333 Introduction to Operating Systems Class 4 Concurrent Programming and Synchronization Primitives Jonathan Walpole Computer Science Portland State University 1 What does a typical thread API look

More information

Process Synchronization(2)

Process Synchronization(2) EECS 3221.3 Operating System Fundamentals No.6 Process Synchronization(2) Prof. Hui Jiang Dept of Electrical Engineering and Computer Science, York University Semaphores Problems with the software solutions.

More information

Semaphore. Originally called P() and V() wait (S) { while S <= 0 ; // no-op S--; } signal (S) { S++; }

Semaphore. Originally called P() and V() wait (S) { while S <= 0 ; // no-op S--; } signal (S) { S++; } Semaphore Semaphore S integer variable Two standard operations modify S: wait() and signal() Originally called P() and V() Can only be accessed via two indivisible (atomic) operations wait (S) { while

More information

Solving the Producer Consumer Problem with PThreads

Solving the Producer Consumer Problem with PThreads Solving the Producer Consumer Problem with PThreads Michael Jantz Dr. Prasad Kulkarni Dr. Douglas Niehaus EECS 678 Pthreads: Producer-Consumer 1 Introduction This lab is an extension of last week's lab.

More information

IV. Process Synchronisation

IV. Process Synchronisation IV. Process Synchronisation Operating Systems Stefan Klinger Database & Information Systems Group University of Konstanz Summer Term 2009 Background Multiprogramming Multiple processes are executed asynchronously.

More information

Background. The Critical-Section Problem Synchronisation Hardware Inefficient Spinning Semaphores Semaphore Examples Scheduling.

Background. The Critical-Section Problem Synchronisation Hardware Inefficient Spinning Semaphores Semaphore Examples Scheduling. Background The Critical-Section Problem Background Race Conditions Solution Criteria to Critical-Section Problem Peterson s (Software) Solution Concurrent access to shared data may result in data inconsistency

More information