Chapter 6: Process Synchronization, Silberschatz, Galvin and Gagne 2009

Module 6: Process Synchronization
  Background
  The Critical-Section Problem
  Peterson's Solution
  Synchronization Hardware
  Semaphores
  Classic Problems of Synchronization
  Synchronization Examples

Objectives
  To introduce the critical-section problem, whose solutions can be used to ensure the consistency of shared data.
  To present both software and hardware solutions to the critical-section problem.

Background
  Concurrent access to shared data may result in data inconsistency (for example, among multiple threads in a single process).
  Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes.

Background -2-
  Disjoint threads use disjoint sets of variables and do not use shared resources. The progress of a thread is independent of all threads disjoint from it.
  Non-disjoint threads influence each other by using shared data and/or resources. Two possibilities:
    Competing threads: compete for access to data and resources.
    Cooperating threads: e.g. the producer/consumer model.
  Without synchronization, the effect of non-disjoint parallel threads influencing each other is neither predictable nor reproducible.

Example -1-
  Suppose that we want a solution to the producer-consumer problem that can fill all the buffers. We can do so by having an integer count that keeps track of the number of full buffers. Initially, count is set to 0. It is incremented by the producer after it fills a buffer and decremented by the consumer after it empties one.

PROCESS SYNCHRONIZATION
  The Producer-Consumer Problem: a producer process "produces" information that is "consumed" by a consumer process. Here are the variables needed to define the problem:

    #define BUFFER_SIZE 10

    typedef struct {
        DATA data;
    } item;

    item buffer[BUFFER_SIZE];
    int in = 0;     // location of next insertion into the buffer
    int out = 0;    // location of next removal from the buffer
    int count = 0;  // number of buffers currently full

Producer / Consumer

  Producer:

    item nextproduced;
    while (true) {
        /* produce an item and put it in nextproduced */
        while (count == BUFFER_SIZE)
            ;   // do nothing: buffer is full
        buffer[in] = nextproduced;
        in = (in + 1) % BUFFER_SIZE;
        count++;
    }

  Consumer:

    item nextconsumed;
    while (true) {
        while (count == 0)
            ;   // do nothing: buffer is empty
        nextconsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        count--;
        /* consume the item in nextconsumed */
    }

Race Condition

  count++ could be implemented as:

    register1 = count
    register1 = register1 + 1
    count = register1

  count-- could be implemented as:

    register2 = count
    register2 = register2 - 1
    count = register2

  Consider this execution interleaving with count = 5 initially:

    S0: producer executes register1 = count          {register1 = 5}
    S1: producer executes register1 = register1 + 1  {register1 = 6}
    S2: consumer executes register2 = count          {register2 = 5}
    S3: consumer executes register2 = register2 - 1  {register2 = 4}
    S4: producer executes count = register1          {count = 6}
    S5: consumer executes count = register2          {count = 4}

  count = 4 after count++ and count--, even though we started with count = 5.
  Easy question: what other values can count take on from doing this incorrectly?
  Obviously, we would like count++ to execute, followed by count-- (or vice versa).
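The lost update described above is easy to reproduce outside the slides. The following is a minimal sketch (not from the slides) using POSIX threads; the iteration count and function names are illustrative.

    /* race.c - a minimal sketch of the count++ / count-- race.
     * Assumes POSIX threads; compile with: cc race.c -pthread */
    #include <pthread.h>
    #include <stdio.h>

    #define ITERATIONS 1000000

    int count = 0;                       /* shared, unprotected */

    void *producer(void *arg) {
        for (int i = 0; i < ITERATIONS; i++)
            count++;                     /* non-atomic read-modify-write */
        return NULL;
    }

    void *consumer(void *arg) {
        for (int i = 0; i < ITERATIONS; i++)
            count--;                     /* races with the producer */
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        /* With correct synchronization this always prints 0; without it,
         * the value varies from run to run because updates are lost. */
        printf("count = %d\n", count);
        return 0;
    }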

Example -2- (figure only; no transcribed content)

Race Conditions
  A race condition occurs when multiple processes or threads concurrently read and write a shared memory location and the result depends on the order of execution.
  The part of the program in which race conditions can occur is called the critical section.

Critical Sections
  A critical section is a piece of code that accesses a shared resource (a data structure or device) that must not be accessed concurrently by more than one thread of execution.
  The goal is to provide a mechanism by which only one instance of a critical section executes at a time for a particular shared resource.
  Unfortunately, critical-section bugs are often very difficult to detect.

Critical Sections -2-
  A critical-section environment contains:
    Entry section: code requesting entry into the critical section.
    Critical section: code in which only one process can execute at any one time.
    Exit section: the end of the critical section, releasing the resource or allowing others in.
    Remainder section: the rest of the code, after the critical section.

    do {
        entry section
            critical section
        exit section
            remainder section
    } while (true);

Solution to Critical-Section Problem
  A solution to the critical-section problem must satisfy the following requirements:
  1. Mutual exclusion: if process Pi is executing in its critical section, then no other process can be executing in its critical section.
  2. Progress: if no process is executing in its critical section and some processes wish to enter their critical sections, then the selection of the process that will enter next cannot be postponed indefinitely.
  3. Bounded waiting: a bound, or limit, must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
  Assume that each process executes at a nonzero speed; no assumption is made concerning the relative speed of the N processes.

Critical-Section Problem: an Example
  Consider a kernel data structure that maintains a list of all open files in the system. If it is accessed simultaneously by two processes, the result is a race condition.
  Two general approaches are used to handle critical sections in operating systems:
    Preemptive kernels, e.g. Linux (must be carefully designed).
    Nonpreemptive kernels, e.g. Windows XP (essentially free from race conditions on kernel data structures).

Peterson's Solution
  A two-process solution that provides a good algorithmic description of solving the critical-section problem.
  The algorithm handles only two processes at a time.
  The processes are P0 and P1; they can also be written as Pi and Pj, where j = 1 - i.

Two-Process Algorithm 1
  Let the processes share a common integer variable turn, initialized to 0 (or 1).

  For process Pi:

    do {
        while (turn != i)
            ;               // wait for your turn
        // critical section
        turn = j;
        // remainder section
    } while (1);

  If turn == i, Pi is allowed to execute in its critical section.
  Guarantees mutual exclusion, but does not guarantee progress: it enforces strict alternation of processes entering their critical sections (if Pj decides not to re-enter, or crashes outside its CS, then Pi can never get in).

Two-Process Algorithm 2
  Shared variables:

    boolean flag[2];    // interest bits, initially flag[0] = flag[1] = false

  flag[i] = true means Pi declares interest in entering its critical section.

  For process Pi (where the other process is Pj):

    do {
        flag[i] = true;     // declare your own interest
        while (flag[j])
            ;               // wait if the other process is interested
        // critical section
        flag[i] = false;    // declare that you lost interest
        // remainder section (allows the other process to enter)
    } while (1);

Two-Process Algorithm 2 (Cont.)
  Satisfies mutual exclusion, but not the progress requirement.
  If flag[i] == flag[j] == true, then deadlock: no progress. Barring this event, a process outside its critical section cannot block you from entering.
  A process can make consecutive re-entries to its CS if the other is not interested.

Two-Process Algorithm 3 (Peterson's)
  Combines the shared variables and approaches of algorithms 1 and 2.

  For process Pi:

    do {
        flag[i] = true;     // declare your interest to enter
        turn = j;           // assume it is the other's turn; give Pj a chance
        while (flag[j] && turn == j)
            ;
        // critical section
        flag[i] = false;
        // remainder section
    } while (1);

Two-Process Algorithm 3 (Cont.)
  Meets all three requirements; solves the critical-section problem for two processes.
  The turn variable breaks any deadlock possibility of the previous algorithm and prevents hogging: Pi setting turn to j gives Pj a chance after each pass through Pi's CS.
  The flag[] variable prevents getting locked out if the other process never re-enters or crashes outside its CS, and it allows consecutive access to the CS when the other is not interested in entering.
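As a complement to the pseudocode above, here is a minimal runnable sketch of Peterson's algorithm for two POSIX threads. It is not from the slides: plain loads and stores are not sufficient on modern out-of-order hardware, so the sketch uses C11 atomics to approximate the sequentially consistent memory model the algorithm assumes; the iteration count and function names are illustrative.

    /* peterson.c - sketch of Peterson's algorithm with two threads.
     * Compile with: cc peterson.c -pthread */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    atomic_bool flag[2];        /* flag[i]: Pi wants to enter          */
    atomic_int  turn;           /* whose turn it is to defer           */
    int shared_counter = 0;     /* protected by the algorithm          */

    void enter_region(int i) {
        int j = 1 - i;
        atomic_store(&flag[i], true);    /* declare interest              */
        atomic_store(&turn, j);          /* give the other side a chance  */
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                            /* busy-wait                     */
    }

    void leave_region(int i) {
        atomic_store(&flag[i], false);   /* lose interest                 */
    }

    void *worker(void *arg) {
        int i = *(int *)arg;
        for (int n = 0; n < 1000000; n++) {
            enter_region(i);
            shared_counter++;            /* critical section */
            leave_region(i);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t0, t1;
        int id0 = 0, id1 = 1;
        pthread_create(&t0, NULL, worker, &id0);
        pthread_create(&t1, NULL, worker, &id1);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        printf("counter = %d (expected 2000000)\n", shared_counter);
        return 0;
    }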

Synchronization Hardware
  Many systems provide hardware support for critical-section code.
  Uniprocessors could simply disable interrupts: the currently running code would execute without preemption. This is generally too inefficient on multiprocessor systems, so operating systems that rely on it are not broadly scalable.
  Modern machines instead provide special atomic (non-interruptible) hardware instructions:
    test a memory word and set its value (TestAndSet()), or
    swap the contents of two memory words (Swap()).
  TestAndSet is hard to program with directly for end users; a lock built on it is sketched below.
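The following is a minimal sketch (not the textbook's TestAndSet() definition) of a spinlock built on an atomic test-and-set primitive, assuming C11's atomic_flag as that primitive.

    /* tas_lock.c - a TestAndSet-style spinlock sketch using C11 atomics. */
    #include <stdatomic.h>

    atomic_flag lock = ATOMIC_FLAG_INIT;    /* clear = unlocked */

    void acquire(void) {
        /* atomic_flag_test_and_set returns the OLD value:
         * spin while the flag was already set (lock held). */
        while (atomic_flag_test_and_set(&lock))
            ;   /* busy-wait */
    }

    void release(void) {
        atomic_flag_clear(&lock);
    }

    /* Usage:
     *   acquire();
     *   ... critical section ...
     *   release();
     */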

Semaphore
  A semaphore is a synchronization tool: a flag used to indicate that a routine cannot proceed if a shared resource is already in use by another routine.
  Semaphore S: an integer variable.
  Two standard operations modify S: wait() and signal(), originally called P() and V().
  Less complicated to use than the hardware-based solutions.
  S can only be accessed via these two indivisible (atomic) operations:

    wait(S) {
        while (S <= 0)
            ;       // no-op
        S--;
    }

    signal(S) {
        S++;
    }
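In practice the textbook's wait() and signal() correspond to the POSIX calls sem_wait() and sem_post(). A minimal sketch of that mapping follows (the wrapper names my_wait/my_signal are illustrative; note that POSIX semaphores block rather than busy-wait, matching the "no busy waiting" implementation described later).

    #include <semaphore.h>

    sem_t S;

    void init(void)      { sem_init(&S, 0, 1); }  /* semaphore S, initial value 1 */
    void my_wait(void)   { sem_wait(&S); }        /* wait(S), a.k.a. P(S) */
    void my_signal(void) { sem_post(&S); }        /* signal(S), a.k.a. V(S) */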

Semaphore as General Synchronization Tool
  Two types:
    Counting semaphore: the integer value can range over an unrestricted domain.
    Binary semaphore: the integer value can range only between 0 and 1; can be simpler to implement. Also known as mutex locks (locks that provide mutual exclusion).
  There are three uses for semaphores:

Semaphores Usage 1
  Binary semaphores can solve the critical-section problem for n processes. The n processes share a semaphore, mutex, initialized to 1. Each process Pi is organized as:

    semaphore mutex;    // initialized to 1

    do {
        wait(mutex);
        // critical section
        signal(mutex);
        // remainder section
    } while (TRUE);

  Only one process at a time is allowed into its critical section (mutual exclusion).

Semaphores Usage 2
  Counting semaphores control access to a resource that has many instances. Simply initialize the semaphore to K (the number of available resources); this allows K processes to use the resource at a time.
  Each process that wants to use a resource performs a wait() operation, decrementing the count of the semaphore.
  Each process that releases a resource performs a signal() operation, incrementing the count of the semaphore.
  When the count is 0, all resources are in use, and any process that wants a resource blocks until the count is greater than 0. A sketch follows.
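A minimal sketch (not from the slides) of a counting semaphore guarding a pool of K identical resources, using POSIX unnamed semaphores; K = 3 and the function names are illustrative.

    #include <semaphore.h>
    #include <stdio.h>

    #define K 3                     /* number of resource instances (assumed) */

    sem_t pool;                     /* counts free resources */

    void use_resource(int id) {
        sem_wait(&pool);            /* blocks when all K instances are in use */
        printf("thread %d: using a resource\n", id);
        /* ... use the resource ... */
        sem_post(&pool);            /* return the resource to the pool */
    }

    int main(void) {
        sem_init(&pool, 0, K);      /* initialize the count to K */
        use_resource(0);
        sem_destroy(&pool);
        return 0;
    }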

Semaphores Usage 3
  To solve synchronization (ordering) problems. Example: two concurrent processes, P1 and P2; statement S1 in P1 must be performed before statement S3 in P2, so we need to make P2 wait until P1 says it is OK to proceed.
  Define a semaphore synch, initialized to 0.

  In P2:
    wait(synch);
    S3;

  In P1:
    S1;
    signal(synch);
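A minimal runnable sketch (not from the slides) of the ordering idiom above, using POSIX threads and semaphores; the printf calls stand in for statements S1 and S3.

    /* order.c - force S1 (in P1) to run before S3 (in P2).
     * Compile with: cc order.c -pthread */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    sem_t synch;                    /* initialized to 0 */

    void *p1(void *arg) {
        printf("S1\n");             /* statement S1 */
        sem_post(&synch);           /* signal(synch): S1 is done */
        return NULL;
    }

    void *p2(void *arg) {
        sem_wait(&synch);           /* wait(synch): block until S1 completes */
        printf("S3\n");             /* statement S3 */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        sem_init(&synch, 0, 0);
        pthread_create(&t2, NULL, p2, NULL);
        pthread_create(&t1, NULL, p1, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        sem_destroy(&synch);
        return 0;
    }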

Semaphore Implementation
  The main disadvantage of this semaphore definition is that it requires busy waiting, which wastes CPU cycles that some other process might be able to use productively.
  This type of semaphore is also called a spinlock because the process spins while waiting for the lock.

Semaphore Implementation with No Busy Waiting
  Each semaphore has an integer value and an associated waiting queue:

    typedef struct {
        int value;
        struct process *list;
    } semaphore;

  Two operations:
    block: place the process invoking the operation on the appropriate waiting queue; the state of the process is switched to waiting, and control is transferred to the CPU scheduler, which selects another process to execute.
    wakeup: remove one of the processes in the waiting queue and place it in the ready queue.

Semaphore Implementation with No Busy Waiting (Cont.)

  Implementation of wait:

    wait(semaphore *S) {
        S->value--;
        if (S->value < 0) {
            add this process to S->list;
            block();
        }
    }

  Implementation of signal:

    signal(semaphore *S) {
        S->value++;
        if (S->value <= 0) {
            remove a process P from S->list;
            wakeup(P);
        }
    }

An equivalent formulation (a queue is used to hold processes waiting on the semaphore):

    struct semaphore {
        int count;
        queuetype queue;
    };

    void wait(semaphore s) {
        s.count--;
        if (s.count < 0) {
            place this process in s.queue;
            block this process;
        }
    }

    void signal(semaphore s) {
        s.count++;
        if (s.count <= 0) {
            remove a process P from s.queue;
            place process P on the ready list;
        }
    }

Semaphore Atomicity
  It is critical that wait() and signal() be atomic: we must guarantee that no two processes can execute wait() and signal() operations on the same semaphore at the same time.
  On uniprocessors we can disable interrupts, but on multiprocessors other mechanisms for atomicity are needed.

Deadlock
  Deadlock: two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes.
  Let S and Q be two semaphores initialized to 1:

    P0                  P1
    ----------          ----------
    wait(S);            wait(Q);
    wait(Q);            wait(S);
      ...                 ...
    signal(S);          signal(Q);
    signal(Q);          signal(S);

  Suppose that P0 executes wait(S) and P1 then executes wait(Q). When P0 executes wait(Q), it must wait until P1 executes signal(Q). Similarly, when P1 executes wait(S), it must wait until P0 executes signal(S). Since these signal() operations can never be executed, P0 and P1 are deadlocked.
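One standard remedy, not prescribed by the slide but consistent with it, is to impose a global ordering on semaphore acquisition: if every process acquires S before Q, the circular wait above cannot arise. A minimal sketch with POSIX semaphores:

    #include <semaphore.h>

    sem_t S, Q;

    /* Every process acquires in the same global order: S first, then Q. */
    void use_both(void) {
        sem_wait(&S);
        sem_wait(&Q);
        /* ... use both resources ... */
        sem_post(&Q);
        sem_post(&S);
    }

    int main(void) {
        sem_init(&S, 0, 1);
        sem_init(&Q, 0, 1);
        use_both();
        return 0;
    }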

Starvation
  Starvation: indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended.

Classical Problems of Synchronization
  Bounded-Buffer Problem: bounded-buffer producer/consumer pairs appear in, e.g., streaming filters or packet switching in networks.
  Readers and Writers Problem: database readers and writers, as in online reservation systems and file systems.
  Dining-Philosophers Problem: the philosophers could be a set of active database transactions with a circular wait-for-lock dependence.
  The Sleeping Barber Problem: often thought of as a client-server relationship.

Bounded-Buffer Problem
  N buffers, each of which can hold one item.
  Semaphore mutex controls access to the buffer region and is initialized to 1.
  Semaphore full counts full buffer slots and is initialized to 0.
  Semaphore empty counts empty buffer slots and is initialized to N.

Bounded-Buffer Problem (Cont.)
  The structure of the producer process:

    do {
        // produce an item in nextp
        wait(empty);        /* decrement the count of empty slots */
        wait(mutex);        /* enter the critical region */
        // add the item to the buffer
        signal(mutex);      /* leave the critical region */
        signal(full);       /* increment the count of full slots */
    } while (TRUE);

Bounded-Buffer Problem (Cont.)
  The structure of the consumer process:

    do {
        wait(full);         /* decrement the count of full slots */
        wait(mutex);        /* enter the critical region */
        // remove an item from the buffer into nextc
        signal(mutex);      /* leave the critical region */
        signal(empty);      /* increment the count of empty slots */
        // consume the item in nextc
    } while (TRUE);
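The two structures above translate almost directly into POSIX semaphores. This is a minimal sketch, not from the slides; the int buffer, the item count of 100, and the names empty_slots/full_slots are illustrative.

    /* bounded_buffer.c - compile with: cc bounded_buffer.c -pthread */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 10

    int buffer[N];
    int in = 0, out = 0;

    sem_t empty_slots;   /* counts empty slots, initialized to N */
    sem_t full_slots;    /* counts full slots,  initialized to 0 */
    sem_t mutex;         /* binary semaphore protecting the buffer */

    void *producer(void *arg) {
        for (int item = 0; item < 100; item++) {
            sem_wait(&empty_slots);        /* wait(empty)   */
            sem_wait(&mutex);              /* wait(mutex)   */
            buffer[in] = item;
            in = (in + 1) % N;
            sem_post(&mutex);              /* signal(mutex) */
            sem_post(&full_slots);         /* signal(full)  */
        }
        return NULL;
    }

    void *consumer(void *arg) {
        for (int i = 0; i < 100; i++) {
            sem_wait(&full_slots);         /* wait(full)    */
            sem_wait(&mutex);              /* wait(mutex)   */
            int item = buffer[out];
            out = (out + 1) % N;
            sem_post(&mutex);              /* signal(mutex) */
            sem_post(&empty_slots);        /* signal(empty) */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        sem_init(&empty_slots, 0, N);
        sem_init(&full_slots, 0, 0);
        sem_init(&mutex, 0, 1);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }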

Classical Problem 2: The Readers-Writers Problem
  Multiple readers or a single writer can use the database at any one time. (Figure: several readers may access the database concurrently; a writer requires exclusive access.)

Readers-Writers Problem
  A data set is shared among a number of concurrent processes.
    Readers only read the data set; they do not perform any updates.
    Writers can both read and write.
  Problem: allow multiple readers to read at the same time, but only a single writer may access the shared data at any time.
  Shared data:
    the data set
    semaphore mutex, initialized to 1
    semaphore wrt, initialized to 1
    integer readcount, initialized to 0

Readers-Writers Problem (Cont.)

    BINARY_SEMAPHORE wrt = 1;
    BINARY_SEMAPHORE mutex = 1;
    int readcount = 0;

  The structure of a writer process:

    do {
        wait(wrt);
        // writing is performed
        signal(wrt);
    } while (TRUE);

  The structure of a reader process:

    do {
        wait(mutex);            /* allow one reader at a time into the entry section */
        readcount++;
        if (readcount == 1)
            wait(wrt);          /* the first reader locks out writers */
        signal(mutex);
        // reading is performed
        wait(mutex);
        readcount--;
        if (readcount == 0)
            signal(wrt);        /* the last reader frees writers */
        signal(mutex);
    } while (TRUE);

Dining-Philosophers Problem
  Shared data:
    a bowl of rice (the data set)
    semaphore chopstick[5], each element initialized to 1

The Structure of Philosopher i

    while (true) {
        wait(chopstick[i]);             // get left chopstick
        wait(chopstick[(i + 1) % 5]);   // get right chopstick
        // eat for a while
        signal(chopstick[i]);           // return left chopstick
        signal(chopstick[(i + 1) % 5]); // return right chopstick
        // think for a while
    }

  If every philosopher picks up the left chopstick and then waits for the right one, a deadlock occurs.

Dining-Philosophers Problem (Cont.)
  Five philosophers with five chopsticks sit around a circular table. Each wants to eat at random times and must pick up the chopsticks on his right and on his left. Clearly deadlock is rampant (and starvation is possible).
  Several solutions are possible:
    Allow only four philosophers to be hungry at a time.
    Allow pickup only if both chopsticks are available (done in a critical section).
    Odd-numbered philosophers always pick up the left chopstick first; even-numbered philosophers always pick up the right chopstick first (sketched below).
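A minimal sketch (not from the slides) of the asymmetric remedy in the last item above, using POSIX semaphores: breaking the symmetry of acquisition order removes the circular wait. The bounded number of rounds and the printf are illustrative.

    /* philosophers.c - compile with: cc philosophers.c -pthread */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 5

    sem_t chopstick[N];      /* each initialized to 1 */

    void philosopher(int i) {
        int left   = i;
        int right  = (i + 1) % N;
        int first  = (i % 2) ? left  : right;   /* odd:  left first  */
        int second = (i % 2) ? right : left;    /* even: right first */

        for (int round = 0; round < 3; round++) {
            sem_wait(&chopstick[first]);
            sem_wait(&chopstick[second]);
            printf("philosopher %d is eating\n", i);   /* eat for a while */
            sem_post(&chopstick[second]);
            sem_post(&chopstick[first]);
            /* think for a while */
        }
    }

    void *run(void *arg) { philosopher(*(int *)arg); return NULL; }

    int main(void) {
        pthread_t t[N];
        int id[N];
        for (int i = 0; i < N; i++) sem_init(&chopstick[i], 0, 1);
        for (int i = 0; i < N; i++) { id[i] = i; pthread_create(&t[i], NULL, run, &id[i]); }
        for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
        return 0;
    }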

The Sleeping Barber Problem
  A barbershop consists of a waiting room with N chairs and a barber room containing the barber chair.
  If there are no customers to be served, the barber goes to sleep.
  If a customer enters the barbershop and all waiting chairs are occupied, the customer leaves the shop.
  If the barber is busy but chairs are available, the customer sits in one of the free chairs.
  If the barber is asleep, the customer wakes the barber up.

The Sleeping Barber Problem (Cont.)
  The following pseudo-code guarantees synchronization between barber and customer and is deadlock free, but may lead to starvation of a customer.

    Semaphore Customers = 0
    Semaphore Barber = 0
    Semaphore accessSeats (mutex) = 1
    int NumberOfFreeSeats = N       // total number of seats

  The Barber (thread/process):

    while (true) {                  // runs in an infinite loop
        wait(Customers)             // tries to acquire a customer - if none is available he goes to sleep
        wait(accessSeats)           // at this point he has been awakened - wants to modify the number of available seats
        NumberOfFreeSeats++         // one chair gets free
        signal(Barber)              // the barber is ready to cut
        signal(accessSeats)         // we don't need the lock on the chairs anymore
        // here the barber is cutting hair
    }

  The Customer (thread/process):

    while (true) {                  // runs in an infinite loop
        wait(accessSeats)           // tries to get access to the chairs
        if (NumberOfFreeSeats > 0) {    // if there are any free seats
            NumberOfFreeSeats--     // sitting down on a chair
            signal(Customers)       // notify the barber, who is waiting, that there is a customer
            signal(accessSeats)     // don't need to lock the chairs anymore
            wait(Barber)            // now it's this customer's turn, but wait if the barber is busy
            // here the customer is having his hair cut
        } else {                    // there are no free seats - tough luck
            signal(accessSeats)     // but don't forget to release the lock on the seats
            // customer leaves without a haircut
        }
    }

Synchronization Examples
  Windows XP
  Linux

Windows XP Synchronization
  Uses interrupt masks to protect access to global resources on uniprocessor systems.
  Uses spinlocks on multiprocessor systems.
  Also provides dispatcher objects, which may act as either mutexes or semaphores.
  Dispatcher objects may also provide events; an event acts much like a condition variable (it may notify a waiting thread when a desired condition occurs).
  A mutex dispatcher object is nonsignaled while it is owned; it becomes signaled when the owner thread releases the mutex lock, and nonsignaled again when a thread acquires the mutex lock.

Linux Synchronization
  Prior to kernel version 2.6, Linux was a nonpreemptive kernel; version 2.6 and later are fully preemptive.
  Linux provides semaphores and spin locks:

    Single processor:       disable kernel preemption / enable kernel preemption
    Multiple processors:    acquire spin lock / release spin lock

End of Chapter 6, Silberschatz, Galvin and Gagne 2009