Operating Systems Spring 2009-2010
Outline Process Synchronisation (contd.) 1 Process Synchronisation (contd.) 2
Announcements Presentations: will be held in the last teaching week during lectures make a 20-minute presentation additionally, provide a publicly available .pdf of the talk 5 teams of 2 topic: smartphone operating systems Symbian Google / Android iPhone BlackBerry Windows Mobile
Solution to Critical-Section Problem Any design solution to the critical section problem must address Mutual Exclusion: If process P i is executing in its critical section, then no other processes can be executing in their critical sections Progress: If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the processes that will enter the critical section next cannot be postponed indefinitely Bounded Waiting: A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted Assume that each process executes at a nonzero speed No assumption concerning relative speed of the N processes
Peterson's Solution Two-process solution Assume that the LOAD and STORE instructions are atomic; that is, cannot be interrupted. This is not reasonable on modern CPUs! The two processes share two variables: int turn; boolean flag[2] The variable turn indicates whose turn it is to enter the critical section The boolean array flag is used to indicate if a process is ready to enter the critical section: flag[i]=true implies that process P i is ready!
Peterson's Solution (contd.) Two processes, P 0 and P 1 ; code below is for process P i ; other process is j = 1 i; Algorithm for Process P i while (true) { flag[i] = true; turn = j; // j = 1-i while ( flag[j] && turn == j) /* no-op */ ; CRITICAL SECTION flag[i] = false; REMAINDER SECTION }
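The loop above can be sketched in C11, using sequentially consistent atomics to approximate the atomic, in-order LOAD and STORE the slide assumes; the thread setup and the names (peterson_demo, worker, counter) are ours, not part of the slides.

```c
// Peterson's algorithm for two threads, sketched with C11 atomics.
// Default (sequentially consistent) atomic operations stand in for the
// uninterruptible LOAD/STORE the slides assume.
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool flag[2];
static atomic_int  turn;
static int counter;              // shared data protected by the lock

static void lock(int i)
{
    int j = 1 - i;
    atomic_store(&flag[i], true);    // I am ready
    atomic_store(&turn, j);          // but you go first
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                            // busy-wait (no-op)
}

static void unlock(int i)
{
    atomic_store(&flag[i], false);
}

static void *worker(void *arg)
{
    int i = *(int *)arg;
    for (int n = 0; n < 100000; n++) {
        lock(i);
        counter++;                   // CRITICAL SECTION
        unlock(i);
    }
    return NULL;
}

// Runs both threads; returns 200000 iff mutual exclusion held throughout.
int peterson_demo(void)
{
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return counter;
}
```

Without the atomics (e.g. with plain variables and compiler/CPU reordering) the same code can fail, which is exactly the caveat the previous slide raises.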
Synchronization Hardware Many systems provide hardware support for critical section code Could disable interrupts ( uniprocessor ) Currently running code would execute without preemption Generally too inefficient on multiprocessor systems: OSs using this are not broadly scalable Modern machines provide special atomic hardware instructions Either test memory word and set value Or swap contents of two memory words
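The "test memory word and set value" instruction is exposed in C11 as atomic_flag_test_and_set(); a spinlock built on it might look like the sketch below. The lock/thread names are ours.

```c
// A spinlock built on the hardware test-and-set primitive
// (C11: atomic_flag_test_and_set). Illustrative sketch only.
#include <pthread.h>
#include <stdatomic.h>

static atomic_flag lock_word = ATOMIC_FLAG_INIT;
static int shared = 0;

static void spin_lock(void)
{
    // Atomically read the old value and set the flag; keep looping while
    // the old value was "set" (someone else holds the lock).
    while (atomic_flag_test_and_set(&lock_word))
        ;   // busy-wait
}

static void spin_unlock(void)
{
    atomic_flag_clear(&lock_word);
}

static void *inc(void *arg)
{
    (void)arg;
    for (int n = 0; n < 100000; n++) {
        spin_lock();
        shared++;                    // critical section
        spin_unlock();
    }
    return NULL;
}

// Two contending threads; returns the final counter value.
int tas_demo(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, inc, NULL);
    pthread_create(&b, NULL, inc, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return shared;
}
```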
Semaphores Synchronization tool that does not require busy waiting Semaphore S, an integer variable Two standard operations modify S: wait() and signal() Originally called P() and V() (from the Dutch) Less complicated Can only be accessed via two indivisible (atomic) operations whose implementations are: wait(S): wait(S) { while (S <= 0) ; // no-op S--; } signal(S): signal(S) { S++; }
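The slide's pseudocode relies on wait() being indivisible; run naively on real threads, the test "while (S <= 0)" and the decrement "S--" can race. One way to sketch an atomic version is with a compare-and-swap loop (it still busy-waits, as on the slide); the names (csem_wait, csem_signal, csem_demo) are ours.

```c
// The slide's wait()/signal() made indivisible with a compare-and-swap
// loop; the naive "while (S <= 0); S--;" would race between the test
// and the decrement.
#include <pthread.h>
#include <stdatomic.h>

static atomic_int S;
static int counter;

static void csem_wait(atomic_int *s)
{
    for (;;) {
        int v = atomic_load(s);
        // Decrement only if the value we saw is still there and positive.
        if (v > 0 && atomic_compare_exchange_weak(s, &v, v - 1))
            return;
    }
}

static void csem_signal(atomic_int *s)
{
    atomic_fetch_add(s, 1);
}

static void *user(void *arg)
{
    (void)arg;
    for (int n = 0; n < 100000; n++) {
        csem_wait(&S);
        counter++;                   // critical section
        csem_signal(&S);
    }
    return NULL;
}

// Binary use of the semaphore (initialized to 1) by two threads.
int csem_demo(void)
{
    atomic_store(&S, 1);
    pthread_t a, b;
    pthread_create(&a, NULL, user, NULL);
    pthread_create(&b, NULL, user, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return counter;
}
```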
Semaphores as a General Synchronization Tool Counting semaphore: integer value which can range over an unrestricted domain (permits more than one simultaneous access) Binary semaphore: integer value can range only between 0 and 1 can be simpler to implement Also known as a mutex lock Can implement a counting semaphore S as a binary semaphore Provides mutual exclusion Semaphore S; // initialized to 1 wait(S); CRITICAL SECTION signal(S);
Semaphore Implementation Must guarantee that no two processes can execute wait() and signal() on the same semaphore at the same time Thus, implementation becomes the critical section problem where the wait and signal code are placed in the critical section Could now have busy waiting in critical section implementation But implementation code is short Little busy waiting if critical section rarely occupied Conversely, applications which spend lots of time in critical sections should not use this implementation
Semaphore Implementation with no Busy Waiting With each semaphore there is an associated waiting queue. Each entry in a waiting queue has two data items: int value pointer to next record in the list Two operations: block: place the process invoking the operation on the appropriate waiting queue wakeup: remove one of processes in the waiting queue and place it in the ready queue
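The block()/wakeup() scheme can be sketched with a pthread mutex and condition variable, letting the condition variable hold the waiting queue. This sketch uses the "value never goes negative" formulation rather than the textbook negative-count one; all names (blocking_sem, bsem_wait, etc.) are ours.

```c
// A semaphore with no busy waiting: waiters sleep on a condition
// variable (block) and signal() wakes one of them (wakeup).
#include <pthread.h>

typedef struct {
    int value;                   // semaphore value (kept non-negative here)
    pthread_mutex_t lock;        // protects value
    pthread_cond_t  waiters;     // stands in for the waiting queue
} blocking_sem;

void bsem_init(blocking_sem *s, int v)
{
    s->value = v;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->waiters, NULL);
}

void bsem_wait(blocking_sem *s)
{
    pthread_mutex_lock(&s->lock);
    while (s->value == 0)
        pthread_cond_wait(&s->waiters, &s->lock);  // block(): sleep, no spin
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

void bsem_signal(blocking_sem *s)
{
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->waiters);  // wakeup(): move one waiter to ready
    pthread_mutex_unlock(&s->lock);
}

static blocking_sem gate;
static int shared_total;

static void *adder(void *arg)
{
    (void)arg;
    for (int n = 0; n < 50000; n++) {
        bsem_wait(&gate);
        shared_total++;              // critical section
        bsem_signal(&gate);
    }
    return NULL;
}

// Two threads sharing the semaphore (initialized to 1) as a mutex.
int bsem_demo(void)
{
    bsem_init(&gate, 1);
    pthread_t a, b;
    pthread_create(&a, NULL, adder, NULL);
    pthread_create(&b, NULL, adder, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return shared_total;
}
```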
Outline Process Synchronisation (contd.) 1 Process Synchronisation (contd.) 2
The Deadlock Problem A set of blocked processes each holding a resource and waiting to acquire a resource held by another process in the set Example: system has 2 disk drives P 1 and P 2 each hold one disk drive and each needs another one Example: semaphores A and B, initialized to 1 P 0 wait(A); wait(B); P 1 wait(B); wait(A);
The Deadlock Problem (contd.) Bridge Crossing Example: Traffic only in one direction Each section of a bridge can be viewed as a resource If a deadlock occurs, it can be resolved if one car backs up (preempt resources and rollback) Several cars may have to be backed up if a deadlock occurs Starvation is possible
System Model Resource types R 1, R 2,..., R m (CPU cycles, memory space, I/O devices) Each resource type R i has W i instances Each process utilizes a resource as follows: request use release
Deadlock Characterization Deadlock can only arise if four conditions hold simultaneously: Mutual exclusion: only one process at a time can use a resource Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes No preemption: a resource can be released only voluntarily by the process holding it, after that process has completed its task Circular wait: there exists a set {P 0, P 1,..., P n } of waiting processes such that P 0 is waiting for a resource that is held by P 1, P 1 is waiting for a resource that is held by P 2,..., P n 1 is waiting for a resource that is held by P n, and P n is waiting for a resource that is held by P 0
Resource-Allocation Graph A set of vertices V and a set of edges E V is partitioned into two types: P = {P 1, P 2,..., P n }, the set consisting of all the processes in the system R = {R 1, R 2,..., R m }, the set consisting of all resource types in the system request edge: a directed edge P i R j assignment edge: a directed edge R k P l
Example of a Resource-Allocation Graph Three processes, P 1, P 2 and P 3 Four resource types Two instances of resource R 2 Three instances of resource R 4 Process P 1 has been allocated one instance of R 2 and has requested an instance of R 1 Process P 2 has been allocated one instance of R 2 and has requested an instance of R 3 etc.
Resource-Allocation Graph with a Deadlock Process P 3 requests an instance of R 2
Deadlock Detection If the graph contains no cycles, then there is no deadlock If the graph contains a cycle: if only one instance per resource type, then deadlock if several instances per resource type, then possibility of deadlock
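For the single-instance case, "cycle" can be checked directly by depth-first search over the graph's request and assignment edges. The sketch below encodes the two-process circular wait from earlier (P 1 holds R 2, wants R 1; etc., reduced here to two processes and two resources); the vertex numbering and function names are ours.

```c
// Cycle detection in a single-instance resource-allocation graph by DFS.
// Vertices 0..3 stand for P1, P2, R1, R2 (our labels); an edge is either
// a request edge (P -> R) or an assignment edge (R -> P).
#include <stdbool.h>

#define V 4

static bool edge[V][V];

void set_edge(int u, int v, bool on) { edge[u][v] = on; }

static bool dfs(int u, bool *visiting, bool *done)
{
    visiting[u] = true;
    for (int v = 0; v < V; v++) {
        if (!edge[u][v]) continue;
        if (visiting[v]) return true;          // back edge => cycle
        if (!done[v] && dfs(v, visiting, done)) return true;
    }
    visiting[u] = false;
    done[u] = true;
    return false;
}

// True iff the graph contains a cycle (== deadlock for single instances).
bool has_cycle(void)
{
    bool visiting[V] = {false}, done[V] = {false};
    for (int u = 0; u < V; u++)
        if (!done[u] && dfs(u, visiting, done))
            return true;
    return false;
}

// Build the classic circular wait: P1 -> R1 -> P2 -> R2 -> P1.
void build_deadlock_graph(void)
{
    set_edge(0, 2, true);   // P1 requests R1
    set_edge(2, 1, true);   // R1 assigned to P2
    set_edge(1, 3, true);   // P2 requests R2
    set_edge(3, 0, true);   // R2 assigned to P1
}
```

Removing any one edge of the cycle (say, the assignment R 2 → P 1) makes the graph acyclic again, matching the slide's "no cycles, no deadlock" rule.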
Resource-Allocation Graph with possibility of deadlock
Methods for Handling Deadlocks Ensure that the system will never enter a deadlock state Allow the system to enter a deadlock state and then recover Ignore the problem and pretend that deadlocks never occur in the system; used by most operating systems, including UNIX
Deadlock Prevention Restrain the ways a request can be made. Mutual Exclusion: not required for sharable resources; must hold for nonsharable resources Hold and Wait: must guarantee that whenever a process requests a resource, it does not hold any other resources Require process to request and be allocated all its resources before it begins execution, or allow process to request resources only when the process has none Low resource utilization; starvation possible.
Deadlock Prevention (contd.) No Preemption: If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources currently being held are released Preempted resources are added to the list of resources for which the process is waiting Process will be restarted only when it can regain its old resources, as well as the new ones that it is requesting Circular Wait: impose a total ordering of all resource types, and require that each process requests resources in an increasing order of enumeration
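The circular-wait rule can be sketched with two pthread mutexes ordered by address: whichever order callers name the locks in, they are always acquired in the same total order, so the earlier wait(A)/wait(B) vs. wait(B)/wait(A) deadlock cannot form. The helper and thread names are ours.

```c
// Circular-wait prevention by total ordering: always acquire a pair of
// mutexes in address order, regardless of the order the caller asks for.
#include <pthread.h>
#include <stdint.h>

static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;
static int transfers = 0;

static void lock_pair(pthread_mutex_t *x, pthread_mutex_t *y)
{
    if ((uintptr_t)x > (uintptr_t)y) {     // enforce the total order
        pthread_mutex_t *t = x; x = y; y = t;
    }
    pthread_mutex_lock(x);
    pthread_mutex_lock(y);
}

static void unlock_pair(pthread_mutex_t *x, pthread_mutex_t *y)
{
    pthread_mutex_unlock(x);
    pthread_mutex_unlock(y);
}

static void *p0(void *arg)                 // asks for A then B
{
    (void)arg;
    for (int n = 0; n < 10000; n++) {
        lock_pair(&A, &B);
        transfers++;
        unlock_pair(&A, &B);
    }
    return NULL;
}

static void *p1(void *arg)                 // asks for B then A; same real order
{
    (void)arg;
    for (int n = 0; n < 10000; n++) {
        lock_pair(&B, &A);
        transfers++;
        unlock_pair(&B, &A);
    }
    return NULL;
}

// Completes (returns 20000) instead of deadlocking.
int ordering_demo(void)
{
    pthread_t t0, t1;
    pthread_create(&t0, NULL, p0, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return transfers;
}
```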
Deadlock Avoidance Requires that the system has some additional a priori information available Simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that there can never be a circular-wait condition Resource-allocation state is defined by the number of available and allocated resources, and the maximum demands of the processes
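A minimal sketch of such a state check, for a single resource type (a stripped-down version of the safety test in Banker's algorithm): the state is safe if the processes can be ordered so that each one's remaining maximum need fits in what is free once the earlier processes finish and release their allocation. The function name and the example numbers are ours.

```c
// Safe-state check for one resource type: repeatedly "finish" any process
// whose remaining need (max - alloc) fits in the currently free amount,
// reclaiming its allocation. Safe iff every process can finish.
#include <stdbool.h>

#define N 3   // number of processes

bool is_safe(int available, const int alloc[N], const int max[N])
{
    bool finished[N] = {false};
    int free_now = available;
    for (int pass = 0; pass < N; pass++) {
        bool progress = false;
        for (int i = 0; i < N; i++) {
            if (!finished[i] && max[i] - alloc[i] <= free_now) {
                free_now += alloc[i];    // process i runs to completion
                finished[i] = true;
                progress = true;
            }
        }
        if (!progress) break;            // no process can proceed
    }
    for (int i = 0; i < N; i++)
        if (!finished[i]) return false;  // some process can never finish
    return true;
}
```

With allocations {5, 2, 2}, maxima {10, 4, 9} and 3 units free, the order P 1, P 0, P 2 finishes everyone, so the state is safe; with only 1 unit free, no process's remaining need fits and the state is unsafe.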