Operating Systems CMPSC 473
Midterm Review April 15, 2008 - Lecture 1
Instructor: Trent Jaeger
Scope
- Chapter 6 -- Synchronization
- Chapter 7 -- Deadlocks
- Chapter 8 -- Main Memory (Physical)
- Chapter 9 -- Virtual Memory
- Chapter 10 -- File System Interface
- Chapter 11 -- File System Implementation
Synchronization
- Little's Law -- Final Problems
- Synchronization Requirements
- Disabling Interrupts
- Busy-wait/Spinlock Solutions -- related to the requirements
- Hardware-Enabled Solutions
- OS-Supported Solutions
Requirements for Solution
1. Mutual Exclusion - If process Pi is executing in its critical section, then no other process can be executing in its critical section.
2. Progress - If no process is executing in its critical section and some processes wish to enter their critical sections, then the selection of the process that will enter its critical section next cannot be postponed indefinitely.
3. Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Assume that each process executes at a nonzero speed; no assumption is made concerning the relative speed of the N processes.
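These three requirements can be seen together in Peterson's classic two-process busy-wait solution. Below is a sketch (the names `enter_cs`, `exit_cs`, `worker`, and `run_peterson` are mine, not from the slides) that uses C11 sequentially consistent atomics to supply the memory ordering the textbook pseudocode silently assumes:

```c
#include <pthread.h>
#include <stdatomic.h>

static _Atomic int flag[2];   /* flag[i]: thread i wants to enter */
static _Atomic int turn;      /* whose turn it is to defer        */
static long counter;          /* shared data guarded by the lock  */

static void enter_cs(int id) {
    int other = 1 - id;
    atomic_store(&flag[id], 1);     /* I want to enter...            */
    atomic_store(&turn, other);     /* ...but you may go first       */
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;                           /* busy-wait (spin)              */
}

static void exit_cs(int id) {
    atomic_store(&flag[id], 0);
}

static void *worker(void *arg) {
    int id = (int)(long)arg;
    for (int i = 0; i < 100000; i++) {
        enter_cs(id);
        counter++;                  /* critical section */
        exit_cs(id);
    }
    return NULL;
}

/* Runs two contending threads; returns 200000 iff mutual exclusion held. */
long run_peterson(void) {
    pthread_t t0, t1;
    counter = 0;
    pthread_create(&t0, NULL, worker, (void *)0L);
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return counter;
}
```

Setting `turn` to the *other* thread after raising one's own flag is what provides progress and bounded waiting: each thread politely yields the tie-break, so neither can starve the other.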
Synchronization
- Hardware-Enabled Solutions
- OS-Supported Solutions: Mutex, Semaphores, Condition Variables
- Apply these to code
- Classic Synchronization Problems
Example Blocking Implementation

Enter_CS(L) {
  Disable interrupts
  Check if anyone is using L
  If not {
    Set L to being used
  } else {
    Move this PCB to the blocked queue for L
    Select another process to run from the ready queue
    Context switch to that process
  }
  Enable interrupts
}

Exit_CS(L) {
  Disable interrupts
  Check if the blocked queue for L is empty
  If so {
    Set L to free
  } else {
    Move the PCB at the head of L's blocked queue to the ready queue
  }
  Enable interrupts
}

NOTE: These are OS system calls!
Semaphores
You are given a data type Semaphore_t. On a variable of this type, you are allowed:
- P(Semaphore_t) -- wait
- V(Semaphore_t) -- signal
Intuitive functionality: logically, one can visualize the semaphore as having a counter with some initial value. When you do a P(), you decrement the count, and block if the count becomes negative. When you do a V(), you increment the count and wake up one process from the blocked queue if it is not empty.
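As a sketch of that intuition, here is one possible user-level Semaphore_t built from a pthread mutex and condition variable (my own naming; not the kernel implementation the slides describe). It blocks in P() while the count is zero rather than letting the count go negative, which is behaviorally equivalent:

```c
#include <pthread.h>

typedef struct {
    int count;                 /* current semaphore value */
    pthread_mutex_t m;
    pthread_cond_t  c;
} Semaphore_t;

void sem_initialize(Semaphore_t *s, int initial) {
    s->count = initial;
    pthread_mutex_init(&s->m, NULL);
    pthread_cond_init(&s->c, NULL);
}

void P(Semaphore_t *s) {        /* wait */
    pthread_mutex_lock(&s->m);
    while (s->count == 0)       /* block until a V() raises the count */
        pthread_cond_wait(&s->c, &s->m);
    s->count--;
    pthread_mutex_unlock(&s->m);
}

void V(Semaphore_t *s) {        /* signal */
    pthread_mutex_lock(&s->m);
    s->count++;
    pthread_cond_signal(&s->c); /* wake up one blocked process, if any */
    pthread_mutex_unlock(&s->m);
}
```

Note the `while` (not `if`) around `pthread_cond_wait`: a woken thread must re-check the count, because another thread may have grabbed the unit first.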
Deadlocks
- Necessary Conditions
- Resource Allocation Graph
- Deadlock Prevention
- Safe States
- Deadlock Detection: Detection Algorithm
- Recovery
Necessary Conditions for a Deadlock
- Mutual exclusion: A requesting process is delayed until the resource held by another is released.
- Hold and wait: A process must be holding at least 1 resource and must be waiting for 1 or more resources held by others.
- No preemption: Resources cannot be preempted from one process and given to another.
- Circular wait: A set (P0, P1, ..., Pn) of waiting processes must exist such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., and Pn is waiting for a resource held by P0.
Deadlock Prevention Example
5 processes, 3 resource types: A (10 instances), B (5 instances), C (7 instances)

        MaxNeeds    Allocated   StillNeeds
        A  B  C     A  B  C     A  B  C
P0      7  5  3     0  1  0     7  4  3
P1      3  2  2     2  0  0     1  2  2
P2      9  0  2     3  0  2     6  0  0
P3      2  2  2     2  1  1     0  1  1
P4      4  3  3     0  0  2     4  3  1

Free: A = 3, B = 3, C = 2

This state is safe, because there is a reduction sequence <P1, P3, P4, P0, P2> that can satisfy all the requests.
Exercise: Formally go through each of the steps that update these matrices for the reduction sequence.
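The reduction above can be checked mechanically. Here is a sketch of the safety algorithm (array sizes and names such as `is_safe` are mine): repeatedly pick a process whose StillNeeds fits within Work, pretend it finishes and releases its allocation, and record the reduction sequence.

```c
#include <stdbool.h>
#include <string.h>

#define NPROC 5
#define NRES  3

/* Returns true and fills seq[] with a reduction order if the state is safe. */
bool is_safe(int alloc[NPROC][NRES], int need[NPROC][NRES],
             int avail[NRES], int seq[NPROC]) {
    int work[NRES];
    bool done[NPROC] = {false};
    int count = 0;
    memcpy(work, avail, sizeof(work));      /* Work = Free */
    bool progress = true;
    while (progress) {
        progress = false;
        for (int i = 0; i < NPROC; i++) {
            if (done[i]) continue;
            bool fits = true;
            for (int j = 0; j < NRES; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (fits) {                     /* i can run to completion...   */
                for (int j = 0; j < NRES; j++)
                    work[j] += alloc[i][j]; /* ...and returns its resources */
                done[i] = true;
                seq[count++] = i;
                progress = true;
            }
        }
    }
    return count == NPROC;                  /* safe iff everyone finished */
}
```

Running it on the example's matrices yields exactly the slide's sequence <P1, P3, P4, P0, P2>.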
Deadlock Detection Example
5 processes, 3 resource types: A (7 instances), B (2 instances), C (6 instances)

        Allocated   Request
        A  B  C     A  B  C
P0      0  1  0     0  0  0
P1      2  0  0     2  0  2
P2      3  0  3     0  0  0
P3      2  1  1     1  0  0
P4      0  0  2     0  0  2

Free: A = 0, B = 0, C = 0

This state is NOT deadlocked. By applying the detection algorithm, the sequence <P0, P2, P3, P1, P4> will result in Done[i] being TRUE for all processes.
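Detection is the same reduction loop as the safety check, but driven by the *current Request* matrix rather than the worst-case StillNeeds. A sketch (names mine):

```c
#include <stdbool.h>
#include <string.h>

#define NP 5   /* processes      */
#define NR 3   /* resource types */

/* Returns true if the state is deadlocked: some process can never finish. */
bool is_deadlocked(int alloc[NP][NR], int req[NP][NR], int avail[NR]) {
    int work[NR];
    bool done[NP] = {false};
    memcpy(work, avail, sizeof(work));
    bool progress = true;
    while (progress) {
        progress = false;
        for (int i = 0; i < NP; i++) {
            if (done[i]) continue;
            bool fits = true;
            for (int j = 0; j < NR; j++)
                if (req[i][j] > work[j]) { fits = false; break; }
            if (fits) {                       /* i's request can be granted */
                for (int j = 0; j < NR; j++)
                    work[j] += alloc[i][j];   /* reclaim its resources      */
                done[i] = true;
                progress = true;
            }
        }
    }
    for (int i = 0; i < NP; i++)
        if (!done[i]) return true;            /* i is stuck: deadlock */
    return false;
}
```

On the example above it confirms the state is not deadlocked, even though Free is all zeros: P0 and P2 request nothing, so their resources can be reclaimed first.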
Main Memory
- Swapping
- Allocation: Contiguous, Non-contiguous (paging)
- Allocation Algorithms
- Fragmentation: Internal, External
- Page Tables, TLBs: virtual-to-physical translation
- Page table structure, entries
Memory Allocation
[Figure: physical memory with allocated regions (P0, P1, P3), free regions (holes), the OS region, and a queue of waiting requests/jobs]
Question: How do we perform this allocation?
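One common answer is first fit over the list of holes; best fit and worst fit differ only in which hole they pick. A minimal sketch over a toy hole array (the `Hole` type and function name are mine, for illustration):

```c
/* A hole is a free region of physical memory: starting address and length. */
typedef struct { long base; long size; } Hole;

/* First fit: carve the request out of the first hole big enough.
   Returns the allocated base address, or -1 if no hole fits. */
long first_fit(Hole *holes, int nholes, long request) {
    for (int i = 0; i < nholes; i++) {
        if (holes[i].size >= request) {
            long base = holes[i].base;
            holes[i].base += request;   /* shrink the hole from the front */
            holes[i].size -= request;
            return base;
        }
    }
    return -1;  /* no hole fits: external fragmentation or out of memory */
}
```

Leftover slivers that are too small to satisfy any request are exactly the external fragmentation the outline mentions.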
Programs are provided with a virtual address space (say 1 MB). It is the role of the OS to fetch data from either physical memory or disk, done by a mechanism called (demand) paging. Divide the virtual address space into units called virtual pages, each of a fixed size (usually 4K or 8K). For example, a 1 MB virtual address space has 256 4K pages. Divide the physical address space into physical pages or frames; the physical memory may hold far fewer 4K frames than there are virtual pages.
Page Tables
[Figure: a virtual address = (virtual page #, offset in page); the page table maps vp_1 ... vp_n to pp_1 ... pp_n, each entry carrying a present bit; the physical address = (physical page #, same offset in page)]
Example: A 2-level Page Table
A 32-bit virtual address is split into a 10-bit page directory index, a 10-bit page table index, and a 12-bit page offset.
[Figure: the page directory points to second-level page tables, whose entries map the Code, Data, and Stack regions to a 32-bit physical address]
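The two-level walk can be sketched as bit manipulation, assuming the 10/10/12 split above and toy in-memory tables (entry formats are simplified: a directory entry is just a pointer, a table entry just a frame number):

```c
#include <stdint.h>

#define DIR_BITS 10   /* page directory index */
#define TBL_BITS 10   /* page table index     */
#define OFF_BITS 12   /* offset: 4K pages     */

uint32_t dir_index(uint32_t va) { return va >> (TBL_BITS + OFF_BITS); }
uint32_t tbl_index(uint32_t va) { return (va >> OFF_BITS) & ((1u << TBL_BITS) - 1); }
uint32_t page_off (uint32_t va) { return va & ((1u << OFF_BITS) - 1); }

/* Translate a virtual address using toy tables: page_dir[d] is a pointer to a
   second-level table; that table's entry is a physical frame number. */
uint32_t translate(uint32_t va, uint32_t **page_dir) {
    uint32_t *pt    = page_dir[dir_index(va)];  /* level 1: page directory */
    uint32_t frame  = pt[tbl_index(va)];        /* level 2: page table     */
    return (frame << OFF_BITS) | page_off(va);  /* frame + unchanged offset */
}
```

The point of the two levels is that second-level tables for untouched 4 MB regions of the address space need never be allocated at all.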
Virtual Memory
- Page Fault Handling
- Performance Estimations
- Memory Initialization
- Page Replacement Algorithms
- Belady's Anomaly
- Uses of Virtual Memory: COW, Shared Pages, Memory-mapped Files
- Thrashing
Page Fault
The first reference to a page will trap to the operating system: a page fault. The operating system looks at another table to decide:
- Invalid reference -- abort
- Just not in memory:
  1. Get an empty frame
  2. Swap the page into the frame
  3. Reset the tables
  4. Set the valid bit = v
  5. Restart the instruction that caused the page fault
Putting it all together!
VAS before execution:

int A[K];
int B[k];
main() {
  int i, j, p;
  p = malloc(16k);
}

[Figure: the virtual address space before execution -- PC, heap pointer, and stack pointer shown against the page table; all page-table entries initially point to null (i.e., fault on first touch); 4K page size]
Belady's Anomaly
Normally you expect the number of page faults to decrease as you increase physical memory size. However, this may not be the case with certain replacement algorithms.
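FIFO replacement is the standard demonstration. Below is a sketch of a FIFO page-fault simulator (assumes at most 16 frames): on the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5, three frames give 9 faults but four frames give 10.

```c
/* Count page faults for FIFO replacement on a reference string. */
int fifo_faults(const int *refs, int n, int nframes) {
    int frames[16];                 /* assumes nframes <= 16 */
    int next = 0;                   /* index of the oldest resident page */
    int used = 0, faults = 0;
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (hit) continue;          /* already resident: no fault */
        faults++;
        if (used < nframes)
            frames[used++] = refs[i];          /* fill an empty frame */
        else {
            frames[next] = refs[i];            /* evict the oldest page */
            next = (next + 1) % nframes;
        }
    }
    return faults;
}
```

Stack algorithms such as LRU cannot exhibit the anomaly, because the set of pages resident with k frames is always a subset of the set resident with k+1.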
File Systems
- File System Concepts: Files, Directories, File Systems
- Operations and Usage
- Remote File Systems
- File System Implementation: What's on the disk? How's it formatted? What's in memory? How's it represented?
- File System Usage: Get a file, Caching, Free Space, Recovery
File System Mounting
In-memory structures for open() syscall

P1: fd = open("a", ...); read(fd, ...); close(fd);
P2: fd = open("a", ...); read(fd, ...); close(fd);
P3: fd = open("b", ...); write(fd, ...); close(fd);

[Figure: each process's per-process open file descriptor table points into the system-wide open file descriptor table, whose entries point to the i-nodes of "a" and "b" (all in memory)]
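Because each open() creates its own system-wide entry, P1 and P2 read "a" through independent file offsets. A sketch demonstrating this (the helper name and the two-byte scratch file are mine, for illustration):

```c
#include <fcntl.h>
#include <unistd.h>

/* Writes a 2-byte file at `path`, then reads it through two descriptors
   obtained by two separate open() calls. Returns 0 on success. */
int independent_offsets(const char *path, char *c1, char *c2) {
    int w = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (w < 0) return -1;
    write(w, "ab", 2);
    close(w);

    int fd1 = open(path, O_RDONLY);
    int fd2 = open(path, O_RDONLY);   /* separate system-wide entry */
    if (fd1 < 0 || fd2 < 0) return -1;
    read(fd1, c1, 1);                 /* 'a': advances fd1's offset only */
    read(fd1, c1, 1);                 /* 'b': fd1 is now at offset 2     */
    read(fd2, c2, 1);                 /* 'a': fd2's offset is still 0    */
    close(fd1);
    close(fd2);
    return 0;
}
```

Contrast this with dup() or fork(), where two descriptors share one system-wide entry and therefore one offset.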
i-node
[Figure: an i-node holds metadata (filename, time, permissions) plus pointers to data disk blocks; some pointers go through indirect disk blocks, which in turn point to further disk blocks of data]
On a write, should we do write-back or write-through?
- With write-back, you may lose data that was written if the machine goes down before the write-back.
- With write-through, you may be losing performance: you lose the opportunity to perform several writes at a time, and perhaps the write may not even be needed!
DOS uses write-through. In UNIX, writes are buffered and propagated in the background after a delay: every 30 secs there is a sync() call which propagates dirty blocks to disk, usually done in the background. Metadata (directory/i-node) writes are propagated immediately.
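An application that cannot wait for the periodic sync can force write-through for its own data with fsync(). A sketch (the function name is mine):

```c
#include <fcntl.h>
#include <unistd.h>

/* write() only dirties the in-memory buffer cache; fsync() blocks until
   the file's dirty blocks (and its metadata) have reached the disk. */
int durable_write(const char *path, const void *buf, size_t len) {
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) return -1;
    if (write(fd, buf, len) != (ssize_t)len) { close(fd); return -1; }
    int rc = fsync(fd);     /* the data survives a crash from here on */
    close(fd);
    return rc;
}
```

This is exactly the write-back vs. write-through trade-off made per-file: the application pays the synchronous-I/O cost only where durability matters.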
Summary
Need a clear understanding of concepts in:
- Synchronization
- Memory Management
- File Systems
Be able to apply these concepts. Not so much algorithm memorization -- I will provide algorithms in most cases. Some equation memorization.
Next time: Midterm Th, April 17, 6:30-7:45
Location: Deike
Good Luck!