Lecture 9: Virtual Memory
University of Nebraska-Lincoln, Computer Science and Engineering

Summary
1. Computer system overview (Chapter 1)
2. Basics of virtual memory, i.e. segmentation and paging (Chapter 7 and part of Chapter 8)
3. Processes (Chapter 3)
4. Mutual Exclusion and Synchronization (Chapter 5, Sections 1-4): conditions for race avoidance, strict alternation, semaphores, the producers and consumers problem, hardware support for mutual exclusion, monitors.
5. Threading (user mode and kernel mode).
6. Deadlock and Starvation.
7. Memory management.

Virtual Memory
1. Lessons learned from memory management.
(a) All memory references are logical addresses that are dynamically translated into physical addresses at run time. Thus, a process can be swapped in and out of memory and occupy different regions of main memory during the course of execution.
(b) A process may be broken up into pieces, and these pieces need not be contiguously located in main memory during execution.
2. Breakthrough: based on the two preceding characteristics, it is not necessary that all pages or segments of a process be present in main memory during execution. Only the pieces that hold the next instructions and the necessary data need to be in the resident set.
3. If the issued logical address is not in main memory:
The hardware generates an interrupt to indicate that the desired address is not in main memory (an access fault).
The OS puts the interrupted process in a blocked state and takes control.
The OS issues a disk I/O request to bring the needed piece into main memory.
The OS dispatches another process to run.
Once the desired piece has been brought into main memory, an I/O interrupt occurs. The OS takes control and puts the blocked process into a ready state.
4. Implications:
More processes can be maintained in main memory.
The maximum process image (virtual memory) may be larger than all of main memory (real memory).
5. Does virtual memory work?
Small working set. If the OS frequently throws out pieces that are about to be used, thrashing can occur.
Principle of locality: program and data references within a process tend to cluster.
6. Paging: the page table revisited.
From the last lecture, a process's page table contains the mapping of pages to page frames. It is per process and is loaded into memory when the process's pages are loaded. To support virtual memory, additional bits are needed: P (present bit) and M (modified bit). (See Figure 8.2.)
Because page tables can be large, they are also subject to paging. Thus, the goal is to break a page table into smaller chunks, each equal to the page size (e.g. 4 KB).
Multilevel page table (see Figures 8.4 and 8.5).
(a) Load the root page table (aka the page table directory). This table is the first to be loaded and stays in memory throughout the execution of that process.
(b) Use the first n bits (10 in the example) to index the root page table and locate the secondary page table.
(c) Index into the secondary page table and find the page frame.
If the P bit is set, the page is in main memory; done.
If the P bit is clear, the page must be brought into main memory: identify a location in main memory to place the page; if eviction is needed, check the M bit of the evicted page to determine whether it must be written back to disk before replacement (page replacement policies are discussed a little later); bring the new page into this location; done.
Inverted page table: one entry for every page frame. Thus, the table size is proportional to the amount of main memory, not the amount of virtual memory. The entire table stays in memory.
The Translation Lookaside Buffer (TLB) contains the page table entries that have been most recently used (see Figures 8.7 and 8.8).
7. Replacement policy:
Optimal.
Least Recently Used (LRU).
First-in-first-out (FIFO).
Clock (see Figure 8.16).
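The FIFO and LRU policies above can be compared by counting faults over a page reference string. A minimal sketch (the reference string and frame count are illustrative, and cold-start misses are counted as faults):

```python
from collections import OrderedDict, deque

def fifo_faults(refs, nframes):
    """Count page faults with FIFO replacement: evict the oldest-loaded page."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(queue.popleft())   # evict oldest-loaded page
            frames.add(page)
            queue.append(page)
    return faults

def lru_faults(refs, nframes):
    """Count page faults with LRU replacement: evict the least recently used page."""
    frames, faults = OrderedDict(), 0             # insertion order == recency order
    for page in refs:
        if page in frames:
            frames.move_to_end(page)              # mark as most recently used
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)        # evict least recently used
            frames[page] = True
    return faults

refs = [2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2]       # hypothetical reference string
print(fifo_faults(refs, 3), lru_faults(refs, 3))  # prints 9 7
```

With three frames, LRU (7 faults) beats FIFO (9 faults) on this string because FIFO can evict a page that is still in active use simply because it was loaded early.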
8. Other considerations: resident set size, page size, TLB size.
9. Segmentation (see Figure 8.13).

Scheduling Algorithms
1. Decision mode: preemptive or non-preemptive.
2. Policies (see Figure 9.5):
First-come-first-served (FCFS): schedule processes in the order of arrival.
Round Robin (RR): use a time slice to schedule the processes that are ready to run.
Shortest Process Next (SPN): the process with the shortest expected running time is selected next. It is non-preemptive.
Shortest Remaining Time (SRT): a preemptive version of SPN; the process with the shortest remaining time is picked next.
Highest Response Ratio Next (HRRN): R = (w + s) / s, where w = wait time and s = expected service time. The process with the greatest R is picked next.
Feedback: combines preemptive, time-sliced scheduling with dynamic priority; a process loses one level of priority after each run. The lowest queue uses round-robin while the others use FCFS.
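The HRRN selection rule above can be sketched directly from the formula R = (w + s) / s; the ready-queue contents here are hypothetical:

```python
def response_ratio(wait, service):
    """HRRN ratio R = (w + s) / s; equals 1.0 for a job that has not waited yet."""
    return (wait + service) / service

# Hypothetical ready queue: (name, time waited so far, expected service time)
ready = [("A", 6, 3), ("B", 2, 6), ("C", 7, 7)]
best = max(ready, key=lambda job: response_ratio(job[1], job[2]))
print(best[0])   # prints A  (R: A = 3.0, B = 1.33, C = 2.0)
```

Because waiting time w appears in the numerator, a long job's ratio keeps growing while it waits, so HRRN favors short jobs like SPN but cannot starve long ones.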
[Figure 8.2: Typical Memory Management Formats. (a) Paging only: the virtual address is a page number plus offset; a page table entry holds P, M, other control bits, and the frame number. (b) Segmentation only: a segment table entry holds P, M, other control bits, the segment length, and the segment base. (c) Combined segmentation and paging. P = present bit, M = modified bit.]
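The P and M bits of Figure 8.2 are just bit positions in a page table entry. A toy sketch with an assumed 32-bit layout (bit positions and field widths here are illustrative, not any real hardware's format):

```python
# Hypothetical 32-bit PTE layout: bit 31 = P (present), bit 30 = M (modified),
# low 20 bits = frame number. Real layouts are hardware-specific.
P_BIT, M_BIT, FRAME_MASK = 1 << 31, 1 << 30, (1 << 20) - 1

def make_pte(frame, present=True, modified=False):
    return (P_BIT if present else 0) | (M_BIT if modified else 0) | (frame & FRAME_MASK)

def is_present(pte):  return bool(pte & P_BIT)
def is_modified(pte): return bool(pte & M_BIT)
def frame_of(pte):    return pte & FRAME_MASK

pte = make_pte(0x1A2B3, present=True, modified=True)
print(is_present(pte), is_modified(pte), hex(frame_of(pte)))
```

On eviction, the OS would test `is_modified(pte)` to decide whether the frame must be written back to disk before being reused.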
[Figure 8.4: Two-Level Hierarchical Page Table. A 4-KB root page table maps a 4-MB user page table, which in turn maps a 4-GB user address space.]
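The sizes in Figure 8.4 follow from assuming 4-KB pages and 4-byte page table entries; a quick arithmetic check:

```python
PAGE = 4 * 1024          # 4-KB pages
PTE  = 4                 # assume 4-byte page table entries
VAS  = 4 * 1024**3       # 4-GB virtual address space

npages     = VAS // PAGE        # 2**20 virtual pages
user_pt    = npages * PTE       # 4 MB of page table for the full address space
pt_pages   = user_pt // PAGE    # 1024 pages are needed to hold that page table
root_table = pt_pages * PTE     # so the root table is exactly 4 KB, one page
print(npages, user_pt, pt_pages, root_table)
```

The punchline is that the root table fits in a single 4-KB page, which is why it alone can be pinned in memory while the second-level tables are themselves paged.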
[Figure 8.5: Address Translation in a Two-Level Paging System. A 32-bit virtual address splits into 10 bits indexing the root page table (1024 PTEs, located via the root page table pointer), 10 bits indexing a 4-KB second-level page table (1024 PTEs), and a 12-bit offset into the page frame.]
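The 10/10/12 split of Figure 8.5 can be sketched as a two-level lookup. This is a toy model (a dict-of-dicts stands in for the two in-memory tables; a missing entry models a page fault):

```python
def split(vaddr):
    """Split a 32-bit virtual address into (root index, table index, offset)."""
    return (vaddr >> 22) & 0x3FF, (vaddr >> 12) & 0x3FF, vaddr & 0xFFF

def translate(vaddr, root):
    """Walk the two levels; a KeyError models a fault at that level."""
    d, t, off = split(vaddr)
    frame = root[d][t]                 # root table, then second-level table
    return (frame << 12) | off         # frame number plus page offset

root = {1: {2: 0x5A}}                  # toy mapping: a single resident page
vaddr = (1 << 22) | (2 << 12) | 0x34
print(hex(translate(vaddr, root)))     # prints 0x5a034
```

In hardware this walk happens on every TLB miss, which is exactly why the TLB described next is worth having.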
[Figure 8.7: Use of a Translation Lookaside Buffer. On a TLB hit, the frame number comes straight from the TLB and the real address is formed; on a TLB miss, the page table is consulted; if the page is not in main memory, a page fault loads it from secondary memory.]
[Figure 8.8: Operation of Paging and Translation Lookaside Buffer (TLB) [FURH87]. The CPU checks the TLB; if the entry is present, it generates the physical address. On a TLB miss it accesses the page table; if the page is in main memory, the TLB is updated. Otherwise the page fault handling routine runs: the OS instructs the CPU to read the page from disk, the CPU activates the I/O hardware, page replacement is performed first if memory is full, the page is transferred from disk to main memory, the page tables are updated, and control returns to the faulted instruction.]
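The hit/miss path of Figures 8.7 and 8.8 can be modeled in a few lines. A minimal sketch (fully associative with LRU eviction; real TLBs are set-associative hardware, and the capacity here is illustrative):

```python
from collections import OrderedDict

class TLB:
    """Tiny fully-associative TLB model with LRU eviction."""
    def __init__(self, capacity=4):
        self.capacity, self.map = capacity, OrderedDict()
        self.hits = self.misses = 0

    def lookup(self, page, page_table):
        if page in self.map:                 # TLB hit: no page-table walk
            self.hits += 1
            self.map.move_to_end(page)
            return self.map[page]
        self.misses += 1                     # TLB miss: walk the page table
        frame = page_table[page]             # KeyError here would model a page fault
        if len(self.map) == self.capacity:
            self.map.popitem(last=False)     # evict least recently used entry
        self.map[page] = frame
        return frame

pt = {p: p + 100 for p in range(8)}          # toy page table
tlb = TLB(capacity=2)
for p in [0, 1, 0, 2, 0]:
    tlb.lookup(p, pt)
print(tlb.hits, tlb.misses)                  # prints 2 3
```

By the principle of locality, even a tiny TLB catches most references, so the expensive table walk is the rare path.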
[Figure 8.13: Address Translation in a Segmentation/Paging System. The segment number, via the segment table pointer, indexes the segment table to find the per-segment page table; the page number indexes that page table to find the frame; the offset completes the physical address.]
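The combined translation of Figure 8.13 chains the two mechanisms. A toy sketch (the segment table contents and 12-bit offset width are illustrative):

```python
PAGE_BITS = 12   # assumed 4-KB pages

def translate(seg, page, offset, seg_table):
    """Segment number selects a per-segment page table; page number selects a frame."""
    page_table, length = seg_table[seg]      # (page table, segment length in pages)
    assert page < length, "segment length violation"
    frame = page_table[page]
    return (frame << PAGE_BITS) | offset     # frame number plus page offset

seg_table = {0: ({0: 7, 1: 9}, 2)}           # segment 0: two resident pages
print(hex(translate(0, 1, 0xAB, seg_table))) # prints 0x90ab
```

The length check is the part paging alone cannot give you: segmentation keeps the protection boundary, while paging below it keeps placement simple.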
[Figure 8.16: Example of Clock Policy Operation. A circular buffer of n page frames with a next-frame pointer marking the first candidate for replacement. (a) State of the buffer just prior to a page replacement. (b) State of the buffer just after the next page replacement: page 727 has replaced page 556, and the pointer has advanced past it.]
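The pointer sweep in Figure 8.16 is the heart of the clock policy: advance the hand, clearing use bits, until a frame with use bit 0 is found. A minimal sketch (the frame contents are illustrative):

```python
def clock_replace(frames, use_bits, hand):
    """One clock-policy sweep: return (victim frame index, new hand position).
    Frames with use bit 1 get a second chance (bit cleared, hand advances)."""
    n = len(frames)
    while True:
        if use_bits[hand] == 0:
            return hand, (hand + 1) % n   # victim found; hand moves past it
        use_bits[hand] = 0                # second chance: clear and move on
        hand = (hand + 1) % n

frames   = ["page19", "page1", "page45", "page191"]
use_bits = [1, 1, 0, 1]
victim, hand = clock_replace(frames, use_bits, hand=0)
print(victim, hand)   # prints 2 3
```

This approximates LRU at far lower cost: one bit per frame plus a pointer, instead of a full recency ordering.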
Table 9.4: Process Scheduling Example

Process  Arrival Time  Service Time
A        0             3
B        2             6
C        4             4
D        6             5
E        8             2

[Figure 9.5: Comparison of Scheduling Policies. Timing diagrams over t = 0 to 20 for First-Come-First-Served (FCFS), Round-Robin (RR) with q = 1, RR with q = 4, Shortest Process Next (SPN), Shortest Remaining Time (SRT), Highest Response Ratio Next (HRRN), Feedback with q = 1, and Feedback with q = 2^i.]
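The FCFS row of Figure 9.5 can be reproduced from Table 9.4's (arrival, service) pairs, since non-preemptive FCFS just runs jobs to completion in arrival order:

```python
# Non-preemptive FCFS over Table 9.4's processes, listed in arrival order.
jobs = [("A", 0, 3), ("B", 2, 6), ("C", 4, 4), ("D", 6, 5), ("E", 8, 2)]

t, finish = 0, {}
for name, arrival, service in jobs:
    t = max(t, arrival) + service        # wait for arrival if CPU is idle
    finish[name] = t

turnaround = {name: finish[name] - arrival for name, arrival, _ in jobs}
print(finish)       # finish times 3, 9, 13, 18, 20
print(turnaround)   # turnaround  3, 7,  9, 12, 12
```

Note how E, a 2-unit job, waits 10 time units behind earlier long jobs; that penalty for short late arrivals is exactly what SPN, SRT, and HRRN in Figure 9.5 are designed to reduce.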