Memory Management. Jinkyu Jeong Computer Systems Laboratory Sungkyunkwan University


Memory Management Jinkyu Jeong (jinkyu@skku.edu) Computer Systems Laboratory Sungkyunkwan University http://csl.skku.edu

Topics
- Why is memory management difficult?
- Old memory management techniques:
  - Fixed partitions
  - Variable partitions
- Introduction to virtual memory

Memory Management (1)
Goals:
- To provide a convenient abstraction for programming.
- To allocate scarce memory resources among competing processes, maximizing performance with minimal overhead.
- To provide isolation between processes.

Single/Batch Programming
- An OS with one user process.
- Programs use physical addresses directly.
- OS loads a job, runs it, and unloads it.
[Figure: three memory layouts from address 0 to 0xFFFF: the OS in RAM below the user program, the OS in ROM above it, or device drivers in ROM above the user program with the OS in RAM below.]

Multiprogramming (1)
Example:

    #include <stdio.h>
    int n = 0;
    int main() {
        printf("&n = 0x%08x\n", &n);
    }

    % ./a.out
    &n = 0x08049508
    % ./a.out
    &n = 0x08049508

What happens if two users simultaneously run this application?

Multiprogramming (2)
Multiprogramming:
- Need multiple processes in memory at once, to overlap the I/O and CPU use of multiple jobs.
- Each process requires a variable-sized, contiguous space.
Requirements:
- Protection: restrict which addresses processes can use.
- Fast translation: memory lookups must be fast, in spite of the protection scheme.
- Fast context switching: updating the memory hardware (for protection and translation) should be quick.

Fixed Partitions (1)
[Figure: physical memory divided into fixed partitions (Partitions 0-4 at 0x1000 intervals above the OS). The virtual address 0x0362 is added to the base register value 0x2000 to form the physical address 0x2362.]

Fixed Partitions (2)
Physical memory is broken up into fixed partitions:
- The size of each partition is the same and fixed.
- The number of partitions = the degree of multiprogramming.
- Hardware requirement: a base register.
  - Physical address = virtual address + base register
  - The base register is loaded by the OS when it switches to a process.
Advantages:
- Easy to implement; fast context switch.
Problems:
- Internal fragmentation: memory in a partition not used by its process is not available to other processes.
- Partition size: one size does not fit all (fragmentation vs. fitting large programs).

Fixed Partitions (3)
Improvement: partition sizes need not be equal.
Allocation strategies:
- Maintain a separate queue for each partition size.
- Maintain a single queue and allocate the closest job whose size fits in an empty partition (first fit).
- Search the whole input queue and pick the largest job that fits in an empty partition (best fit).
Example: IBM OS/MFT (Multiprogramming with a Fixed number of Tasks).
[Figure: unequal partitions (Partitions 0, 1, 2, 4) above the OS at 0x1000, 0x2000, 0x4000, and 0x8000.]

Variable Partitions (1)
[Figure: translation with base and limit registers. The virtual address is compared against the limit register; if it is smaller, it is added to the base register to form the physical address; otherwise a protection fault is raised.]

Variable Partitions (2)
Physical memory is broken up into variable-sized partitions:
- Example: IBM OS/MVT.
- Hardware requirements: a base register and a limit register (see the C sketch below).
  - Physical address = virtual address + base register
  - The base register is loaded by the OS when it switches to a process.
  - The limit register provides protection: if (physical address > base + limit), raise a protection fault.
Allocation strategies:
- First fit: allocate the first hole that is big enough.
- Best fit: allocate the smallest hole that is big enough.
- Worst fit: allocate the largest hole.
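
The base-and-limit check can be sketched in a few lines of C. This is only an illustrative model of what the hardware does on every reference, not real MMU code; the struct and function names are made up for the example. With base 0x2000, the virtual address 0x0362 translates to 0x2362, matching the earlier fixed-partition figure.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical per-process relocation state, loaded by the OS
     * on a context switch. */
    struct partition {
        uint32_t base;   /* start of the partition in physical memory */
        uint32_t limit;  /* size of the partition in bytes */
    };

    /* Translate a virtual address: if the offset is within the limit,
     * add the base; otherwise the access is a protection fault. */
    uint32_t translate(const struct partition *p, uint32_t vaddr)
    {
        if (vaddr >= p->limit) {
            fprintf(stderr, "protection fault at 0x%08x\n", vaddr);
            exit(1);
        }
        return p->base + vaddr;
    }

    int main(void)
    {
        struct partition p1 = { .base = 0x2000, .limit = 0x1000 };
        printf("0x%04x -> 0x%04x\n", 0x0362, translate(&p1, 0x0362));
        return 0;
    }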

Variable Partitions (3)
Advantages:
- No internal fragmentation: simply make each partition just big enough for its process.
  - But if we break physical memory into fixed-sized blocks and allocate memory in units of the block size (to reduce bookkeeping), we get internal fragmentation back.
Problems:
- External fragmentation: as we load and unload jobs, holes are left scattered throughout physical memory.
Solutions to external fragmentation:
- Compaction
- Paging and segmentation

Virtual Memory (1)
Example (same program as before):

    #include <stdio.h>
    int n = 0;
    int main() {
        printf("&n = 0x%08x\n", &n);
    }

    % ./a.out
    &n = 0x08049508
    % ./a.out
    &n = 0x08049508

What happens if two users simultaneously run this application?

Virtual Memory (2)
Virtual Memory (VM):
- Use virtual addresses for memory references: large and contiguous.
- The CPU performs address translation at run time:
  - Instructions executed by the CPU issue virtual addresses.
  - Virtual addresses are translated by hardware into physical addresses (with help from the OS).
- Virtual addresses are independent of the actual physical location of the data referenced: the OS determines the location of data in physical memory.

Virtual Memory (3)
Virtual Memory (VM):
- Physical memory is dynamically allocated or released on demand:
  - Programs execute without requiring their entire address space to be resident in physical memory (lazy loading).
- Virtual addresses are private to each process:
  - Each process has its own isolated virtual address space.
  - One process cannot name addresses visible to others.
- There are many ways to translate virtual addresses into physical addresses.

Virtual Memory (4)
Advantages:
- Separates the user's logical memory from physical memory:
  - Abstracts main memory into an extremely large, uniform array of storage.
  - Frees programmers from the concerns of memory-storage limitations.
- Allows the execution of processes that may not be completely in memory:
  - Programs can be larger than physical memory.
  - More programs can run at the same time.
  - Less I/O is needed to load or swap each user program into memory.
- Allows processes to easily share files and address spaces.
- Provides an efficient mechanism for protection and process creation.

Virtual Memory (5)
Disadvantages:
- Performance!!! (in terms of both time and space)
Implementation:
- Paging
- Segmentation

Paging Jinkyu Jeong (jinkyu@skku.edu) Computer Systems Laboratory Sungkyunkwan University http://csl.skku.edu

Topics
- Virtual memory implementation:
  - Paging
  - Demand paging
- Advanced VM techniques:
  - Shared memory
  - Copy-on-write
  - Memory-mapped files

Paging (1)
Paging:
- Permits the physical address space of a process to be noncontiguous.
- Divide physical memory into fixed-sized blocks called frames.
- Divide logical memory into blocks of the same size called pages.
  - The page (or frame) size is a power of 2 (typically 512B to 8KB).
- To run a program of size n pages, find n free frames and load the program.
- The OS keeps track of all free frames.
- Set up a page table to translate virtual to physical addresses.

Paging (2)
[Figure: the pages of Process A's and Process B's virtual memory are scattered across the frames of physical memory (Frames 0-11); per-process page tables record the mapping.]

Paging (3)
User's perspective:
- Users (and processes) view memory as one contiguous address space from 0 through N: the virtual address space (VAS).
- In reality, pages are scattered throughout physical memory.
- The virtual-to-physical mapping is invisible to the program.
- Protection is provided because a program cannot reference memory outside of its VAS: the virtual address 0xdeadcafe maps to different physical addresses in different processes.

Paging (4)
[Figure: logical to physical memory mapping through a page table.]

Paging (5)
Translating addresses (sketched in code below):
- A virtual address has two parts: <virtual page number (VPN)::offset>.
- The VPN is an index into a page table; the page table entry determines the page frame number (PFN).
- The physical address is <PFN::offset>.
Page tables:
- Managed by the OS.
- Map VPN to PFN; the VPN is the index into the table.
- One page table entry (PTE) per page in the virtual address space, i.e., one PTE per VPN.
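
As a minimal model of this translation in C, assuming 4KB pages and a flat, single-level page table that holds only PFNs (real PTEs also carry valid and protection bits, covered below); the mapping in main() is made up for illustration:

    #include <stdint.h>

    #define PAGE_SHIFT  12                       /* 4KB pages */
    #define OFFSET_MASK ((1u << PAGE_SHIFT) - 1)

    /* Flat page table: indexed by VPN, each entry holds a PFN. */
    static uint32_t page_table[1u << 20];

    static uint32_t translate(uint32_t vaddr)
    {
        uint32_t vpn    = vaddr >> PAGE_SHIFT;   /* virtual page number */
        uint32_t offset = vaddr & OFFSET_MASK;   /* byte within the page */
        uint32_t pfn    = page_table[vpn];       /* page table lookup */
        return (pfn << PAGE_SHIFT) | offset;     /* <PFN::offset> */
    }

    int main(void)
    {
        page_table[0x4a] = 0x46;                 /* map VPN 0x4a -> PFN 0x46 */
        return translate(0x0004aafe) != 0x46afe; /* exits 0: phys 0x46afe */
    }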

Paging (6)
[Figure: address translation hardware.]

Paging (7)
Paging example:
- Virtual address: 32 bits; physical address: 20 bits.
- Page size: 4KB, so the offset is 12 bits.
- VPN: 20 bits, so there are 2^20 page table entries.
[Figure: a 32-bit virtual address is split into VPN and offset; the VPN indexes the page table to obtain a PFN, which is concatenated with the 12-bit offset to form the 20-bit physical address.]

Paging (8)
Protection:
- Memory protection is implemented by associating protection bits with each frame.
- Valid/invalid bit:
  - Valid: the associated page is in the process's virtual address space, and is thus a legal page.
  - Invalid: the page is not in the process's virtual address space.
- A finer level of protection is possible for valid pages: read-only, read-write, or execute-only.

Paging (9)
Page Table Entries (PTEs). Field widths in bits: V (1), R (1), M (1), Prot (2), PFN (20).
- Valid bit (V): says whether or not the PTE can be used; checked each time a virtual address is used.
- Reference bit (R): says whether the page has been accessed; set when a read or write to the page occurs.
- Modify bit (M): says whether or not the page is dirty; set when a write to the page occurs.
- Protection bits (Prot): control which operations are allowed on the page (read, write, execute, etc.).
- Page frame number (PFN): determines the physical page.
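
The 1/1/1/2/20-bit layout above can be written as a C bit-field, purely as an illustration. Bit-field ordering is compiler-dependent and real hardware fixes the exact layout, so treat this as a sketch, not a portable hardware description:

    #include <stdint.h>

    /* One PTE with the field widths from the slide: 25 bits used,
     * padded to 32 in practice. */
    struct pte {
        uint32_t valid : 1;   /* V: may this PTE be used at all? */
        uint32_t ref   : 1;   /* R: set on any access to the page */
        uint32_t dirty : 1;   /* M: set on a write to the page */
        uint32_t prot  : 2;   /* read/write/execute permissions */
        uint32_t pfn   : 20;  /* page frame number */
    };

    int main(void)
    {
        struct pte p = { .valid = 1, .pfn = 0x46 };
        return !p.valid;
    }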

Paging (10)
Advantages:
- Easy to allocate physical memory:
  - Physical memory is allocated from a free list of frames; to allocate a frame, just remove it from the free list.
  - No external fragmentation.
- Easy to page out chunks of a program:
  - All chunks are the same size (the page size).
  - Use the valid bit to detect references to paged-out pages.
  - The page size is usually chosen to be a convenient multiple of the disk block size.
- Easy to protect pages from illegal accesses.
- Easy to share pages.

Paging (11)
Disadvantages:
- Can still have internal fragmentation: a process may not use memory in an exact multiple of pages.
- Memory reference overhead: 2 references per address lookup (page table, then memory).
  - Solution: hardware support (TLB).
- The memory required to hold page tables can be large:
  - Need one PTE per page in the virtual address space.
  - 32-bit address space with 4KB pages = 2^20 PTEs x 4 bytes/PTE = 4MB per page table.
  - OSes typically have separate page tables per process (25 processes = 100MB of page tables).
  - Solutions: page the page tables, multi-level page tables, inverted page tables, etc.

Demand Paging (1)
Demand paging:
- Bring a page into memory only when it is needed:
  - Less I/O needed, less memory needed, faster response, more users.
- The OS uses main memory as a (page) cache of all the data allocated by processes in the system:
  - Initially, pages are allocated from physical memory frames.
  - When physical memory fills up, allocating a page requires some other page to be evicted from its frame.
  - Evicted pages go to disk (they only need to be written if they are dirty), to a swap file.
  - The movement of pages between memory and disk is done by the OS, transparently to the application.

Demand Paging (2)
Page faults: what happens when a process references a virtual address in a page that has been evicted?
- When the page was evicted, the OS set the PTE as invalid and stored (in the PTE) the location of the page in the swap file.
- When the process accesses the page, the invalid PTE causes an exception (fault).
- The page fault handler (in the kernel) is invoked by the fault:
  - The handler uses the invalid PTE to locate the page in the swap file.
  - It reads the page into a physical frame, then updates the PTE to point to it and to be valid.
  - It restarts the faulting process.
- Where does the page that is read in go?
  - Have to evict something else (the page replacement algorithm).
  - The OS typically tries to keep a pool of free pages around so that allocations don't inevitably cause evictions.
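
The handler's steps can be made concrete with a toy user-space simulation. All names here are hypothetical, not a kernel API, and a trivial FIFO policy stands in for the real replacement algorithms discussed later:

    #include <stdbool.h>
    #include <stdio.h>

    #define NPAGES  8
    #define NFRAMES 4

    struct pte { bool valid; int pfn; int swap_slot; };

    static struct pte page_table[NPAGES];
    static int frame_to_page[NFRAMES] = { -1, -1, -1, -1 };
    static int next_victim;                  /* trivial FIFO replacement */

    static void page_fault(int vpn)
    {
        int pfn = next_victim;
        next_victim = (next_victim + 1) % NFRAMES;

        int old = frame_to_page[pfn];
        if (old >= 0) {                      /* evict: mark old PTE invalid */
            page_table[old].valid = false;
            printf("evict page %d from frame %d\n", old, pfn);
        }
        /* "Read" the page from the swap slot recorded in the invalid
         * PTE, then make the PTE valid and point it at the frame. */
        printf("load page %d (swap slot %d) into frame %d\n",
               vpn, page_table[vpn].swap_slot, pfn);
        page_table[vpn].valid = true;
        page_table[vpn].pfn = pfn;
        frame_to_page[pfn] = vpn;
    }

    static void touch(int vpn)
    {
        if (!page_table[vpn].valid)          /* invalid PTE -> fault */
            page_fault(vpn);
    }

    int main(void)
    {
        for (int i = 0; i < NPAGES; i++)
            page_table[i].swap_slot = i;     /* each page has a swap slot */

        int refs[] = { 0, 1, 2, 3, 4, 0 };   /* the last two refs evict */
        for (int i = 0; i < 6; i++)
            touch(refs[i]);
        return 0;
    }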

Demand Paging (3)
[Figure: handling a page fault.]

Demand Paging (4)
Why does this work? Locality.
- Temporal locality: locations referenced recently tend to be referenced again soon.
- Spatial locality: locations near recently referenced locations are likely to be referenced soon.
Locality means paging can be infrequent:
- Once you've paged something in, it will be used many times.
- On average, you use the things that are paged in.
- But this depends on many things:
  - Degree of locality in the application
  - Page replacement policy
  - Amount of physical memory
  - The application's reference pattern and memory footprint

Demand Paging (5)
Why is this demand paging?
- When a process first starts up, it has a brand new page table with all PTE valid bits false: no pages are yet mapped to physical memory.
- When the process starts executing:
  - Instructions immediately fault on both code and data pages.
  - Faults stop when all necessary code and data pages are in memory.
  - Only the code and data that is needed (demanded!) by the process gets loaded.
  - What is needed changes over time, of course.

Advanced VM Functionality
Virtual memory tricks:
- Shared memory
- Copy on write
- Memory-mapped files

Shared Memory (1)
Shared memory:
- Private virtual address spaces protect applications from each other, but this makes it difficult to share data.
  - Parents and children in a forking Web server or proxy will want to share an in-memory cache without copying.
  - Read/write access to shared data; execute access for shared libraries.
- We can use shared memory to let processes share data using direct memory references: both processes see updates to the shared memory segment.
- How are we going to coordinate access to shared data?

Shared Memory (2)
Implementation: how can we implement shared memory using page tables?
- Have PTEs in both page tables map to the same physical frame.
- Each PTE can have different protection values.
- Must update both PTEs when the page becomes invalid.
- Shared memory can be mapped at the same or different virtual addresses in each process's address space:
  - Different: flexible (no address space conflicts), but pointers inside the shared memory segment are invalid.
  - Same: less flexible, but shared pointers are valid.
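
One concrete way to get such a shared mapping on Linux and most Unix systems is an anonymous shared mmap() inherited across fork(). A minimal sketch, with error handling mostly omitted:

    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* One page shared between parent and child: both processes'
         * PTEs end up pointing at the same physical frame. */
        int *shared = mmap(NULL, sizeof *shared, PROT_READ | PROT_WRITE,
                           MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (shared == MAP_FAILED) return 1;

        *shared = 0;
        if (fork() == 0) {           /* child: write through the mapping */
            *shared = 42;
            return 0;
        }
        wait(NULL);                  /* parent: sees the child's update */
        printf("parent reads %d\n", *shared);   /* prints 42 */
        return 0;
    }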

Copy On Write (1)
Process creation requires copying the entire address space of the parent process to the child process: very slow and inefficient!
Solution 1: use threads.
- Sharing the address space is free.
Solution 2: use the vfork() system call.
- vfork() creates a process that shares the memory address space of its parent.
- To prevent the parent from overwriting data needed by the child, the parent's execution is blocked until the child exits or executes a new program.
- Any change by the child is visible to the parent once it resumes.
- Useful when the child immediately calls exec().

Copy On Write (2)
Solution 3: Copy On Write (COW).
- Instead of copying all pages, create shared mappings of the parent's pages in the child's address space.
- Shared pages are protected as read-only in the child:
  - Reads happen as usual.
  - Writes generate a protection fault and trap to the OS; the OS copies the page, changes the page mapping in the child's page table, and restarts the write instruction.
[Figure: after fork, the parent's and child's page tables both point at the same physical frames, marked COW; the child's write triggers a copy of the affected page.]
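
fork() on Linux and most modern Unix systems already uses COW, and the following sketch makes its semantics visible: the child's write triggers a private copy behind the scenes, so the parent's data is unchanged.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static int data = 1;         /* in a page shared COW after fork */

    int main(void)
    {
        if (fork() == 0) {
            data = 2;            /* write -> protection fault -> the
                                    kernel copies the page for the child */
            printf("child sees %d\n", data);    /* 2 */
            return 0;
        }
        wait(NULL);
        printf("parent sees %d\n", data);       /* still 1 */
        return 0;
    }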

Memory-Mapped Files (1)
Memory-mapped files:
- Mapped files enable processes to do file I/O using memory references, instead of open(), read(), write(), close().
- mmap() binds a file to a virtual memory region:
  - PTEs map virtual addresses to the physical frames holding file data.
  - <virtual address base + N> refers to offset N in the file.
- Initially, all pages in the mapped region are marked invalid:
  - The OS reads a page from the file whenever an invalid page is accessed.
  - The OS writes a page to the file when it is evicted from physical memory (if the page is not dirty, no write is needed).

Memory-Mapped Files (2)
Note: the file is essentially a backing store for that region of the virtual address space (instead of the swap file). A virtual address space not backed by a real file is also called anonymous VM.
Advantages:
- Uniform access for files and memory (just use pointers).
- Less copying.
- Several processes can map the same file, allowing the pages in memory to be shared.
Drawbacks:
- The process has less control over data movement.
- Does not generalize to streamed I/O (pipes, sockets, etc.).
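
A minimal sketch of file I/O through the POSIX mmap() call; real code would check every return value. The line-counting loop is just an arbitrary example of accessing file contents through pointers:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        if (argc != 2) return 1;
        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) return 1;

        struct stat st;
        fstat(fd, &st);

        /* Bind the file to a region of the virtual address space;
         * pages are faulted in from the file on first access. */
        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) return 1;

        /* <p + N> is offset N in the file: count newlines by pointer. */
        long lines = 0;
        for (off_t i = 0; i < st.st_size; i++)
            if (p[i] == '\n') lines++;
        printf("%ld lines\n", lines);

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }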

Address Translation Jinkyu Jeong (jinkyu@skku.edu) Computer Systems Laboratory Sungkyunkwan University http://csl.skku.edu

Topics
- How to reduce the size of page tables?
- How to reduce the time for address translation?

Page Tables
Managing page tables:
- Space overhead of page tables: the page table for a 32-bit address space with 4KB pages is 4MB, per process.
- How can we reduce this overhead?
  - Observation: we only need to map the portion of the address space actually being used (a tiny fraction of the entire address space).
- How do we only map what is being used?
  - Make the page table structure dynamically extensible.
  - Use another level of indirection: two-level, hierarchical, hashed, etc.

Two-level Page Tables (1)
[Figure: two-level page table structure.]

Two-level Page Tables (2)
Two-level page tables:
- Virtual addresses have 3 parts: master page number, secondary page number, and offset.
- The master page table maps the master page number to a secondary page table.
- The secondary page table maps the secondary page number to a page frame number.

Two-level Page Tables (3)
Example:
- 32-bit address space, 4KB pages, 4 bytes/PTE.
- We want the master page table to fit in one page, so the address splits into 10 bits (master page #), 10 bits (secondary page #), and 12 bits (offset).
[Figure: the master page number indexes the master page table to find a secondary page table; the secondary page number indexes that table to find the page frame, which is combined with the offset to form the physical address.]
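
Index extraction for this 10/10/12 split, as a hypothetical two-level table walk in C (a sketch; the single mapping in main() is invented for illustration):

    #include <stdint.h>
    #include <stdlib.h>

    #define PAGE_SHIFT  12
    #define OFFSET_MASK 0xFFFu       /* low 12 bits */
    #define INDEX_MASK  0x3FFu       /* 10-bit table index */

    /* Master table entries point at secondary tables; secondary
     * entries hold PFNs.  0 stands in for "not present" here. */
    static uint32_t *master[1024];

    static uint32_t translate(uint32_t vaddr)
    {
        uint32_t m   = (vaddr >> 22) & INDEX_MASK;  /* master page # */
        uint32_t s   = (vaddr >> 12) & INDEX_MASK;  /* secondary page # */
        uint32_t off = vaddr & OFFSET_MASK;

        uint32_t *secondary = master[m];
        if (secondary == NULL || secondary[s] == 0)
            abort();                                /* page fault */
        return (secondary[s] << PAGE_SHIFT) | off;
    }

    int main(void)
    {
        static uint32_t sec[1024];
        master[0] = sec;
        sec[1] = 7;                                 /* VPN 1 -> PFN 7 */
        return translate(0x00001234) != 0x7234;     /* exits 0 */
    }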

Multi-level Page Tables
Address translation in the Alpha AXP architecture: three-level page tables.
- The 64-bit address space is divided into 3 segments (coded in bits 63/62):
  - seg0 (0x): user code
  - seg1 (11): user stack
  - kseg (10): kernel
- Alpha 21064:
  - Page size: 8KB; virtual address: 43 bits.
  - Each page table is one page long.

TLBs (1)
Making address translation efficient:
- The original page table scheme doubled the cost of memory lookups: one lookup into the page table, another to fetch the data.
- Two-level page tables triple the cost: two lookups into the page tables, a third to fetch the data.
- And this assumes the page table is in memory.
How can we make this more efficient?
- Goal: make fetching from a virtual address about as efficient as fetching from a physical address.
- Solution: cache the virtual-to-physical translation in hardware, in a Translation Lookaside Buffer (TLB) managed by the Memory Management Unit (MMU).

TLBs (2)
Translation Lookaside Buffers:
- Translate virtual page numbers into PTEs (not physical addresses).
- Can be done in a single machine cycle.

TLBs (3)
The TLB is implemented in hardware:
- A fully associative cache (all entries are looked up in parallel).
- Cache tags are virtual page numbers; cache values are PTEs (entries from page tables).
- With the PTE plus the offset, the MMU can directly calculate the physical address.
TLBs exploit locality:
- Processes only use a handful of pages at a time.
- 16-48 entries is typical (covering 64-192KB), enough to hold the hot set or working set of a process.
- Hit rates are therefore really important.

TLBs (4)
[Figure: address translation with a TLB.]

TLBs (5)
Handling TLB misses:
- Address translations are mostly handled by the TLB (> 99% of translations), but occasionally there are TLB misses.
- On a miss, who places the translation into the TLB?
- Hardware (MMU), e.g., Intel x86:
  - The MMU knows where the page tables are in memory; the OS maintains the tables, and the hardware accesses them directly.
  - Page tables have to be in a hardware-defined format.
- Software-loaded TLB (OS):
  - A TLB miss faults to the OS; the OS finds the right PTE and loads it into the TLB.
  - Must be fast (typically 20-200 cycles).
  - The CPU ISA has instructions for TLB manipulation.
  - Page tables can be in any format convenient for the OS (flexible).
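
A toy fully associative TLB lookup in C. The slides describe hardware that compares all tags in parallel; this is only an illustrative software model with invented names and values:

    #include <stdbool.h>
    #include <stdint.h>

    #define TLB_ENTRIES 16
    #define PAGE_SHIFT  12

    struct tlb_entry { bool valid; uint32_t vpn; uint32_t pte; };
    static struct tlb_entry tlb[TLB_ENTRIES];

    /* Hardware compares all tags at once; software has to loop. */
    static bool tlb_lookup(uint32_t vaddr, uint32_t *pte_out)
    {
        uint32_t vpn = vaddr >> PAGE_SHIFT;
        for (int i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].vpn == vpn) {
                *pte_out = tlb[i].pte;   /* hit: PTE without a table walk */
                return true;
            }
        }
        return false;                    /* miss: walk the page table (HW)
                                            or trap to the OS (SW-loaded) */
    }

    int main(void)
    {
        tlb[0] = (struct tlb_entry){ true, 0x4a, 0x46 };
        uint32_t pte;
        return tlb_lookup(0x4a123, &pte) ? 0 : 1;   /* exits 0: TLB hit */
    }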

TLBs (6)
Managing TLBs:
- The OS ensures that the TLB and page tables are consistent: when the OS changes the protection bits of a PTE, it must invalidate the PTE if it is in the TLB.
- The TLB must be reloaded on a process context switch:
  - Remember, each process typically has its own page tables, so all entries in the TLB must be invalidated (a TLB flush).
  - In IA-32, the TLB is flushed automatically when the contents of CR3 (the page directory base register) change.
  - (cf.) Alternatively, the PID can be stored as part of each TLB entry, but this is expensive.
- When the TLB misses and a new PTE is loaded, a cached PTE must be evicted:
  - Choosing a victim PTE is called the TLB replacement policy.
  - Implemented in hardware, usually simple (e.g., LRU).

Memory Reference (1)
Situation: a process is executing on the CPU, and it issues a read to a (virtual) address.
[Figure: the virtual address goes to the TLB; a TLB hit yields the physical address and the memory access proceeds. A TLB miss goes to the page tables to fetch the PTE, which may raise a page fault or a protection fault.]

Memory Reference (2)
The common case:
- The read goes to the TLB in the MMU.
- The TLB does a lookup using the page number of the address.
- The page number matches, returning a PTE.
- The TLB validates that the PTE protection allows reads.
- The PTE specifies which physical frame holds the page.
- The MMU combines the physical frame and offset into a physical address.
- The MMU then reads from that physical address and returns the value to the CPU.

Memory Reference (3)
TLB misses: two possibilities.
(1) The MMU loads the PTE from the page table in memory:
- Hardware-managed TLB; the OS is not involved in this step.
- The OS has already set up the page tables so that the hardware can access them directly.
(2) Trap to the OS:
- Software-managed TLB; the OS intervenes at this point.
- The OS does the lookup in the page tables and loads the PTE into the TLB.
- The OS returns from the exception, and the TLB continues.
At this point, there is a valid PTE for the address in the TLB.

Memory Reference (4)
TLB misses:
- The page table lookup (by HW or OS) can cause a recursive fault if the page table itself is paged out.
  - This assumes page tables are in the OS virtual address space; it is not a problem if the tables are in physical memory.
- When the TLB has the PTE, it restarts the translation:
  - The common case is that the PTE refers to a valid page in memory.
  - The uncommon case is that the TLB faults again on the PTE because of the PTE protection bits (e.g., the page is invalid).

Memory Reference (5)
Page faults:
- A PTE can indicate a protection fault:
  - Read/Write/Execute: the operation is not permitted on the page.
  - Invalid: the virtual page is not allocated, or the page is not in physical memory.
- The TLB traps to the OS (software takes over):
  - Read/Write/Execute: the OS usually sends the fault back to the process, or might be playing tricks (e.g., copy on write, mapped files).
  - Invalid (not allocated): the OS sends the fault to the process (e.g., segmentation fault).
  - Invalid (not in physical memory): the OS allocates a frame, reads the page from disk, and maps the PTE to the physical frame.

Page Replacement Jinkyu Jeong (jinkyu@skku.edu) Computer Systems Laboratory Sungkyunkwan University http://csl.skku.edu

Topics
- What if physical memory becomes full?
- Page replacement algorithms
- How to manage memory among competing processes?

Page Replacement (1)
Page replacement:
- When a page fault occurs, the OS loads the faulted page from disk into a page frame of memory.
- At some point, the process has used all of the page frames it is allowed to use.
- When this happens, the OS must replace a page for each page faulted in: it must evict a page to free up a page frame.
- The page replacement algorithm determines how this is done.

Page Replacement (2)
Evicting the best page:
- The goal of the replacement algorithm is to reduce the fault rate by selecting the best victim page to remove.
- The best page to evict is one that will never be touched again, as the process will never again fault on it.
- "Never" is a long time, so picking the page closest to never is the next best thing.
- Belady's proof: evicting the page that won't be used for the longest period of time minimizes the number of page faults.

Belady's Algorithm
Optimal page replacement:
- Replace the page that will not be used for the longest time in the future.
- Has the lowest fault rate for any page reference stream.
- Problem: it has to predict the future.
Why is Belady's algorithm useful?
- Use it as a yardstick: compare other algorithms with the optimal to gauge the room for improvement.
- If the optimal is not much better, the algorithm is pretty good; otherwise, it could use some work.
- The lower bound depends on the workload, but random replacement is pretty bad.

FIFO (1)
First-In First-Out:
- Obvious and simple to implement: maintain a list of pages in the order they were paged in; on replacement, evict the one brought in longest ago.
- Why might this be good? Maybe the page brought in longest ago is no longer being used.
- Why might this be bad? Maybe that is not the case; we have no information either way.
- FIFO suffers from Belady's anomaly: the fault rate might increase when the algorithm is given more memory.

FIFO (2)
Example of Belady's anomaly:
- Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
- 3 frames: 9 faults
- 4 frames: 10 faults (reproduced by the small simulator below)
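
The anomaly is easy to reproduce with a small FIFO simulator in C. This is a sketch written for this example; the reference string is the one above:

    #include <stdio.h>

    /* Count FIFO page faults for a reference string and frame count. */
    static int fifo_faults(const int *refs, int n, int nframes)
    {
        int frames[16], next = 0, faults = 0;
        for (int i = 0; i < nframes; i++) frames[i] = -1;

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < nframes; j++)
                if (frames[j] == refs[i]) { hit = 1; break; }
            if (!hit) {                     /* fault: evict oldest page */
                frames[next] = refs[i];
                next = (next + 1) % nframes;
                faults++;
            }
        }
        return faults;
    }

    int main(void)
    {
        int refs[] = { 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 };
        int n = sizeof refs / sizeof refs[0];
        printf("3 frames: %d faults\n", fifo_faults(refs, n, 3));  /* 9  */
        printf("4 frames: %d faults\n", fifo_faults(refs, n, 4));  /* 10 */
        return 0;
    }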

LRU (1)
Least Recently Used:
- LRU uses reference information to make a more informed replacement decision:
  - Idea: past experience gives us a guess of future behavior.
  - On replacement, evict the page that has not been used for the longest time in the past.
  - LRU looks at the past; Belady's algorithm looks at the future.
- Implementation:
  - Counter implementation: store a timestamp per page.
  - Stack implementation: maintain a stack of pages.
- Why do we need an approximation?

LRU (2)
Approximating LRU:
- Many LRU approximations use the PTE reference (R) bit, which is set whenever the page is referenced (read or written).
- Counter-based approach: keep a counter for each page. At regular intervals, for every page:
  - If R = 0, increment the counter (the page hasn't been used).
  - If R = 1, zero the counter (the page has been used).
  - Zero the R bit.
- The counter then contains the number of intervals since the last reference to the page: the page with the largest counter is the least recently used.

Second Chance (1)
Second chance, or LRU clock:
- FIFO, but giving a second chance to recently referenced pages.
- Arrange all physical page frames in a big circle (a clock); a clock hand is used to select a good LRU candidate.
- Sweep through the pages in circular order, like a clock:
  - If the R bit is off, the page hasn't been used recently, and we have a victim.
  - If the R bit is on, turn it off and go to the next page.
- The arm moves quickly when pages are needed:
  - Low overhead if we have plenty of memory.
  - If memory is large, the accuracy of the information degrades.

Second Chance (2)
[Figure: the clock algorithm sweeping over page frames.]
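
A sketch of the clock sweep in C, with hypothetical structures standing in for real frame metadata:

    #include <stdbool.h>
    #include <stdio.h>

    #define NFRAMES 64

    static bool ref_bit[NFRAMES];  /* R bits, set by "hardware" on access */
    static int  hand;              /* the clock hand */

    /* Sweep until a frame with R == 0 is found, clearing R bits along
     * the way: recently used pages get a second chance. */
    static int clock_select_victim(void)
    {
        for (;;) {
            if (!ref_bit[hand]) {
                int victim = hand;
                hand = (hand + 1) % NFRAMES;
                return victim;
            }
            ref_bit[hand] = false;   /* used recently: spare it this pass */
            hand = (hand + 1) % NFRAMES;
        }
    }

    int main(void)
    {
        ref_bit[0] = ref_bit[1] = true;
        printf("victim: frame %d\n", clock_select_victim());   /* frame 2 */
        return 0;
    }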

Not Recently Used (1)
NRU, or enhanced second chance:
- Uses the R (reference) and M (modify) bits.
- Periodically (e.g., on each clock interrupt), R is cleared, to distinguish pages that have not been referenced recently from those that have been.
- Pages fall into four classes:
  - Class 0: R=0, M=0
  - Class 1: R=0, M=1
  - Class 2: R=1, M=0
  - Class 3: R=1, M=1
[Figure: state transitions between the four classes on reads, writes, page-in, and clock interrupts.]

Not Recently Used (2)
Algorithm:
- Remove a page at random from the lowest-numbered nonempty class.
- It is better to remove a modified page that has not been referenced in at least one clock tick than a clean page that is in heavy use.
- Used in the Macintosh.
Advantages:
- Easy to understand.
- Moderately efficient to implement.
- Gives performance that, while certainly not optimal, may be adequate.

LFU (1)
Counting-based page replacement:
- A software counter is associated with each page.
- At each clock interrupt, for each page, the R bit is added to its counter.
- The counters denote how often each page has been referenced.
Least Frequently Used (LFU):
- The page with the smallest count is replaced.
- Problem: it never forgets anything. A page may be heavily used during the initial phase of a process, but then never used again.
(cf.) Most Frequently Used (MFU) page replacement:
- The page with the largest count is replaced, based on the argument that the page with the smallest count was probably just brought in and has yet to be used.

LFU (2)
Aging:
- The counters are shifted right by 1 bit before the R bit is added to the leftmost position.
[Figure: aging counters after successive clock ticks.]
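
The shift-and-add step in C, run once per clock tick. This is a sketch with hypothetical names; harvesting R bits from real PTEs is omitted:

    #include <stdint.h>
    #include <stdio.h>

    #define NPAGES 1024

    static uint8_t age[NPAGES];    /* 8-bit aging counter per page */
    static uint8_t rbit[NPAGES];   /* R bit harvested from the PTE */

    /* On each clock tick: shift every counter right by one and put the
     * R bit into the leftmost position, then clear R.  A larger value
     * means more recently referenced; evict the page with the smallest. */
    static void age_tick(void)
    {
        for (int i = 0; i < NPAGES; i++) {
            age[i] = (uint8_t)((age[i] >> 1) | (rbit[i] << 7));
            rbit[i] = 0;
        }
    }

    int main(void)
    {
        rbit[3] = 1;
        age_tick();
        printf("age[3] = 0x%02x\n", (unsigned)age[3]);  /* 0x80 */
        return 0;
    }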

Allocation of Frames
Problem:
- In a multiprogramming system, we need a way to allocate physical memory among competing processes.
- What if a victim page belongs to another process?
- How do we determine how much memory to give to each process?
Fixed-space algorithms:
- Each process is given a limit of pages it can use; when it reaches its limit, it replaces from its own pages.
- Local replacement: some processes may do well, others suffer.
Variable-space algorithms:
- A process's set of pages grows and shrinks dynamically.
- Global replacement: one process can ruin it for the rest (Linux).

Thrashing (1)
Thrashing:
- What the OS does when page replacement algorithms fail: most of the time is spent paging data back and forth from disk, and no time is spent doing useful work.
- The system is overcommitted:
  - It has no idea which pages should be in memory to reduce faults.
  - It could be that there just isn't enough physical memory for all processes.
Possible solutions:
- Swapping: write out all pages of a process.
- Buy more memory.

Thrashing (2)
[Figure: thrashing.]

Working Set Model (1)
Working set:
- The working set of a process models the dynamic locality of its memory usage: working set = the set of pages the process currently needs (Peter Denning, 1968).
Definition:
- WS(t, w) = { pages P such that P was referenced in the time interval (t - w, t) }
  - t: time; w: working set window size (measured in page references).
- A page is in the working set only if it was referenced in the last w references.

Working Set Model (2)
Working set size (WSS):
- The number of pages in the working set = the number of pages referenced in the interval (t - w, t).
- The working set size changes with program locality:
  - During periods of poor locality, more pages are referenced.
  - Within such a period of time, the working set size is larger.
- Intuitively, the working set must be in memory to prevent heavy faulting (thrashing).
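
The definition translates directly into code. A sketch that computes WSS over a made-up reference trace, assuming the window w is measured in page references as above:

    #include <stdbool.h>
    #include <stdio.h>

    #define NPAGES 16

    /* Working set size at time t: distinct pages referenced in the
     * last w references of the trace. */
    static int wss(const int *refs, int t, int w)
    {
        bool in_ws[NPAGES] = { false };
        int size = 0;
        for (int i = t - w + 1; i <= t; i++) {
            if (i < 0) continue;
            if (!in_ws[refs[i]]) { in_ws[refs[i]] = true; size++; }
        }
        return size;
    }

    int main(void)
    {
        /* Locality shifts from pages {0,1,2} to {7} and back. */
        int refs[] = { 0, 1, 2, 0, 1, 2, 7, 7, 7, 0, 1, 2 };
        for (int t = 0; t < 12; t++)
            printf("t=%2d  WSS=%d\n", t, wss(refs, t, 4));
        return 0;
    }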

Working Sets and Page Fault Rates
- There is a direct relationship between the working set of a process and its page-fault rate.
- The working set changes over time.
[Figure: working set size and page fault rate over time, with peaks and valleys.]

Summary (1)
VM mechanisms:
- Physical and virtual addressing
- Partitioning, paging, segmentation
- Page table management, TLBs, etc.
VM policies:
- Page replacement algorithms
- Memory allocation policies
VM requires hardware and OS support:
- MMU (Memory Management Unit)
- TLB (Translation Lookaside Buffer)
- Page tables, etc.

Summary (2)
VM optimizations:
- Demand paging (space)
- Managing page tables (space)
- Efficient translation using TLBs (time)
- Page replacement policy (time)
Advanced functionality:
- Sharing memory
- Copy on write
- Mapped files