Processes and Virtual Memory Concepts — Brad Karp, UCL Computer Science, CS 37, 8th February 28 (lecture notes derived from material from Phil Gibbons, Dave O'Hallaron, and Randy Bryant)
Today: Processes; Virtual Memory Concepts
Processes — Definition: a process is an instance of a running program. A central idea in computer science (in both systems and theory). Not the same as "program" or "processor". A process provides each program with two key abstractions: (1) logical control flow — each program seems to have exclusive use of the CPU (virtualizes the CPU), provided by the kernel mechanism called context switching; (2) private address space — each program seems to have exclusive use of main memory, provided by the kernel mechanism called virtual memory. [Diagram: memory (stack, heap, data, code) and CPU registers]
Multiprocessing: The Illusion — The computer runs many processes simultaneously: applications for one or more users (web browsers, email clients, editors, ...) and background tasks (monitoring network & I/O devices). [Diagram: each process appears to have its own memory (stack, heap, data, code) and its own CPU registers]
Multiprocessing Example — Running the program top on a Mac: the system has 123 processes, 5 of which are active, each identified by a Process ID (PID).
Multiprocessing: The (Traditional) Reality — A single processor executes multiple processes concurrently. Process executions are interleaved (multitasking); address spaces are managed by the virtual memory system (our current topic); register values for non-executing processes are saved in memory.
Multiprocessing: The (Traditional) Reality — A context switch proceeds in three steps: (1) save the current process's registers in memory; (2) schedule the next process for execution; (3) load that process's saved registers and switch the address space.
Multiprocessing: The (Modern) Reality — Multicore processors: multiple CPUs on a single chip, sharing main memory (and some caches); each can execute a separate process, with scheduling of processes onto cores done by the kernel.
Concurrent Processes — Each process is a logical control flow. Two processes run concurrently (are concurrent) if their flows overlap in time; otherwise, they are sequential. Examples (running on a single core): concurrent: A & B, A & C; sequential: B & C. [Timeline diagram of processes A, B, and C]
User View of Concurrent Processes — Control flows for concurrent processes are physically disjoint in time; however, we can think of concurrent processes as running in parallel with each other.
Context Switching — Processes are managed by a shared chunk of memory-resident OS code called the kernel. N.B. the kernel is not a separate process, but rather runs as part of an existing process. Control flow passes from one process to another via a context switch. [Diagram: process A runs user code, a context switch passes through kernel code, process B runs user code, and a second context switch returns to process A]
Today: Processes; Virtual Memory Concepts — address spaces; VM as a tool for caching; VM as a tool for memory management; VM as a tool for memory protection; address translation
Hmmm, How Does This Work?! — Process 1, Process 2, ..., Process n all share the same main memory. Solution: Virtual Memory (today and next two lectures).
A System Using Physical Addressing — The CPU issues a physical address (PA; e.g., 4) directly to main memory (addresses 0, 1, 2, ..., M-1) and gets back a data word. Used in simple systems like embedded microcontrollers in devices like cars, elevators, and digital picture frames.
A System Using Virtual Addressing — The CPU issues a virtual address (VA; e.g., 4100), which the MMU on the CPU chip translates to a physical address (PA; e.g., 4) before it reaches main memory (addresses 0, 1, 2, ..., M-1). Used in all modern servers, laptops, and smartphones; one of the most central concepts in computer systems, with a wide range of important uses.
Address Spaces — Linear address space: an ordered set of contiguous non-negative integer addresses {0, 1, 2, 3, ...}. Virtual address space: the set of N = 2^n virtual addresses {0, 1, 2, 3, ..., N-1}. Physical address space: the set of M = 2^m physical addresses {0, 1, 2, 3, ..., M-1}.
Why Virtual Memory (VM)? — Uses main memory efficiently: DRAM serves as a cache for parts of a virtual address space. Simplifies memory management: each process gets the same uniform linear address space. Isolates address spaces: one process can't interfere with another's memory, and a user program cannot access privileged kernel information and code.
VM as a Tool for Caching — Conceptually, virtual memory is an array of N contiguous bytes stored on disk. The contents of the array on disk are cached in physical memory (a DRAM cache); these cache blocks are called pages (of size P = 2^p bytes). [Diagram: virtual pages VP 0 through VP 2^(n-p) - 1 on disk, each unallocated, cached, or uncached; physical pages PP 0 through PP 2^(m-p) - 1 in DRAM]
DRAM Cache Organization — DRAM cache organization is driven by the enormous miss penalty: DRAM is about 10x slower than SRAM, and disk is about 10,000x slower than DRAM; the time to load a block from disk is > 1 ms (> 1 million clock cycles), during which the CPU can do a lot of computation. Consequences: large page (block) size — typically 4 KB, sometimes 4 MB; fully associative — any VP can be placed in any PP, requiring a large mapping function, unlike cache memories; sophisticated, expensive replacement algorithms — too complicated and open-ended to be implemented in hardware; write-back rather than write-through.
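The miss-penalty figures above can be sanity-checked with a few lines of arithmetic. This sketch (the 1 GHz clock and the access times are illustrative assumptions, not numbers from the slides) shows why a disk miss costs on the order of a million cycles:

```python
# Back-of-the-envelope miss-penalty arithmetic (illustrative numbers only):
# a ~1 ms disk access on a 1 GHz processor costs about a million cycles,
# which is why the DRAM "cache" for virtual pages uses large pages,
# full associativity, and write-back.

def cycles_lost(access_time_s: float, clock_hz: float) -> int:
    """Clock cycles the CPU could have executed during one slow access."""
    return int(access_time_s * clock_hz)

disk_penalty = cycles_lost(1e-3, 1e9)    # ~1 ms disk access at 1 GHz
dram_penalty = cycles_lost(100e-9, 1e9)  # ~100 ns DRAM access at 1 GHz
```

The four-orders-of-magnitude gap between the two results is the whole design argument on this slide.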
Enabling Data Structure: Page Table — A page table is an array of page table entries (PTEs) that maps virtual pages to physical pages; it is a per-process kernel data structure in DRAM. [Diagram: a memory-resident page table (PTE 0 through PTE 7) whose valid entries hold physical page numbers — VP 1, VP 2, VP 7, and VP 4 are cached in DRAM — and whose invalid entries hold either null or the disk addresses of VP 3 and VP 6]
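The page-table structure above can be sketched in a few lines of Python. The class layout, the eight-entry table, and the VP-to-PP assignments are illustrative, loosely following the slide's figure:

```python
# Minimal page-table sketch: each PTE holds a valid bit and either a
# physical page number (if resident) or a disk address / nothing (if not).

class PTE:
    def __init__(self, valid, ppn=None, disk_addr=None):
        self.valid = valid
        self.ppn = ppn              # physical page number, if resident
        self.disk_addr = disk_addr  # where the page lives on disk, if not

def lookup(page_table, vpn):
    """Return the PPN on a hit, or raise to model a page fault."""
    pte = page_table[vpn]
    if pte.valid:
        return pte.ppn
    raise LookupError(f"page fault on VP {vpn}")

# A table resembling the slide: VP 1, 2, 7, 4 resident in PP 0..3;
# VP 3 and VP 6 on disk; PTE 0 and PTE 5 null (unallocated).
page_table = [
    PTE(False),                    # PTE 0: null
    PTE(True, ppn=0),              # VP 1 cached in PP 0
    PTE(True, ppn=1),              # VP 2 cached in PP 1
    PTE(False, disk_addr=0x1000),  # VP 3 on disk (invented address)
    PTE(True, ppn=3),              # VP 4 cached in PP 3
    PTE(False),                    # PTE 5: null
    PTE(False, disk_addr=0x2000),  # VP 6 on disk (invented address)
    PTE(True, ppn=2),              # VP 7 cached in PP 2
]
```

A reference to VP 1 hits and yields PP 0; a reference to VP 3 raises, modelling the fault the next slides handle.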
Page Hit — A page hit is a reference to a VM word that is in physical memory (a DRAM cache hit): the virtual address selects a PTE whose valid bit is set, and the PTE points to the cached physical page.
Page Fault — A page fault is a reference to a VM word that is not in physical memory (a DRAM cache miss): the virtual address selects a PTE whose valid bit is clear.
Handling a Page Fault — A page miss causes a page fault (an exception). The page fault handler selects a victim to be evicted (here VP 4) and replaces it with the referenced page (here VP 3); the offending instruction is then restarted: page hit! Key performance optimization: waiting until the miss to copy the page to DRAM is known as demand paging.
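The fault-handling steps above can be modelled in a short sketch. The dict-based DRAM model, the function name, and the victim-selection hook are all invented for illustration — a real kernel handler also copies page contents and manipulates hardware PTEs:

```python
# Sketch of demand paging with eviction. DRAM is modelled as a dict
# PP -> resident VPN; the "page table" maps VPN -> PP, or None when the
# page is only on disk.

def handle_fault(page_table, dram, vpn, pick_victim):
    """Page in `vpn`, evicting a victim if DRAM is full; update both PTEs."""
    free = [pp for pp in dram if dram[pp] is None]
    if free:
        pp = free[0]
    else:
        pp = pick_victim(dram)       # replacement policy chooses a frame
        page_table[dram[pp]] = None  # victim's PTE now points only to disk
    dram[pp] = vpn                   # "copy" the page in from disk
    page_table[vpn] = pp             # faulting PTE becomes valid
    return pp

# Reproduce the slide: VP 3 faults, VP 4 (resident in PP 3) is the victim.
page_table = {1: 0, 2: 1, 7: 2, 4: 3, 3: None, 6: None}
dram = {0: 1, 1: 2, 2: 7, 3: 4}      # PP -> resident VPN, no free frames
pp = handle_fault(page_table, dram, 3, pick_victim=lambda d: 3)
```

After the call, VP 3 occupies PP 3, VP 4's entry is invalid, and re-running the faulting access would hit — exactly the slide's sequence.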
Allocating Pages — Allocating a new page (VP 5) of virtual memory creates the page on disk and points its PTE at it; a subsequent miss will bring it into memory.
Locality to the Rescue Again! — Virtual memory seems terribly inefficient, but it works because of locality. At any point in time, programs tend to access a set of active virtual pages called the working set; programs with better temporal locality have smaller working sets. If (working set size < main memory size): good performance for one process after compulsory misses. If (SUM(working set sizes) > main memory size): thrashing — performance plummets as pages are swapped (copied) in and out continuously.
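The two conditions above amount to simple comparisons; this toy model (all page counts invented) makes the distinction concrete:

```python
# Toy model of the slide's rule of thumb: one working set that fits
# performs well; when the working sets *together* exceed main memory,
# the system thrashes.

def fits(ws_pages, mem_pages):
    """Good performance for one process after compulsory misses."""
    return ws_pages < mem_pages

def thrashes(all_ws_pages, mem_pages):
    """Pages swap in and out continuously when the sum doesn't fit."""
    return sum(all_ws_pages) > mem_pages

mem = 1024  # main memory, in pages (invented size)
```

Note the asymmetry: each individual working set can fit comfortably while their sum still thrashes the machine.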
VM as a Tool for Memory Management — Key idea: each process has its own virtual address space, so it can view memory as a simple linear array. The mapping function scatters addresses through physical memory, and well-chosen mappings can improve locality. [Diagram: process 1's VP 1 and VP 2 and process 2's VP 1 and VP 2 translate to distinct physical pages (PP 2, PP 8, ...), except for one shared page (e.g., read-only library code)]
VM as a Tool for Memory Management — Simplifying memory allocation: each virtual page can be mapped to any physical page, and a virtual page can be stored in different physical pages at different times. Sharing code and data among processes: map virtual pages in both processes to the same physical page (here: PP 6, e.g., read-only library code).
Simplifying Linking and Loading — Linking: each program has a similar virtual address space; code, data, and heap always start at the same addresses. Loading: execve allocates virtual pages for the .text and .data sections and creates PTEs marked as invalid; the .text and .data sections are then copied, page by page, on demand by the virtual memory system. [Diagram of the address space, from low to high: unused; read-only segment (.init, .text, .rodata) starting at 0x400000 and read/write segment (.data, .bss), both loaded from the executable file; run-time heap (created by malloc), bounded by brk; memory-mapped region for shared libraries; user stack (created at runtime), topped by %rsp (stack pointer); kernel virtual memory, invisible to user code]
VM as a Tool for Memory Protection — Extend PTEs with permission bits; the MMU checks these bits on each access. [Table: for each virtual page of processes i and j, SUP, READ, WRITE, and EXEC bits plus the mapped physical page — e.g., process i's VP 0 (PP 6) is readable and executable but not writable, while its VP 2 (PP 2) is supervisor-only]
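The MMU's check can be sketched as a couple of bit tests. The bit layout and function name here are illustrative, not a real PTE format:

```python
# Hedged sketch of the permission check the MMU performs on every access:
# each PTE carries SUP/READ/WRITE/EXEC bits that must match the access.

SUP, READ, WRITE, EXEC = 0b1000, 0b0100, 0b0010, 0b0001

def check_access(pte_perms, op, in_kernel_mode=False):
    """Return True if the access is legal, else False (the MMU would fault)."""
    if (pte_perms & SUP) and not in_kernel_mode:
        return False                 # supervisor-only page, user access denied
    return bool(pte_perms & op)

# Process i's VP 0 on the slide: SUP=No, READ=Yes, WRITE=No, EXEC=Yes.
vp0 = READ | EXEC
```

Reading or executing through `vp0` succeeds, writing fails, and any user-mode access to a SUP page fails regardless of its other bits.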
VM Address Translation — Virtual address space V = {0, 1, ..., N-1}; physical address space P = {0, 1, ..., M-1}. Address translation is a function MAP: V → P ∪ {∅}. For virtual address a: MAP(a) = a′ if the data at virtual address a is at physical address a′ in P; MAP(a) = ∅ if the data at virtual address a is not in physical memory (either invalid or stored on disk).
Summary of Address Translation Symbols — Basic parameters: N = 2^n, the number of addresses in the virtual address space; M = 2^m, the number of addresses in the physical address space; P = 2^p, the page size in bytes. Components of the virtual address (VA): VPO, virtual page offset; VPN, virtual page number. Components of the physical address (PA): PPO, physical page offset (same as VPO); PPN, physical page number.
Address Translation With a Page Table — The page table base register (PTBR; CR3 in x86) holds the physical page table address for the current process. The virtual address splits into the virtual page number (VPN, bits n-1 ... p) and the virtual page offset (VPO, bits p-1 ... 0). The VPN indexes the page table: valid bit = 0 means the page is not in memory (page fault); valid bit = 1 means the PTE supplies the physical page number (PPN, bits m-1 ... p), which is concatenated with the physical page offset (PPO = VPO, bits p-1 ... 0) to form the physical address.
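The bit surgery on this slide is easy to make concrete. In this sketch p = 6 (64-byte pages) and the example addresses are invented for illustration:

```python
# Splitting a virtual address into VPN and VPO, then gluing a PPN onto
# the unchanged offset, as the datapath on the slide does.

P_BITS = 6                          # p: log2(page size); 64-byte pages here

def split_va(va):
    vpn = va >> P_BITS              # high-order bits index the page table
    vpo = va & ((1 << P_BITS) - 1)  # low-order bits pass through unchanged
    return vpn, vpo

def make_pa(ppn, vpo):
    return (ppn << P_BITS) | vpo    # PPO == VPO

vpn, vpo = split_va(0x03D4)         # VA 0x03D4 -> VPN 0x0F, VPO 0x14
```

If the page table maps VPN 0x0F to, say, PPN 0x0D, the physical address is `make_pa(0x0D, 0x14)` = 0x354 — the offset bits are identical in VA and PA.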
Address Translation: Page Hit — 1) The processor sends the virtual address to the MMU. 2–3) The MMU fetches the PTE from the page table in memory (via the PTE address, PTEA). 4) The MMU sends the physical address to cache/memory. 5) Cache/memory sends the data word to the processor.
Address Translation: Page Fault — 1) The processor sends the virtual address to the MMU. 2–3) The MMU fetches the PTE from the page table in memory. 4) The valid bit is zero, so the MMU triggers a page fault exception. 5) The handler identifies a victim page (and, if dirty, pages it out to disk). 6) The handler pages in the new page and updates the PTE in memory. 7) The handler returns to the original process, restarting the faulting instruction.
Integrating VM and Cache — The MMU sends the PTE address (PTEA) to the L1 cache first: on a PTEA hit, the PTE comes from L1; on a PTEA miss, it comes from memory. The resulting physical address (PA) likewise goes to L1: on a PA hit, the data comes from L1; on a PA miss, from memory. (VA: virtual address, PA: physical address, PTE: page table entry, PTEA = PTE address)
Speeding up Translation with a TLB — Page table entries (PTEs) are cached in L1 like any other memory word, but PTEs may be evicted by other data references, and even a PTE hit still requires a small L1 delay. Solution: the Translation Lookaside Buffer (TLB) — a small set-associative hardware cache in the MMU that maps virtual page numbers to physical page numbers, holding complete page table entries for a small number of pages.
Summary of Address Translation Symbols — Basic parameters: N = 2^n, the number of addresses in the virtual address space; M = 2^m, the number of addresses in the physical address space; P = 2^p, the page size in bytes. Components of the virtual address (VA): TLBI, TLB index; TLBT, TLB tag; VPO, virtual page offset; VPN, virtual page number. Components of the physical address (PA): PPO, physical page offset (same as VPO); PPN, physical page number.
Accessing the TLB — The MMU uses the VPN portion of the virtual address to access the TLB. With T = 2^t sets, the virtual address divides into the TLB tag (TLBT, bits n-1 ... p+t), the TLB index (TLBI, bits p+t-1 ... p), and the VPO (bits p-1 ... 0): the TLBI selects the set, and the TLBT must match the tag of a line within that set.
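The index/tag carve-up is the same trick as VPN/VPO, one level up. Here t = 2 (a 4-set TLB) is an arbitrary example value:

```python
# How the MMU carves the VPN into a TLB index and tag: with T = 2**t sets,
# the low t bits of the VPN select the set and the remaining bits form
# the tag compared against the lines in that set.

T_BITS = 2                            # t: log2(number of TLB sets)

def tlb_index_and_tag(vpn):
    tlbi = vpn & ((1 << T_BITS) - 1)  # low t bits: set index
    tlbt = vpn >> T_BITS              # remaining bits: tag
    return tlbi, tlbt

tlbi, tlbt = tlb_index_and_tag(0x0F)  # VPN 0x0F -> set 3, tag 0x03
```

Using the low VPN bits as the index spreads consecutive pages across sets, which suits the sequential access patterns locality predicts.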
TLB Hit — 1) The processor sends the virtual address to the MMU. 2) The MMU sends the VPN to the TLB. 3) The TLB returns the PTE. 4) The MMU sends the physical address to cache/memory. 5) Cache/memory sends the data word to the processor. A TLB hit eliminates a memory access.
TLB Miss — 1) The processor sends the virtual address to the MMU. 2) The MMU sends the VPN to the TLB, which misses. 3) The MMU sends the PTE address (PTEA) to cache/memory. 4) Cache/memory returns the PTE, which is installed in the TLB. 5) The MMU sends the physical address to cache/memory. 6) Cache/memory sends the data word to the processor. A TLB miss incurs an additional memory access (for the PTE). Fortunately, TLB misses are rare. Why?
Multi-Level Page Tables — Suppose a 4 KB (2^12) page size, a 48-bit address space, and 8-byte PTEs. Problem: a single flat page table would need 512 GB (2^48 × 2^-12 × 2^3 = 2^39 bytes). Common solution: a multi-level page table. Example, a 2-level page table: each PTE in the level 1 table points to a level 2 page table (the level 1 table is always memory resident); each PTE in a level 2 table points to a page (level 2 tables are paged in and out like any other data).
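The slide's arithmetic, spelled out step by step:

```python
# With 48-bit virtual addresses, 4 KB (2**12) pages, and 8-byte PTEs,
# a single flat page table needs one PTE per virtual page:
# 2**48 / 2**12 entries of 2**3 bytes each = 2**39 bytes = 512 GB.

va_bits, page_bits, pte_bytes = 48, 12, 8
num_ptes = 2**va_bits // 2**page_bits  # one PTE per virtual page: 2**36
flat_table_bytes = num_ptes * pte_bytes  # 2**39 bytes
```

Per process, and mostly describing unallocated pages — which is why the table itself must be sparse and pageable.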
A Two-Level Page Table Hierarchy — [Diagram, for 64-bit addresses, 8 KB pages, and 8-byte PTEs: level 1 PTEs 0 and 1 point to level 2 tables mapping the 2K allocated VM pages for code and data (VP 0 ... VP 2047); level 1 PTEs 2 through 7 are null, covering a gap of 6K unallocated VM pages; level 1 PTE 8 points to a level 2 table holding (1K - 9) null PTEs plus one PTE for the single allocated VM page for the stack (VP 9215); the remaining 1023 level 1 PTEs are null]
Translating with a k-level Page Table — The virtual address divides into k VPN fields (VPN 1, VPN 2, ..., VPN k, bits n-1 ... p) and the VPO (bits p-1 ... 0). The page table base register (PTBR) locates the level 1 page table; VPN 1 indexes it to find a level 2 page table, and so on, until VPN k indexes a level k page table to yield the PPN. The physical address is the PPN (bits m-1 ... p) concatenated with the PPO = VPO (bits p-1 ... 0).
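The walk above can be sketched generically. The 2-level, 4-bits-per-level layout, the dict-of-dicts table model, and the example mapping are all invented for illustration:

```python
# Sketch of a k-level page-table walk: each level's VPN field indexes one
# table; only the last level yields a PPN, which is concatenated with the
# unchanged page offset.

LEVEL_BITS = [4, 4]              # bits of VPN consumed at levels 1..k
P_BITS = 6                       # page offset bits

def walk(root, va):
    vpn = va >> P_BITS
    # Split the VPN into per-level fields, lowest level extracted first.
    fields = []
    for bits in reversed(LEVEL_BITS):
        fields.append(vpn & ((1 << bits) - 1))
        vpn >>= bits
    node = root
    for idx in reversed(fields): # walk highest level first
        node = node[idx]         # a missing key would model a page fault
    return (node << P_BITS) | (va & ((1 << P_BITS) - 1))

# Level 1 entry 0 points at a level 2 table mapping VPN field 2 -> PPN 5.
root = {0: {2: 5}}
pa = walk(root, (0 << 10) | (2 << 6) | 0x2A)  # VPN1=0, VPN2=2, VPO=0x2A
```

Only the tables along the path of an access ever need to exist, which is the whole point of the multi-level scheme.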
Summary — Programmer's view of virtual memory: each process has its own private linear address space, which cannot be corrupted by other processes. System view of virtual memory: it uses memory efficiently by caching virtual memory pages (efficient only because of locality); it simplifies memory management and programming; and it simplifies protection by providing a convenient interpositioning point at which to check permissions.