CSE 361S: Virtual Memory

Motivations for Virtual Memory

Use Physical DRAM as a Cache for the Disk
- Address space of a process can exceed physical memory size
- Sum of address spaces of multiple processes can exceed physical memory

Simplify Memory Management
- Multiple processes resident in main memory
  - Each process with its own address space
- Only "active" code and data is actually in memory
- Allocate more memory to a process as needed

Provide Protection
- One process can't interfere with another
  - because they operate in different address spaces
- User process cannot access privileged information
  - different sections of address spaces have different permissions

Motivation #1: DRAM as a Cache for Disk

Full address space is quite large:
- 32-bit addresses: ~4,000,000,000 (4 billion) bytes
- 64-bit addresses: ~16,000,000,000,000,000,000 (16 quintillion) bytes

Disk storage is ~300X cheaper than DRAM storage
- 80 GB of DRAM: ~$33,000
- 80 GB of disk: ~$110

To access large amounts of data in a cost-effective manner, the bulk of the data must be stored on disk
- SRAM: 4 MB: ~$500
- DRAM: 1 GB: ~$200
- Disk: 80 GB: ~$110

Levels in Memory Hierarchy (larger, slower, cheaper going down; virtual memory spans the memory/disk boundary):

            size          speed    $/Mbyte     line size
  regs      32 B          1 ns                 8 B
  cache     32 KB-4 MB    2 ns     $125/MB     32 B
  memory    1024 MB       30 ns    $0.20/MB    4 KB
  disk      100 GB        8 ms     $0.001/MB

DRAM vs. SRAM as a "Cache"
- DRAM vs. disk is more extreme than SRAM vs. DRAM
- Access latencies:
  - DRAM ~10X slower than SRAM
  - Disk ~100,000X slower than DRAM
- Importance of exploiting spatial locality:
  - First byte is ~100,000X slower than successive bytes on disk
  - vs. ~4X improvement for page-mode vs. regular accesses to DRAM
- Bottom line: design decisions made for DRAM caches are driven by the enormous cost of misses

Impact of Properties on Design
If DRAM were to be organized similar to an SRAM cache, how would we set the following design parameters?
- Line size? Large, since disk is better at transferring large blocks
- Associativity? High, to minimize miss rate
- Write through or write back? Write back, since we can't afford to perform small writes to disk

What would the impact of these choices be on:
- miss rate: Extremely low.
  << 1%
- hit time: Must match cache/DRAM performance
- miss latency: Very high. ~20 ms
- tag storage overhead: Low, relative to block size
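The size arithmetic above can be checked directly. A small helper (the name `addr_space_bytes` is ours, not from the slides) computes the span of an n-bit address space:

```python
def addr_space_bytes(n: int) -> int:
    """Size in bytes of an address space with n-bit addresses."""
    return 1 << n

# 32-bit addresses: ~4 billion bytes; 64-bit: ~1.8 * 10^19 bytes.
print(addr_space_bytes(32))   # 4294967296
print(addr_space_bytes(64))   # 18446744073709551616
```

The same helper gives the page size for a p-bit offset, e.g. `addr_space_bytes(12)` is the 4 KB page size used later in the lecture.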
Locating an Object in a Cache (SRAM)
- Tag stored with cache line
- Maps from cache block to memory blocks
  - From cached to uncached form
  - Save a few bits by only storing tag
- No tag for blocks not in cache
- Hardware retrieves information
  - can quickly match against multiple tags

(Figure: object named X is located by comparing X against the tags stored alongside the cached data lines.)

Locating an Object in Virtual Memory (DRAM)
- Each allocated page of virtual memory has an entry in the page table
- Mapping from virtual pages to physical pages
  - From uncached form to cached form
- Page table entry exists even if the page is not in memory
  - Specifies disk address
  - Only way to indicate where to find the page
- OS retrieves information

(Figure: the page table maps object name X to its location, either a physical page or a disk address.)

A System with Physical Memory Only
- Examples: most Cray machines, early PCs, nearly all embedded systems, etc.
- Addresses generated by the CPU correspond directly to bytes in physical memory

A System with Virtual Memory
- Examples: workstations, servers, modern PCs, etc.
- Address Translation: hardware converts virtual addresses to physical addresses via an OS-managed lookup table (page table)

Page Faults (like "Cache Misses")
What if an object is on disk rather than in memory?
- Page table entry indicates virtual address not in memory
- OS exception handler invoked to move data from disk into memory
  - current process suspends, others can resume
  - OS has full control over placement, etc.

Servicing a Page Fault
- Processor signals controller
  - Read block of length P starting at disk address X and store starting at memory address Y
- Read occurs
  - Direct Memory Access (DMA)
  - Under control of I/O controller
- I/O controller signals completion
  - Interrupt processor
  - OS resumes suspended process

(Figure: (1) processor initiates block read over the memory-I/O bus; (2) I/O controller performs DMA transfer from disk into memory; (3) controller signals "read done" with an interrupt.)
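The fault-servicing sequence above can be sketched as a toy simulation. Everything here (class names, sizes, the lists standing in for disk and DRAM) is invented for illustration; a real handler also evicts pages and runs in the kernel:

```python
PAGE_SIZE = 64

class PTE:
    """Per-page metadata: valid bit, frame number, and disk location."""
    def __init__(self, disk_block):
        self.valid = False        # resident in a physical frame?
        self.frame = None         # frame number, meaningful when valid
        self.disk_block = disk_block

class ToyVM:
    """On a fault, copy the page from 'disk' into a free frame."""
    def __init__(self, npages, nframes):
        self.disk = [bytes(PAGE_SIZE) for _ in range(npages)]  # swap space
        self.memory = [None] * nframes                         # DRAM frames
        self.page_table = [PTE(i) for i in range(npages)]
        self.next_free = 0        # no eviction in this sketch
        self.faults = 0

    def access(self, vpn):
        pte = self.page_table[vpn]
        if not pte.valid:                     # page fault
            self.faults += 1
            frame = self.next_free            # placement chosen by the "OS"
            self.next_free += 1
            self.memory[frame] = self.disk[pte.disk_block]  # "DMA" copy
            pte.frame = frame
            pte.valid = True
        return self.memory[pte.frame]         # ordinary hit afterwards

vm = ToyVM(npages=8, nframes=4)
vm.access(3); vm.access(3)
print(vm.faults)   # 1 -- the second access hits
```

The key point the sketch shows: the PTE carries enough information (disk block) to service the miss, and after the fill the valid bit makes later accesses ordinary hits.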
Motivation #2: Memory Management
Multiple processes can reside in physical memory. How do we resolve address conflicts?
- what if two processes access something at the same address?

Linux/x86 process memory image (high addresses to low):
- kernel virtual memory (memory invisible to user code)
- stack (grows down from %esp)
- memory-mapped region for shared libraries
- runtime heap (via malloc; top marked by the brk ptr)
- uninitialized data (.bss)
- initialized data (.data)
- program text (.text)
- forbidden (low addresses)

Solution: Separate Virtual Address Spaces
- Virtual and physical address spaces divided into equal-sized blocks
  - blocks are called "pages" (both virtual and physical)
- Each process has its own virtual address space
  - operating system controls how virtual pages are assigned to physical memory

(Figure: address translation maps each process's virtual pages 0..N-1 into physical pages 0..M-1 of DRAM; two processes may map virtual pages to the same physical page, e.g. read-only library code.)

Motivation #3: Protection
- Page table entry contains access rights information
- hardware enforces this protection (trap into OS if violation occurs)

  Process i:  VP 0: Read? Yes  Write? No   Addr: PP 9
              VP 1: Read? Yes  Write? Yes  Addr: PP 4
              VP 2: Read? No   Write? No   Addr: XXXXXXX
  Process j:  VP 0: Read? Yes  Write? Yes  Addr: PP 6
              VP 1: Read? Yes  Write? No   Addr: PP 9
              VP 2: Read? No   Write? No   Addr: XXXXXXX

VM Address Translation
- Virtual Address Space: V = {0, 1, ..., N-1}
- Physical Address Space: P = {0, 1, ..., M-1}, M < N
- Address Translation: MAP: V -> P ∪ {∅}
  - For virtual address a:
    - MAP(a) = a' if data at virtual address a is at physical address a' in P
    - MAP(a) = ∅ if data at virtual address a is not in physical memory
      - Either invalid or stored on disk

VM Address Translation: Hit
- The processor sends virtual address a to the hardware address-translation mechanism (part of the on-chip memory management unit, MMU), which delivers physical address a' to main memory.

VM Address Translation: Miss
- A miss raises a page fault; the fault handler has the OS transfer the page in from secondary memory (only on a miss), then translation is retried.
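The MAP function and the per-page access-rights check can be modelled together in a few lines of Python. The table contents mirror the "Process i" example above; the names `mapv` and `FAULT` are our own, not the slides':

```python
FAULT = object()   # stands for ∅: address not in physical memory

# Hypothetical per-process page table: vpn -> (readable, writable, ppn)
page_table_i = {
    0: (True, False, 9),   # VP 0: read-only, resident in PP 9
    1: (True, True, 4),    # VP 1: read/write, resident in PP 4
}

def mapv(page_table, vpn, write=False):
    """MAP: V -> P ∪ {∅}, with the access-rights check folded in."""
    entry = page_table.get(vpn)
    if entry is None:
        return FAULT                  # not resident: page fault
    readable, writable, ppn = entry
    if write and not writable:
        raise PermissionError("protection fault")  # trap into the OS
    return ppn

print(mapv(page_table_i, 0))            # 9
print(mapv(page_table_i, 2) is FAULT)   # True
```

Writing VP 0 (`mapv(page_table_i, 0, write=True)`) raises the protection fault, which is exactly the trap-into-OS behavior the hardware enforces.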
VM Address Translation Parameters
- P = 2^p = page size (bytes)
- N = 2^n = Virtual address limit
- M = 2^m = Physical address limit

A virtual address splits into an (n-p)-bit virtual page number and a p-bit page offset; translation replaces the virtual page number with an (m-p)-bit physical page number. Page offset bits don't change as a result of translation.

Page Tables
- The virtual page number indexes a memory-resident page table whose entries hold either a physical page number or a disk address in backing storage (swap file or regular file system file).

Address Translation via Page Table
- Separate (set of) page table(s) per process
- A page table base register points at the current table; the virtual page number (VPN) forms an index into the page table (points to a page table entry)

Page Table Operation: Computing Physical Address
- The Page Table Entry (PTE) provides information about the page
  - if (valid bit = 1) then the page is in memory
    - Use the physical page number (PPN) to construct the physical address
  - if (valid bit = 0) then the page is on disk
    - Page fault

Page Table Operation: Checking Protection
- Access rights field indicates allowable access
  - e.g., read-only, read-write, execute-only
  - typically supports multiple protection modes (e.g., kernel vs. user)
- Protection violation fault if the user doesn't have the necessary permission
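The bit-level mechanics (split the virtual address at bit p, swap the VPN for a PPN, keep the offset) look like this in Python; the page-table contents here are made up for illustration:

```python
def translate(va, page_table, p):
    """Split a VA into (VPN, VPO), look up the PPN, rebuild the PA.

    p = log2(page size); the page offset is unchanged by translation.
    """
    vpn = va >> p                 # high n-p bits index the page table
    vpo = va & ((1 << p) - 1)     # low p bits pass through unchanged
    ppn = page_table[vpn]         # assume valid; a missing key would fault
    return (ppn << p) | vpo

# 4 KB pages (p = 12): VPN 0x2 maps to PPN 0x7 in this made-up table.
pt = {0x2: 0x7}
print(hex(translate(0x2ABC, pt, 12)))   # 0x7abc
```

Note that the low 12 bits (0xABC) survive translation intact, which is the "page offset bits don't change" property stated above.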
Integrating VM and Cache
- Most caches are "physically addressed"
  - Accessed by physical addresses
  - Allows multiple processes to have blocks in the cache at the same time
  - Allows multiple processes to share pages
  - The cache doesn't need to be concerned with protection issues
    - Access rights are checked as part of address translation
- Perform address translation before cache lookup
  - But this could involve a memory access itself (of the PTE)
  - Of course, page table entries can also become cached

Speeding up Translation with a TLB
- Translation Lookaside Buffer (TLB)
  - Small hardware cache in the MMU
  - Maps virtual page numbers to physical page numbers
  - Contains complete page table entries for a small number of pages
- On a TLB hit, the physical address comes straight from the TLB; on a TLB miss, the translation must walk the page table (and the result is installed in the TLB).

Address Translation with a TLB
- The virtual page number is split into a TLB tag and a TLB index; on a TLB hit the matching entry supplies the physical page number, which is combined with the page offset and sent to the (physically addressed) cache.

Simple Memory System Example
Addressing:
- 14-bit virtual addresses
- 12-bit physical addresses
- Page size = 64 bytes
- Virtual address: bits 13-6 are the virtual page number (VPN), bits 5-0 the virtual page offset (VPO); the high bits of the VPN form the TLB tag, the low bits the TLB index
- Physical address: bits 11-6 are the physical page number (PPN), bits 5-0 the physical page offset (PPO)

Simple Memory System: Page Table
- Only the first 16 entries are shown (VPN, PPN, valid bit for each). (The table values from the original slide are not reproduced here.)

Simple Memory System: TLB
- 16 entries, 4-way set associative (per set: tag, PPN, valid bit for each way). (The table values from the original slide are not reproduced here.)
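The TLB-then-page-table lookup can be sketched with the simple system's parameters (64-byte pages, a 16-entry 4-way TLB, hence 4 sets and a 2-bit TLB index). The mapping VPN 0x0F -> PPN 0x0D and all names here are chosen for illustration:

```python
P_BITS = 6          # 64-byte pages, as in the simple system example
TLB_SETS = 4        # 16 entries, 4-way set associative -> 4 sets

def split_vpn(va):
    """Return (TLB set index, TLB tag) for a virtual address."""
    vpn = va >> P_BITS
    return vpn % TLB_SETS, vpn // TLB_SETS

class TLB:
    """Tiny set-associative cache of page-table entries (a sketch)."""
    def __init__(self):
        self.sets = [dict() for _ in range(TLB_SETS)]  # tag -> ppn

    def lookup(self, va, page_table):
        idx, tag = split_vpn(va)
        if tag in self.sets[idx]:
            return self.sets[idx][tag]       # TLB hit: no table walk
        ppn = page_table[va >> P_BITS]       # miss: consult the page table
        self.sets[idx][tag] = ppn            # install (no eviction here)
        return ppn

tlb = TLB()
pt = {0x0F: 0x0D}                   # assume VPN 0x0F resident in PPN 0x0D
print(hex(tlb.lookup(0x03D4, pt)))  # first access walks the page table
```

For VA 0x03D4 the VPN is 0x0F, so the set index is 0x3 and the tag is 0x03; the second lookup of any address on the same page hits in the TLB without touching the page table.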
Simple Memory System: Cache
- 16 lines, 4-byte line size, direct mapped
- Physical address fields: cache tag (bits 11-6), cache index (bits 5-2), cache offset (bits 1-0)
- (The per-line valid/tag/byte contents from the original slide are not reproduced here.)

Address Translation Example #1
Virtual address 0x03D4
- VPN __  TLBI __  TLBT __  TLB hit? __  Page fault? __  PPN __
- Physical address: CO __  CI __  CT __  Cache hit? __  Byte: __

Address Translation Example #2
Virtual address 0x0B8F
- VPN __  TLBI __  TLBT __  TLB hit? __  Page fault? __  PPN __
- Physical address: CO __  CI __  CT __  Cache hit? __  Byte: __

Address Translation Example #3
Virtual address 0x0040
- VPN __  TLBI __  TLBT __  TLB hit? __  Page fault? __  PPN __
- Physical address: CO __  CI __  CT __  Cache hit? __  Byte: __

Multi-Level Page Tables
Given:
- 4 KB (2^12) page size
- 32-bit address space
- 4-byte PTE
Problem:
- Would need a 4 MB page table!
  - 2^20 PTEs * 4 bytes
Common solution:
- multi-level page tables
- e.g., 2-level table (P6)
  - Level 1 table: 1024 entries, each of which points to a Level 2 page table
  - Level 2 table: 1024 entries, each of which points to a page

Themes
Programmer's View
- Large "flat" address space
  - Can allocate large blocks of contiguous addresses
- Processor "owns" machine
  - Has private address space
  - Unaffected by behavior of other processes
System View
- User virtual address space created by mapping to a set of pages
  - Need not be contiguous
  - Allocated dynamically
  - Enforce protection during address translation
- OS manages many processes simultaneously
  - Continually switching among processes
  - Especially when one must wait for a resource
    - E.g., disk I/O to handle page fault
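The two-level lookup described under Multi-Level Page Tables can be sketched with dicts standing in for the 1024-entry tables (the 10/10/12 split of a 32-bit VA; all table contents are made up):

```python
def walk(va, l1_table):
    """Two-level page-table walk: 10-bit L1 index, 10-bit L2 index, 12-bit offset."""
    l1_idx = (va >> 22) & 0x3FF    # top 10 bits pick a Level 2 table
    l2_idx = (va >> 12) & 0x3FF    # next 10 bits pick a page within it
    offset = va & 0xFFF            # low 12 bits pass through
    l2_table = l1_table[l1_idx]    # a missing entry means no L2 table exists
    ppn = l2_table[l2_idx]
    return (ppn << 12) | offset

# Only one Level 2 table needs to exist to map this one page -- the point
# of the multi-level scheme: unused 4 MB regions cost no table space.
l1 = {1: {2: 0x55}}                # L1 slot 1, page 2 -> PPN 0x55 (made up)
va = (1 << 22) | (2 << 12) | 0x123
print(hex(walk(va, l1)))           # 0x55123
```

Compare the space cost: a flat table always needs 2^20 PTEs, while here a process touching a single 4 MB region allocates just one L1 table and one L2 table.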