Virtual Memory. Kevin Webb Swarthmore College March 8, 2018


Today's Goals: Describe the mechanisms behind address translation. Analyze the performance of address translation alternatives. Explore page replacement policies for disk swapping.

Address Translation: Wish List. (Figure: per-process address spaces (text, data, heap, stack) and several processes, plus shared libc code, resident in physical memory.) A combination of hardware and OS, working together; the hardware piece is the MMU, the Memory Management Unit. Goals: map virtual addresses to physical addresses; allow multiple processes to be in memory at once, but isolate them from each other; determine which subset of data to keep in memory / move to disk; allow the same physical memory to be mapped in multiple process address spaces; make it easier to perform placement in a way that reduces fragmentation; map addresses quickly with a little hardware help.

Simple (Unrealistic) Translation Example. Process P2's virtual addresses don't align with physical memory's addresses. Determine the offset from physical address 0 to the start of P2, store it in a base register, and add the base to every virtual address. (Figure: P2's virtual address space, 0 to P2_max, placed as one contiguous region within physical memory, 0 to Phy_max, alongside P1 and P3.)

Generalizing. Problem: a process may not fit in one contiguous region. (Figure: P2's address space split into several pieces, each needing its own "Base? +" relocation into a different region of physical memory.)

Generalizing. Problem: a process may not fit in one contiguous region. Solution: keep a table (one for each process); keep the details for each region in a row; store additional metadata (e.g., permissions such as R,X for code, R for read-only data, R,W for writable data). Interesting questions: How many regions should there be (and what size)? How do we determine which row to use?

Defining Regions - Two Approaches Segmentation: Partition address space and memory into segments Segments have varying sizes Paging: Partition address space and memory into pages Pages are a constant, fixed size

Fragmentation. Internal (within an allocation): a process asks for memory but doesn't use it all. Possible reasons: the process was wrong about its needs, or the OS gave it more than it asked for. External (unused memory between allocations): over time, we end up with small gaps that become more difficult to use (and are eventually wasted).

Which scheme is better for reducing internal and external fragmentation? A. Segmentation is better than paging for both forms of fragmentation. B. Segmentation is better for internal fragmentation, and paging is better for external fragmentation. C. Paging is better for internal fragmentation, and segmentation is better for external fragmentation. D. Paging is better than segmentation for both forms of fragmentation.

Which would you use? Why? Pros/Cons? A. Segmentation: Partition address space and memory into segments Segments have varying sizes B. Paging: Partition address space and memory into pages Pages are a constant, fixed size C. Something else (what?)

Segmentation vs. Paging. A segment is a good logical unit of information: it can be sized to fit any contents; it's easy to share large regions (e.g., code, data); protection requirements correspond to logical data segments. A page is a good physical unit of information: simple physical memory placement; no external fragmentation; constant sizes make it easier for hardware to help.


For both segmentation and paging: each process gets a table to track memory address translations. When a process attempts to read/write memory: use the high-order bits of the virtual address to determine which row of the table to look at; use the low-order bits of the virtual address to determine an offset within the physical region.
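That high/low split is just bit arithmetic. A minimal sketch, assuming (for illustration only) a 32-bit virtual address with a 20-bit table index and a 12-bit offset; the real field widths depend on the scheme:

```python
# Split a 32-bit virtual address into a table index (high-order bits)
# and an offset (low-order bits). Field widths are illustrative.
OFFSET_BITS = 12                      # low-order bits: offset within region
OFFSET_MASK = (1 << OFFSET_BITS) - 1  # 0xFFF

def split_vaddr(vaddr):
    index = vaddr >> OFFSET_BITS      # which row of the table to use
    offset = vaddr & OFFSET_MASK      # offset within the physical region
    return index, offset

index, offset = split_vaddr(0x12345678)
print(hex(index), hex(offset))        # 0x12345 0x678
```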

Address Translation. (Figure: the upper bits of the virtual address select a row of the table, each row holding metadata, a physical location, and permissions; the lower bits supply the offset. Together they form the physical address used to access physical memory.)

Performance Implications. Without VM: go directly to the address in memory. With VM: first do a lookup in memory (the table) to determine which address to use. Concept: a level of indirection.

Defining Regions - Two Approaches Segmentation: Partition address space and memory into segments Segments have varying sizes Paging: Partition address space and memory into pages Pages are a constant, fixed size

Segment Table. One table per process. Where is the table located in memory? Two registers point at it: the segment table base register (STBR) and the segment table size register (STSR). Table entry elements: V: valid bit (does it contain a mapping?); Base: segment location in physical memory; Bound: segment size in physical memory; Perm: permissions.

Address Translation. Virtual address = [Segment s | Offset i]. Physical address = base of s + i. But first, do a series of checks.

Check if Segment s is Within Range. Check s < STSR: the segment number must index a row that actually exists in the table.

Check if Segment Entry s is Valid. Check V == 1: the entry must contain a mapping.

Check if Offset i is Within Bounds. Check i < Bound: the offset must fall inside the segment's size.

Check if the Operation is Permitted. Check Perm(op): e.g., a write requires the W permission.

Translate the Address. Physical address = Base + i.
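The checks above can be sketched as one function. This is a minimal software model, not hardware; the entry layout (valid, base, bound, perms) and the SegFault error are illustrative assumptions:

```python
# Minimal model of segmented address translation with all checks.
# Each table entry: (valid, base, bound, perms). Names are illustrative.
class SegFault(Exception):
    pass

def translate(seg_table, stsr, s, i, op):
    if s >= stsr:                        # 1. segment number within range?
        raise SegFault("segment out of range")
    valid, base, bound, perms = seg_table[s]
    if not valid:                        # 2. entry contains a mapping?
        raise SegFault("invalid entry")
    if i >= bound:                       # 3. offset within bounds?
        raise SegFault("offset out of bounds")
    if op not in perms:                  # 4. operation permitted?
        raise SegFault("permission denied")
    return base + i                      # 5. physical address = Base + i

table = [(1, 0x4000, 0x1000, "rx"),      # segment 0: code
         (1, 0x9000, 0x0800, "rw")]      # segment 1: data
print(hex(translate(table, 2, 1, 0x10, "w")))   # 0x9010
```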

Sizing the Segment Table. Virtual address = [Segment s | Offset i]. The number of bits n in the segment field sets the max size of the table: number of entries = 2^n. The number of offset bits sets the max segment size. The Base field needs enough bits to address physical memory; the Bound field needs enough bits to specify the max segment size. Helpful reminder: 2^10 => kilobyte, 2^20 => megabyte, 2^30 => gigabyte.

Example of Sizing the Segment Table. Segment s: 5 bits; Offset i: 27 bits. Given: 32-bit virtual address space, 1 GB physical memory (max); 5-bit segment number, 27-bit offset.

Given a 5-bit segment address, a 32-bit logical address, and 1 GB physical memory: how many entries (rows) will we have in our segment table? A. 32: the logical address size is 32 bits. B. 32: the segment address is five bits. C. 30: we need to address 1 GB of physical memory. D. 27: we need to address up to the maximum offset.

Example of Sizing the Segment Table. Segment s: 5 bits; Offset i: 27 bits. 5 bits address 2^5 = 32 entries. Given: 32-bit virtual address space, 1 GB physical memory (max); 5-bit segment number, 27-bit offset.

How many bits do we need for the base? Segment s: 5 bits; Offset i: 27 bits; 5 bits address 2^5 = 32 entries. A. 30 bits, to address 1 GB of physical memory. B. 5 bits, because we have 32 rows in the segment table. C. 27 bits, to address any potential offset value.

Answer: A. 30 bits, to address 1 GB of physical memory: the base must be able to locate a segment anywhere in the 2^30 bytes of physical memory.

Example of Sizing the Segment Table. Segment s: 5 bits; Offset i: 27 bits. 5 bits address 2^5 = 32 entries; 30 bits are needed to address 1 GB. Given: 32-bit virtual address space, 1 GB physical memory (max); 5-bit segment number, 27-bit offset.

How many bits do we need for the bound? Segment s: 5 bits; Offset i: 27 bits; 5 bits address 2^5 = 32 entries; 30 bits are needed to address 1 GB. A. 5 bits: the size of the segment portion of the virtual address. B. 27 bits: the size of the offset portion of the virtual address. C. 32 bits: the size of the virtual address.

Answer: B. 27 bits: the size of the offset portion of the virtual address, which caps a segment at 2^27 bytes = 128 MB (max).

Example of Sizing the Segment Table. Segment s: 5 bits; Offset i: 27 bits. 5 bits address 2^5 = 32 entries; 30 bits are needed to address 1 GB; 27 bits are needed to express sizes up to 128 MB. Given: 32-bit logical addresses, 1 GB physical memory (max); 5-bit segment number, 27-bit offset.

Example of Sizing the Segment Table. Segment s: 5 bits; Offset i: 27 bits. 5 bits address 2^5 = 32 entries; 30 bits are needed to address 1 GB; 27 bits are needed to express sizes up to 128 MB. Each entry holds 61 bits (1 valid + 30 base + 27 bound + 3 perm + ...), so 8 bytes are needed per entry. Total table size? Given: 32-bit logical addresses, 1 GB physical memory (max); 5-bit segment number, 27-bit offset.

Example of Sizing the Segment Table. Segment s: 5 bits; Offset i: 27 bits. 5 bits address 2^5 = 32 entries; 30 bits are needed to address 1 GB; 27 bits are needed to express sizes up to 128 MB; 8 bytes contain the 61 (1+30+27+3+...) bits of an entry. Table size = 32 x 8 = 256 bytes. Given: 32-bit logical addresses, 1 GB physical memory (max); 5-bit segment number, 27-bit offset.

Pros and Cons of Segmentation. Pro: each segment can be located independently, separately protected, and grown/shrunk independently. Pro: small segment table size. Con: variable-size allocation makes it difficult to find holes in physical memory, causing external fragmentation.

Defining Regions - Two Approaches Segmentation: Partition address space and memory into segments Segments have varying sizes Paging: Partition address space and memory into pages Pages are a constant, fixed size

Paging Vocabulary. For each process, the virtual address space is divided into fixed-size pages. For the system, physical memory is divided into fixed-size frames. The size of a page equals that of a frame, often 4 KB in practice. (Some CPUs allow small and large pages at the same time.)

Page Table. One table per process; the table's parameters live in registers: the page table base register (PTBR) and the page table size register (PTSR). Table entry elements: V: valid bit; R: referenced bit; D: dirty bit; Frame: location in physical memory; Perm: access permissions.

Address Translation. Virtual address = [Page p | Offset i]. Physical address = frame of p combined with offset i. But first, do a series of checks.

Check if Page p is Within Range. Check p < PTSR.

Check if Page Table Entry p is Valid. Check V == 1.

Check if the Operation is Permitted. Check Perm(op).

Translate the Address. Physical address = concat(Frame, Offset i).

Physical Address by Concatenation. Physical address = [Frame f | Offset i]. Frames are all the same size, so we only need to store the frame number in the table, not an exact byte address!
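Because pages and frames share a power-of-two size, the "concatenation" is just a shift and an OR. A sketch assuming 4 KB pages (12 offset bits) and an illustrative entry layout:

```python
# Paged translation: frame number from the table, offset carried through.
OFFSET_BITS = 12                          # 4 KB pages and frames

def translate(page_table, ptsr, vaddr, op):
    p = vaddr >> OFFSET_BITS              # page number (upper bits)
    i = vaddr & ((1 << OFFSET_BITS) - 1)  # offset (lower bits)
    assert p < ptsr                       # page within range?
    valid, r, d, frame, perms = page_table[p]
    assert valid and op in perms          # entry valid? op permitted?
    return (frame << OFFSET_BITS) | i     # concat(Frame, Offset)

# Page 0 maps to frame 0x2A, readable. (Entry layout is illustrative.)
pt = [(1, 0, 0, 0x2A, "r")]
print(hex(translate(pt, 1, 0x0123, "r")))   # 0x2a123
```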

Sizing the Page Table. Virtual address = [Page p | Offset i]. The number of bits n in the page field sets the max size of the table: number of entries = 2^n. The number of offset bits sets the page size. The Frame field needs enough bits to address physical memory in units of frames.

Example of Sizing the Page Table. Page p: 20 bits; Offset i: 12 bits. Given: 32-bit virtual addresses, 1 GB physical memory. Address partition: 20-bit page number, 12-bit offset.


How many entries (rows) will there be in this page table? A. 2^12, because that's how many the offset field can address. B. 2^20, because that's how many the page field can address. C. 2^30, because that's how many we need to address 1 GB. D. 2^32, because that's the size of the entire address space. Given: 32-bit virtual addresses, 1 GB physical memory; 20-bit page number, 12-bit offset.

Example of Sizing the Page Table. Page p: 20 bits; Offset i: 12 bits. 20 bits address 2^20 = 1 M entries. Given: 32-bit virtual addresses, 1 GB physical memory; 20-bit page number, 12-bit offset.

Example of Sizing the Page Table. Page p: 20 bits; Offset i: 12 bits. 20 bits address 2^20 = 1 M entries. How big is a frame? Given: 32-bit virtual addresses, 1 GB physical memory; 20-bit page number, 12-bit offset.

What will be the frame size, in bytes? A. 2^12, because that's how many bytes the offset field can address. B. 2^20, because that's how many bytes the page field can address. C. 2^30, because that's how many bytes we need to address 1 GB. D. 2^32, because that's the size of the entire address space. Given: 32-bit virtual addresses, 1 GB physical memory; 20-bit page number, 12-bit offset.

Example of Sizing the Page Table. Page p: 20 bits; Offset i: 12 bits. 20 bits address 2^20 = 1 M entries. Page size = frame size = 2^12 = 4096 bytes. Given: 32-bit virtual addresses, 1 GB physical memory; 20-bit page number, 12-bit offset.

How many bits do we need to store the frame number? Page p: 20 bits; Offset i: 12 bits; 20 bits address 2^20 = 1 M entries; page size = frame size = 2^12 = 4096 bytes. A: 12, B: 18, C: 20, D: 30, E: 32. Given: 32-bit virtual addresses, 1 GB physical memory; 20-bit page number, 12-bit offset.

Example of Sizing the Page Table. Page p: 20 bits; Offset i: 12 bits. 20 bits address 2^20 = 1 M entries; 18 bits address the 2^30 / 2^12 frames; page size = frame size = 2^12 = 4096 bytes. Size of an entry? Given: 32-bit virtual addresses, 1 GB physical memory; 20-bit page number, 12-bit offset.

Example of Sizing the Page Table. Page p: 20 bits; Offset i: 12 bits. 20 bits address 2^20 = 1 M entries; 18 bits address the 2^30 / 2^12 frames; page size = frame size = 2^12 = 4096 bytes; 4 bytes contain the 24 (1+1+1+18+3+...) bits of an entry. Total table size? Given: 32-bit virtual addresses, 1 GB physical memory; 20-bit page number, 12-bit offset.

Example of Sizing the Page Table. Page p: 20 bits; Offset i: 12 bits. 20 bits address 2^20 = 1 M entries; 18 bits address the 2^30 / 2^12 frames; page size = frame size = 2^12 = 4096 bytes; 4 bytes contain the 24 (1+1+1+18+3+...) bits of an entry. Table size = 1 M x 4 = 4 MB. 4 MB of bookkeeping for every process? 200 processes -> 800 MB just to store page tables!
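The same sizing arithmetic, checked directly (the 1+1+1+18+3 bit tally follows the slide's entry fields):

```python
# Single-level page-table sizing: 32-bit VA, 4 KB pages, 1 GB physical memory.
page_bits, offset_bits, phys_bits = 20, 12, 30
entries     = 2 ** page_bits                  # 1,048,576 rows (1 M)
frame_bits  = phys_bits - offset_bits         # 2^30 / 2^12 frames -> 18 bits
entry_bits  = 1 + 1 + 1 + frame_bits + 3      # V + R + D + frame + perm = 24
entry_bytes = 4                               # 24 bits rounded up to 4 bytes
table_mb    = entries * entry_bytes // 2**20  # 4 MB per process
print(frame_bits, entry_bits, table_mb, 200 * table_mb)   # 18 24 4 800
```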

Pros and Cons of Paging. Pro: fixed-size pages and frames mean no external fragmentation and no difficult placement decisions. Con: large table size. Con: possible internal fragmentation.

x86: Hybrid Approach. Design: multiple lookups, first in a segment table, which points to a page table; an extra level of indirection. (Figure: VM -> segment table -> page tables -> PM.) Reality: all segments are max physical memory size, so segments are effectively unused, available for legacy reasons; they (mostly) disappeared in x86-64.

Outstanding Problems. Mostly considering paging from here on. 1. Page tables are way too big: most processes don't need that many pages and can't justify a huge table. 2. Adding indirection hurts performance. Solution: MORE indirection!

Multi-Level Page Tables. Virtual address = [1st-level page d | 2nd-level page p | Offset i]. Insight: the address space is typically sparsely populated. Idea: every process gets a page directory (the 1st-level table), whose entries point to the (base) frames containing 2nd-level page tables; the 2nd-level entry's frame is concatenated with the offset to form the physical address. Only allocate a 2nd-level table when the process is actually using that region of its address space!

Multi-Level Page Tables. (Figure: the same two-level walk, shown against an address space's OS, text, data, heap, and stack regions; only the regions in use have 2nd-level tables.)

Multi-Level Page Tables. With only a single level, the page table must be large enough for the largest processes. A multi-level table adds an extra level of indirection: worse performance (more memory accesses), but much better memory efficiency, since a process's page-table space is proportional to how much of its address space it's using. Small process -> low page-table storage. Large process -> high page-table storage, but it needed that anyway.
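A sketch of the two-level walk, assuming a 10/10/12-bit split (as in classic 32-bit x86 paging); unused directory slots hold no 2nd-level table at all, which is where the memory savings come from:

```python
# Two-level page-table walk: directory index, table index, offset.
D_BITS, P_BITS, OFF_BITS = 10, 10, 12

def walk(directory, vaddr):
    d   = vaddr >> (P_BITS + OFF_BITS)               # 1st-level index
    p   = (vaddr >> OFF_BITS) & ((1 << P_BITS) - 1)  # 2nd-level index
    off = vaddr & ((1 << OFF_BITS) - 1)
    second = directory[d]           # None: this AS region is unallocated
    assert second is not None and second.get(p) is not None
    return (second[p] << OFF_BITS) | off

# Sparse address space: only directory slot 0 has a 2nd-level table,
# and that table maps page 1 -> frame 7. All other slots cost nothing.
directory = [None] * 1024
directory[0] = {1: 7}
print(hex(walk(directory, 0x00001ABC)))   # 0x7abc
```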

Translation Cost. Each application memory access now requires multiple physical accesses! Suppose memory takes 100 ns to access: one-level paging costs 200 ns per access; two-level paging costs 300 ns. Solution: add hardware and take advantage of locality. Most references are to a small number of pages, so keep those translations in high-speed memory.

Memory Management Unit. (Figure: per-process address spaces mapped into physical memory, as before.) A combination of hardware and OS, working together; in hardware, the MMU: Memory Management Unit. When a process tries to use memory, the address is sent to the MMU, which does as much work as it can. If it knows the answer, great! If it doesn't, it triggers an exception (the OS gets control) and the software table is consulted.

Translation Look-aside Buffer (TLB). A fast mapping cache inside the MMU that keeps the most recent translations. The lookup key is the page number: p, or [page d, page p], or [segment s, page p]. If the key matches, get the frame number quickly; otherwise, wait for the normal translation (performed in parallel). Result: [Frame f | Offset i].

Recall: Context Switching Performance. Even though it's fast, context switching is expensive: 1. time spent is 100% overhead; 2. it must invalidate other processes' resources (caches, memory mappings); 3. the kernel must execute, so it must be accessible in memory. Also recall the advantage of threads: threads all share one process address space. (Figure: one address space from 0x0 to 0xFFFFFFFF holding the operating system, text, data, heap, and stack.)

Translation Cost with a TLB. Cost is determined by: speed of memory (~100 ns), speed of the TLB (~10 ns), and the hit ratio (fraction of memory references satisfied by the TLB, ~95%). Speed to access memory with no address translation: 100 ns. Speed to access memory with address translation (2-level paging): TLB miss: 300 ns (200% slowdown); TLB hit: 110 ns (10% slowdown). Average: 110 x 0.95 + 300 x 0.05 = 119.5 ns.
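The effective-access-time calculation on this slide, checked directly (on a miss, the TLB lookup overlaps the table walk, so the miss path is three plain memory accesses):

```python
# Effective access time with a TLB, for 2-level paging.
MEM, TLB  = 100, 10          # ns per memory access / per TLB lookup
hit_ratio = 0.95
hit_time  = TLB + MEM        # 110 ns: TLB lookup + the actual access
miss_time = 3 * MEM          # 300 ns: two table levels + the actual access
eat = hit_ratio * hit_time + (1 - hit_ratio) * miss_time
print(eat)                   # 119.5
```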

TLB Design Issues. The larger the TLB: the higher the hit rate, but the slower the response, the greater the expense, and the larger the space required (in the MMU, on chip). The TLB has a major effect on performance! It must be flushed on context switches; an alternative is tagging entries with PIDs.

Summary. Many options for the translation mechanism: segmentation, paging, hybrids, multi-level paging; all of them add level(s) of indirection. The simplicity of paging makes it the most common today. Multi-level page tables improve memory efficiency: page-table bookkeeping scales with how much of its address space a process uses. A TLB in the hardware MMU exploits locality to improve performance.