Memory Management. Disclaimer: some slides are adopted from the book authors' slides with permission.


Roadmap. CPU management: process, thread, synchronization, scheduling. Memory management: virtual memory. Disk management. Other topics.

Memory Management Goals. An easy-to-use abstraction: the same virtual memory space for every process. Isolation among processes: they must not corrupt each other. Efficient use of capacity-limited physical memory: don't waste memory.

Concepts to Learn. Virtual address translation. Paging and the TLB. Page table management. Swap.

Virtual Memory (VM). The abstraction: a 4GB linear address space for each process. The reality: 1GB of actual physical memory, shared with 20 other processes. How?

Virtual Memory. Hardware support: the MMU (memory management unit) and the TLB (translation lookaside buffer). OS support: manage the MMU (and sometimes the TLB) and determine the address mapping. Alternatives: no VM at all; many real-time OSes (RTOS) don't have VM.

[Figure: processes A, B, and C issue virtual addresses that the MMU maps onto physical memory.]

MMU. A hardware unit that translates virtual addresses to physical addresses. [Figure: CPU → virtual address → MMU → physical address → memory.]

A Simple MMU. BaseAddr: a base register. Paddr = Vaddr + BaseAddr. Advantages: fast. Disadvantages: no protection, and wasteful. [Figure: P1, P2, P3 loaded contiguously at bases 14000 and 28000.]

A Better MMU: the base + limit approach. If Vaddr > limit, trap to report an error; else Paddr = Vaddr + BaseAddr. Advantages: supports protection and variable-size partitions. Disadvantages: fragmentation.

Fragmentation. External fragmentation: enough total memory exists to satisfy a request, but it is not contiguous. [Figure: after P2 and P4 are freed, the free holes around P1 and P3 cannot hold P5 contiguously.]

Modern MMU: the paging approach. Divide physical memory into fixed-size blocks called frames (e.g., 4KB each). Divide logical memory into blocks of the same size called pages (page size = frame size). Pages are mapped onto frames via a table: the page table.

Modern MMU. [Figure: paging hardware.]

Modern MMU. [Figure: memory view under paging.]

Virtual Address Translation. Virtual address 0x12345678: page # 0x12345, offset 0x678. If the page maps to frame # 0xabcde, the physical address is the frame # followed by the offset: 0xabcde678.

Advantages of Paging. No external fragmentation; efficient use of memory. Internal fragmentation (waste within a page) still exists.

Issues of Paging: translation speed. Every load/store instruction requires a translation, the page table is stored in memory, and memory is slow to access: ~100 CPU cycles to reach DRAM.

Translation Lookaside Buffer (TLB). Caches frequent address translations so that the CPU doesn't need to access the page table all the time. Much faster.

Issues of Paging: page size. Small pages minimize wasted space but require a large table; big pages can waste lots of space but keep the table small. Typical size: 4KB. How many pages are needed for 4GB (32-bit)? 4GB / 4KB = 1M pages. What is the required page table size, assuming one page table entry (PTE) is 4 bytes? 1M * 4 bytes = 4MB. And this is per process: what if you have 100 processes? Or a 64-bit address space?

Paging. Advantages: no external fragmentation. Two main issues: translation can be slow (addressed by the TLB), and the table is big.

Multi-level Paging. [Figure: two-level paging.]

Two-Level Address Translation. The virtual address is split into a 1st-level index, a 2nd-level index, and an offset. A base pointer locates the 1st-level page table; its entry locates a 2nd-level table; that entry supplies the frame #, and the physical address is the frame # plus the offset. [Figure]

Example. Virtual address format (24 bits): 8-bit 1st-level index, 8-bit 2nd-level index, 8-bit offset. Vaddr 0x0703FE: 1st-level idx 0x07, 2nd-level idx 0x03, offset 0xFE. Vaddr 0x072370: 1st-level idx? 2nd-level idx? offset? Vaddr 0x082370: 1st-level idx? 2nd-level idx? offset?

Multi-level Paging. It can save table space. How, and why? You don't need to create all the mappings: entries in the outer page table for unused regions can simply be left empty.

MMU Summary. The MMU maps virtual addresses to physical addresses. Various designs are possible, but the paged MMU dominates: memory is divided into fixed-size pages, a page table stores the translations, and there is no external fragmentation.

Summary. Paged MMU: two main issues. Translation can be slow: use a TLB. The table is big: use a multi-level page table.

Quiz. What is the minimum page table size for a process that uses only 4MB of memory space? Assume a PTE is 4B. One level (20-bit index + 12-bit offset): 4 * 2^20 = 4MB. Two levels (10-bit 1st-level index, 10-bit 2nd-level index, 12-bit offset): 4 * 2^10 + 4 * 2^10 = 8KB.

Quiz. What is the page table size for a process that uses only 8MB of memory? Common assumptions: 32-bit address space, 4KB page size. Case 1) one-level page table, 4-byte PTEs: 2^20 x 4 bytes = 4MB. Case 2) two-level page table, with the first 10 bits indexing the first level and the next 10 bits the second level, 4-byte PTEs at both levels: 2^10 x 4 + 2 x (2^10 x 4) = 4KB + 8KB = 12KB.

Quiz. What is the page table size for a process that uses only 16MB of memory? Common assumptions: 32-bit address space, 4KB page size. Case 1) one-level page table, 4-byte PTEs: 2^20 x 4 bytes = 4MB. Case 2) two-level page table, same 10/10 split, 4-byte PTEs at both levels: 2^10 x 4 + 4 x (2^10 x 4) = 4KB + 16KB = 20KB.

Concepts to Learn. Demand paging.

Virtual Memory (VM). The abstraction: a 4GB linear address space for each process. The reality: 1GB of actual physical memory, shared with 20 other processes. Does each process use (1) the entire virtual memory (2) all the time?

Demand Paging. Idea: instead of keeping all of a process's pages in memory all the time, keep only part of them there, on an on-demand basis.

Page Table Entry (PTE). PTE format (architecture specific): V (1 bit), M (1 bit), R (1 bit), P (2 bits), page frame number (20 bits). Valid bit (V): whether the page is in memory. Modify bit (M): whether the page has been modified. Reference bit (R): whether the page has been accessed. Protection bits (P): readable, writable, executable.

Partial Memory Mapping. Not all pages are in memory (i.e., have valid = 1). [Figure]

Page Fault. When a virtual address cannot be translated to a physical address, the MMU generates a trap to the OS. Page fault handling procedure: Step 1: allocate a free page frame. Step 2: bring in the stored page from disk (if necessary). Step 3: update the PTE (mapping and valid bit). Step 4: restart the instruction.

Page Fault Handling. [Figure]

Demand Paging. [Figure]

Starting Up a Process. [Figure: address space with stack, heap, data, and code regions; all pages initially unmapped.]

Starting Up a Process. The CPU accesses the next instruction in the code region.

Starting Up a Process. The access triggers a page fault on the unmapped code page.

Starting Up a Process. The OS 1) allocates a free page frame, 2) loads the missing page from disk (the executable file), and 3) updates the page table entry.

Starting Up a Process. Over time, more pages are mapped in as needed.

Anonymous Page. An executable file contains the code (binary), so code pages can be read from the executable file. What about the heap? It has no backing storage (unless it is swapped out later): simply map a new free page, an anonymous page, into the address space.

Program Binary Sharing. Multiple instances of the same program (e.g., 10 bash shells) can share one physical copy of the program text. [Figure: Bash #1 and Bash #2 both map the same bash text in physical memory.]

Recap. Multi-level paging: instead of a single big table, many smaller tables, which saves space. Demand paging: map memory dynamically over time, keeping only the necessary pages on an on-demand basis. Page fault handling: happens when the CPU tries to access an unmapped address.

Concepts to Learn. Page replacement policy. Thrashing.

Memory Size Limit? Demand paging gives the illusion of infinite memory. [Figure: processes A, B, and C each see 4GB address spaces, backed by 1GB of physical memory and a 500GB disk via the TLB, MMU, and page tables.]

Illusion of Infinite Memory. Demand paging allows more memory to be allocated than the size of physical memory: it uses memory as a cache of the disk. What to do when memory is full? On a page fault there is no free page frame, so some page must go (be evicted).

Recap: Page Fault. On a page fault: Step 1: allocate a free page frame. Step 2: bring in the stored page from disk (if necessary). Step 3: update the PTE (mapping and valid bit). Step 4: restart the instruction.

Page Replacement Procedure. On a page fault: Step 1: allocate a free page frame. If there is a free frame, use it; if not, choose a victim frame and evict it to disk (if necessary): swap-out. Step 2: bring in the stored page from disk (if necessary). Step 3: update the PTE (mapping and valid bit). Step 4: restart the instruction.

Page Replacement Procedure. [Figure]

Page Replacement Policy. Which page (the victim page) should go? What if the evicted page is needed soon? A page fault occurs and the page must be re-loaded. This is an important decision for performance: the cost of choosing the wrong page is very high, since it means disk accesses.

Page Replacement Policies. FIFO (First In, First Out): evict the oldest page first. Pros: fair. Cons: can throw out frequently used pages. Optimal: evict the page that will not be used for the longest period. Pros: optimal. Cons: you need to know the future.

Page Replacement Policies. Random: randomly choose a page. Pros: simple; TLBs commonly use this method. Cons: unpredictable. LRU (Least Recently Used): look at past history and choose the page that has not been used for the longest period. Pros: good performance. Cons: complex, and requires hardware support.

LRU Example. [Figure]

Recap: Demand Paging. Idea: instead of keeping all of a process's pages in memory all the time, keep only part of them there, on an on-demand basis.

Recap: Page Fault Handling. [Figure]

Recap: Page Replacement Procedure. On a page fault: Step 1: allocate a free page frame. If there is a free frame, use it; if not, choose a victim frame and evict it to disk (if necessary): swap-out. Step 2: bring in the stored page from disk (if necessary). Step 3: update the PTE (mapping and valid bit). Step 4: restart the instruction.

Example. Complete the following with the FIFO, Optimal, and LRU replacement policies, respectively (3 frames; X marks a fault).

Ref:     E D H B D E D A E B E
Frame 1: E E E
Frame 2:   D D
Frame 3:     H
Fault:   X X X

FIFO:

Ref:     E D H B D E D A E B E
Frame 1: E E E B B B B A A A A
Frame 2:   D D D D E E E E B B
Frame 3:     H H H H D D D D E
Fault:   X X X X   X X X   X X

Optimal:

Ref:     E D H B D E D A E B E
Frame 1: E E E E E E E E E E E
Frame 2:   D D D D D D A A A A
Frame 3:     H B B B B B B B B
Fault:   X X X X       X

LRU:

Ref:     E D H B D E D A E B E
Frame 1: E E E B B B B A A A A
Frame 2:   D D D D D D D D B B
Frame 3:     H H H E E E E E E
Fault:   X X X X   X   X   X

Implementing LRU. Ideal solutions: Timestamp: record the access time of each page, and pick the page with the oldest timestamp. List: keep a list of pages ordered by time of reference; the head is the most recently used page, the tail the least recently used. Problem: both are very expensive (time, space, and cost) to implement.

Implementing LRU: Approximation. The second-chance algorithm (or clock algorithm) replaces an old page, though not necessarily the oldest page, using the reference bit set by the MMU. Algorithm details: arrange the physical page frames in a circle with a pointer. On each page fault: Step 1: advance the pointer by one. Step 2: check the reference bit of that page. 1: used recently, so clear the bit and go to Step 1. 0: not used recently; this is the victim. Done.

Second Chance Algorithm. [Figure]

Implementing LRU: Approximation. The Nth-chance algorithm: the OS keeps a counter per page. On a page fault: Step 1: advance the pointer by one. Step 2: check the reference bit of the page. 1: set reference = 0 and counter = 0. 0: counter++; if counter = N, we have found the victim; otherwise repeat Step 1. A large N gives a better approximation of LRU but is costly; a small N is more efficient but a poorer LRU approximation.

Performance of Demand Paging. Three major activities: servicing the interrupt (hundreds of CPU cycles), reading/writing the page from/to disk (lots of time), and restarting the process (a small amount of time). Page fault rate 0 <= p <= 1: if p = 0, there are no page faults; if p = 1, every reference is a fault. Effective Access Time (EAT): EAT = (1 - p) x memory access time + p x (page fault overhead + swap page out + swap page in).

Performance of Demand Paging. Memory access time = 200 nanoseconds; average page-fault service time = 8 milliseconds. With page fault probability p: EAT = (1 - p) x 200 + p x 8 milliseconds = (1 - p) x 200 + p x 8,000,000 = 200 + p x 7,999,800. If one access in 1,000 causes a page fault (p = 0.001), then EAT = 8.2 microseconds: a slowdown by a factor of 40! If you want performance degradation below 10 percent: 220 > 200 + 7,999,800 x p, so 20 > 7,999,800 x p, so p < 0.0000025: less than one page fault in every 400,000 memory accesses.

Recap: Page Replacement Policies. FIFO: evict the oldest page first. Pros: fair. Cons: can throw out frequently used pages. Optimal: evict the page that will not be used for the longest period. Pros: optimal. Cons: you need to know the future. Random: randomly choose a page. Pros: simple; TLBs commonly use this method. Cons: unpredictable. LRU: look at past history and choose the page that has not been used for the longest period. Pros: good performance. Cons: complex, and requires hardware support.

Recap: Page Table Entry (PTE). PTE format (architecture specific): V (1 bit), M (1 bit), R (1 bit), P (2 bits), page frame number (20 bits). Valid bit (V): whether the page is in memory. Modify bit (M): whether the page has been modified. Reference bit (R): whether the page has been accessed. Protection bits (P): readable, writable, executable.

Recap: Second Chance Algorithm. [Figure]

Thrashing. A process is busy swapping pages in and out and doesn't make much progress. It happens when a process does not have enough pages in memory: the page fault rate is very high and CPU utilization is low (the CPU is mostly waiting for the disk). Worse, CPU-utilization-based admission control may then bring in more programs to raise utilization, causing even more page faults.

Thrashing. [Figure]

Concepts to Learn. Memory-mapped I/O. Copy-on-Write (COW). Memory allocators.

Recap: Program Binary Sharing. Multiple instances of the same program (e.g., 10 bash shells) share one physical copy of the program text. [Figure]

Memory-Mapped I/O. Idea: map a file on disk onto the memory space. [Figure]

Memory-Mapped I/O. Benefit: you don't need read()/write() system calls; you access the file's data directly via memory instructions. How does it work? Just like demand paging of an executable file. What about writes? The modified (M) bit is marked in the PTE, and the modified pages are later written back to the original file.

Copy-on-Write (COW). fork() creates a copy of a parent process. Copy all of its pages onto new page frames? If the parent uses 1GB of memory, the fork() call would take a while. And suppose the child immediately calls exec(): was there any point in copying the 1GB of the parent process's memory?

Copy-on-Write. The better way: copy only the parent's page table. The page table is much smaller (so copying is faster), and both parent and child then point to exactly the same physical page frames. [Figure]

Copy-on-Write. What happens when the parent or child reads? Nothing special. What happens when the parent or child writes? Trouble! [Figure]

Copy-on-Write. All pages are initially marked read-only in both page tables. [Figure: parent and child page tables with every entry RO.]

Copy-on-Write. Upon a write, a page fault occurs; the OS copies the page to a new frame and maps it with a read/write protection setting. [Figure: the written page is now RW in each table.]

Kernel/User Virtual Memory. Kernel region (0xC0000000-0xFFFFFFFF): kernel code and data; identical in all address spaces; a fixed 1-1 mapping of physical memory. User region (0x00000000-0xC0000000): process code, data, heap, stack, and so on; unique to each address space; mapped on demand (via page faults).

User-Level Memory Allocation. When does a process actually get memory from the kernel? On a page fault, one page (e.g., 4KB) is allocated. What does malloc() do? It doesn't physically allocate pages; it manages the process's heap, which holds variable-size objects.

Kernel-Level Memory Allocation. The page-level allocator (low level) works at page granularity (4K): the buddy allocator. Other kernel-memory allocators support fine-grained allocations: the slab, kmalloc, and vmalloc allocators.

Kernel-Level Memory Allocators. kmalloc: arbitrary-size objects. vmalloc: (large) non-physically-contiguous memory. SLAB allocator: multiple fixed-size object caches. Page allocator (buddy): allocates power-of-two pages: 4K, 8K, 16K, ... [Figure: kernel code sits on kmalloc/vmalloc/SLAB, which sit on the page allocator.]

Buddy Allocator. Linux's page-level allocator. It allocates power-of-two numbers of pages: 1, 2, 4, 8, ... pages; a request is rounded up to the next-highest power of 2. When a smaller allocation is needed than is available, the current chunk is split into two buddies of the next-lower power of 2. It can quickly expand and shrink across the free lists (32KB, 16KB, 8KB, 4KB, ...).

Buddy Allocator Example. Assume a 256KB chunk is available and the kernel requests 21KB:
256 Free
128 Free | 128 Free
64 Free | 64 Free | 128 Free
32 Free | 32 Free | 64 Free | 128 Free
32 A | 32 Free | 64 Free | 128 Free

Buddy Allocator Example. Freeing A coalesces buddies back into larger blocks:
32 A | 32 Free | 64 Free | 128 Free
32 Free | 32 Free | 64 Free | 128 Free
64 Free | 64 Free | 128 Free
128 Free | 128 Free
256 Free

Virtual Memory Summary. MMU and address translation. Paging. Demand paging. Copy-on-write. Page replacement. Kernel-level memory allocators.

Quiz: Address Translation. Virtual address format (24 bits): 8-bit 1st-level index, 8-bit 2nd-level index, 8-bit offset. Page table entry (8 bits): 4-bit frame #, 3 bits unused, 1 valid bit (V). Page-table base address = 0x100. [Figure: memory dump of the page tables.] Vaddr 0x0703FE: Paddr 0x3FE. Vaddr 0x072370: Paddr 0x470. Vaddr 0x082370: invalid.