Virtual Memory. Alan L. Cox Some slides adapted from CMU slides


Alan L. Cox alc@rice.edu Some slides adapted from CMU 15-213 slides

Objectives: Be able to explain the rationale for VM; be able to explain how VM is implemented; be able to translate virtual addresses to physical addresses; be able to explain how VM benefits fork() and exec().

Programs refer to virtual memory addresses (e.g., movl (%ecx),%eax). Conceptually, virtual memory is a very large array of bytes, spanning addresses 0x00...0 through 0xFF...F, and each byte has its own address. The system provides an address space private to each process. The compiler and run-time system allocate VM, deciding where different program objects should be stored; all allocation is within a single virtual address space.

Problem 1: How Does Everything Fit? 64-bit addresses can name 16 exabytes (16 billion GB!) of virtual memory, while physical main memory is only tens or hundreds of gigabytes. And there are many processes.

Problem 2: Memory Management Each of processes 1, 2, 3, ..., n needs room in physical main memory for its stack, heap, .text, and .data. What goes where?

Problem 3: How To Protect? Processes i and j share physical main memory; each must be kept from touching the other's data. Problem 4: How To Share? Sometimes processes i and j should be able to share a region of physical main memory.

Solution: Indirection "All problems in computer science can be solved by another level of indirection... except for the problem of too many layers of indirection." - David Wheeler. Each of processes 1 through n gets its own virtual memory, mapped onto physical memory. Each process gets its own private memory space, which solves the previous problems.

Address Spaces Virtual address space: the set of N = 2^n virtual addresses {0, 1, 2, 3, ..., N-1}. Physical address space: the set of M = 2^m physical addresses {0, 1, 2, 3, ..., M-1}. There is a clear distinction between data (bytes) and their attributes (addresses): each object can have multiple addresses, and every byte in main memory has one physical address and one (or more) virtual addresses.

A System Using Physical Addressing The CPU sends a physical address (PA) directly to main memory (addresses 0, 1, 2, ..., M-1) and receives the data word back. Used in simple systems like embedded microcontrollers in devices like cars, elevators, and digital picture frames.

A System Using Virtual Addressing The CPU sends a virtual address (VA) to the MMU on the CPU chip; the MMU translates it into a physical address (PA) for main memory (addresses 0, 1, 2, ..., M-1), which returns the data word. Used in all modern desktops, laptops, workstations.

Why Virtual Memory (VM)? Efficient use of limited main memory (RAM): use RAM as a cache for parts of a virtual address space (some non-cached parts stored on disk, some unallocated non-cached parts stored nowhere), keeping only the active areas of the virtual address space in memory and transferring data back and forth as needed. Simplifies memory management for programmers: each process gets the same full, private linear address space. Isolates address spaces: one process can't interfere with another's memory because they operate in different address spaces, and a user process cannot access privileged information because different sections of the address space have different permissions.

Outline Virtual memory (VM): Overview and motivation; VM as tool for caching; VM as tool for memory management; VM as tool for memory protection; Address translation; mmap(), fork(), exec()

VM as a Tool for Caching Virtual memory is an array of N = 2^n contiguous bytes; think of the array (its allocated part) as being stored on disk. Physical main memory (DRAM) acts as a cache for the allocated virtual memory. The basic unit is the page, of size 2^p bytes. Virtual pages (VPs) are stored on disk and may be unallocated, cached, or uncached; physical pages (PPs) are cached in DRAM.

Memory Hierarchy: Core 2 Duo (not drawn to scale; L1/L2 caches use 64 B blocks) CPU registers -> 32 KB L1 I-cache and L1 D-cache -> ~4 MB unified L2 cache -> ~4 GB main memory -> ~500 GB disk. Throughput falls from 16 B/cycle (L1) to 8 B/cycle (L2) to 2 B/cycle (memory) to 1 B per ~30 cycles; latency grows from 3 cycles (L1) to 14 cycles (L2) to ~100 cycles (memory) to millions of cycles (disk). Miss penalty (latency) to main memory is ~33x; to disk, ~10,000x.

DRAM Cache Organization DRAM cache organization is driven by the enormous miss penalty: DRAM is about 10x slower than SRAM, and disk is about 100,000x slower than DRAM (for the first byte; faster for successive bytes). Consequences: large page size (at least 4-8 KB, sometimes 4 MB); fully associative placement (any VP can be placed in any PP); highly sophisticated replacement algorithms (too complicated and open-ended to be implemented in hardware); write-back rather than write-through.

A System Using Virtual Addressing The CPU sends a virtual address (VA) to the MMU on the CPU chip; the MMU translates it into a physical address (PA) for main memory, which returns the data word. Used in all modern desktops, laptops, workstations.

Address Translation: Page Tables A page table is an array of page table entries (PTEs) that maps virtual pages to physical pages; here, 8 VPs (PTE 0 through PTE 7). It is a per-process kernel data structure kept in DRAM. Each PTE holds a valid bit plus either a physical page number (for virtual pages cached in DRAM) or a disk address (for virtual pages resident only on disk); null PTEs mark unallocated pages.

Address Translation With a Page Table The page table base register (PTBR) holds the address of the current process's page table. A virtual address splits into a virtual page number (VPN) and a virtual page offset (VPO). The VPN indexes the page table; if the PTE's valid bit is 0, the page is not in memory (page fault). Otherwise the PTE supplies the physical page number (PPN), which is concatenated with the physical page offset (PPO, identical to the VPO) to form the physical address.

Page Hit A page hit is a reference to a VM word that is in physical memory: the VPN's PTE is valid and points to a physical page cached in DRAM.

Page Miss A page miss is a reference to a VM word that is not in physical memory: the VPN's PTE refers to a virtual page that resides only on disk.

Handling a Page Fault A page miss causes a page fault (an exception). The page fault handler selects a victim page to be evicted (here VP 4), writing it back to disk if it is dirty. The missing page (here VP 3) is then paged in from disk and the page table is updated. Finally, the offending instruction is restarted, and this time the reference is a page hit.

Servicing a Page Fault (1) The processor signals the disk controller: read a block of length P starting at disk address X and store it starting at memory address Y. (2) The read occurs via Direct Memory Access (DMA), under control of the I/O controller. (3) The controller signals completion, interrupting the processor; the OS resumes the suspended process.

Why does it work? Locality Virtual memory works because of locality. At any point in time, programs tend to access a set of active virtual pages called the working set; programs with better temporal locality have smaller working sets. If (working set size < main memory size): good performance for one process after the initial compulsory misses. If (SUM(working set sizes) > main memory size): thrashing, a performance meltdown where pages are swapped (copied) in and out continuously.

Outline Virtual memory (VM): Overview and motivation; VM as tool for caching; VM as tool for memory management; VM as tool for memory protection; Address translation; mmap(), fork(), exec()

VM as a Tool for Memory Management Memory allocation: each virtual page can be mapped to any physical page, and a virtual page can be stored in different physical pages at different times. Sharing code and data among processes: map virtual pages in different processes' address spaces (e.g., VP 2 in both process 1 and process 2) to the same physical page (here PP 6, e.g., read-only library code).

Simplifying Linking and Loading Linking: each program has a similar virtual address space; code, stack, and shared libraries always start at the same addresses (in the classic 32-bit Linux layout: the read-only segment (.init, .text, .rodata) at 0x08048000, then the read/write segment (.data, .bss) and the run-time heap created by malloc, the memory-mapped region for shared libraries at 0x40000000, the user stack created at runtime with %esp as the stack pointer, and kernel virtual memory, invisible to user code, at 0xc0000000). Loading: execve() allocates virtual pages for the .text and .data sections and creates PTEs marked as invalid; the .text and .data sections are then copied from the executable file, page by page, on demand by the virtual memory system.

Outline Virtual memory (VM): Overview and motivation; VM as tool for caching; VM as tool for memory management; VM as tool for memory protection; Address translation; mmap(), fork(), exec()

VM as a Tool for Memory Protection Extend the PTEs with permission bits; the fault handler checks these before remapping, and if they are violated, the kernel sends the process SIGSEGV (segmentation fault). SUP means kernel/supervisor mode is needed for access. Example mappings: Process i: VP 0 (SUP=No, READ=Yes, WRITE=No) -> PP 6; VP 1 (SUP=No, READ=Yes, WRITE=Yes) -> PP 4; VP 2 (SUP=Yes, READ=Yes, WRITE=Yes) -> PP 2. Process j: VP 0 (SUP=No, READ=Yes, WRITE=No) -> PP 9; VP 1 (SUP=Yes, READ=Yes, WRITE=Yes) -> PP 6; VP 2 (SUP=No, READ=Yes, WRITE=Yes) -> PP 11.

Outline Virtual memory (VM): Overview and motivation; VM as tool for caching; VM as tool for memory management; VM as tool for memory protection; Address translation; mmap(), fork(), exec()

Address Translation: Page Hit 1) The processor sends the virtual address (VA) to the MMU. 2-3) The MMU sends the PTE address (PTEA) to the page table in cache/memory and fetches the PTE. 4) The MMU sends the physical address (PA) to cache/memory. 5) Cache/memory sends the data word to the processor.

Address Translation: Page Fault 1) The processor sends the virtual address (VA) to the MMU. 2-3) The MMU sends the PTE address (PTEA) to the page table in cache/memory and fetches the PTE. 4) The valid bit is zero, so the MMU triggers a page fault exception. 5) The handler identifies a victim page (and, if it is dirty, pages it out to disk). 6) The handler pages in the new page and updates the PTE in memory. 7) The handler returns to the original process, restarting the faulting instruction.

Page Tables are often Multi-level in reality; accessing them is slow Given: 4 KB (2^12) page size, a 48-bit address space, and 4-byte PTEs. Problem: a single flat page table would need 256 GB! 2^48 x 2^-12 x 2^2 = 2^38 bytes. Common solution: multi-level page tables. Example: a 2-level page table, where each level 1 PTE points to a level 2 page table and each level 2 PTE points to a page. The level 1 table stays in memory; level 2 tables are paged in and out like other data.

A Two-Level Page Table Hierarchy Level 1 page table: PTE 0 and PTE 1 point to level 2 page tables, PTEs 2 through 7 are null, PTE 8 points to the level 2 table for the stack, and the remaining (1K - 9) PTEs are null. The first two level 2 tables map VP 0 ... VP 1023 and VP 1024 ... VP 2047: 2K allocated VM pages for code and data. Then comes a gap of 6K unallocated VM pages. The stack's level 2 table holds 1023 null PTEs followed by one valid PTE: 1023 unallocated pages, then 1 allocated VM page for the stack (VP 9215).

Translating with a k-level Page Table The n-bit virtual address is divided into k VPN fields (VPN 1, VPN 2, ..., VPN k) and a p-bit VPO. VPN 1 indexes the level 1 page table, whose entry locates a level 2 page table, and so on; the level k PTE supplies the PPN. The m-bit physical address is the PPN concatenated with the PPO (= VPO).

Speeding up Translation with a TLB Page table entries (PTEs) are cached in L1 like any other memory words, but PTEs may be evicted by other data references, and even a PTE hit still incurs a memory-access delay. Solution: the Translation Lookaside Buffer (TLB), a small hardware cache in the MMU that maps virtual page numbers to physical page numbers, holding complete page table entries for a small number of pages.

Without TLB 1) The processor sends the virtual address (VA) to the MMU. 2-3) The MMU sends the PTE address (PTEA) to the page table in cache/memory and fetches the PTE. 4) The MMU sends the physical address (PA) to cache/memory. 5) Cache/memory sends the data word to the processor.

TLB Hit 1) The processor sends the virtual address (VA) to the MMU. 2) The MMU sends the VPN to the TLB. 3) The TLB returns the PTE. 4) The MMU sends the physical address (PA) to cache/memory. 5) Cache/memory sends the data word to the processor. A TLB hit eliminates a memory access.

TLB Miss 1) The processor sends the virtual address (VA) to the MMU. 2) The MMU sends the VPN to the TLB, which misses. 3) The MMU sends the PTE address (PTEA) to cache/memory (this might need multiple iterations if a multi-level page table is used). 4) Cache/memory returns the PTE, which is installed in the TLB. 5) The MMU sends the physical address (PA) to cache/memory. 6) Cache/memory sends the data word to the processor. A TLB miss incurs an additional memory access (the PTE); fortunately, TLB misses are rare.

Simple Memory System Example Addressing: 14-bit virtual addresses, 12-bit physical addresses, page size = 64 bytes. Virtual address bits 13-6 form the VPN (virtual page number) and bits 5-0 the VPO (virtual page offset); physical address bits 11-6 form the PPN (physical page number) and bits 5-0 the PPO (physical page offset).

Simple Memory System Page Table Only the first 16 entries (out of 256) are shown; assume all other entries are invalid. VPN -> PPN (valid): 00 -> 28 (1); 01 -> - (0); 02 -> 33 (1); 03 -> 02 (1); 04 -> - (0); 05 -> 16 (1); 06 -> - (0); 07 -> - (0); 08 -> 13 (1); 09 -> 17 (1); 0A -> 09 (1); 0B -> - (0); 0C -> - (0); 0D -> 2D (1); 0E -> 11 (1); 0F -> 0D (1).

Simple Memory System TLB 16 entries, 4-way set associative, so 4 sets: virtual address bits 7-6 (the low 2 bits of the VPN) form the TLB index (TLBI) and bits 13-8 (the high 6 bits of the VPN) the TLB tag (TLBT). Set 0: (tag 03, -, invalid), (tag 09, PPN 0D, valid), (tag 00, -, invalid), (tag 07, PPN 02, valid). Set 1: (tag 03, PPN 2D, valid), (tag 02, -, invalid), (tag 04, -, invalid), (tag 0A, -, invalid). Set 2: (tag 02, -, invalid), (tag 08, -, invalid), (tag 06, -, invalid), (tag 03, -, invalid). Set 3: (tag 07, -, invalid), (tag 03, PPN 0D, valid), (tag 0A, PPN 34, valid), (tag 02, -, invalid).

Simple Memory System Cache 16 lines, 4-byte block size, physically addressed, direct mapped: physical address bits 1-0 form the cache offset (CO), bits 5-2 the cache index (CI), and bits 11-6 the cache tag (CT). Idx 0: tag 19, valid, bytes 99 11 23 11. Idx 1: tag 15, invalid. Idx 2: tag 1B, valid, bytes 00 02 04 08. Idx 3: tag 36, invalid. Idx 4: tag 32, valid, bytes 43 6D 8F 09. Idx 5: tag 0D, valid, bytes 36 72 F0 1D. Idx 6: tag 31, invalid. Idx 7: tag 16, valid, bytes 11 C2 DF 03. Idx 8: tag 24, valid, bytes 3A 00 51 89. Idx 9: tag 2D, invalid. Idx A: tag 2D, valid, bytes 93 15 DA 3B. Idx B: tag 0B, invalid. Idx C: tag 12, invalid. Idx D: tag 16, valid, bytes 04 96 34 15. Idx E: tag 13, valid, bytes 83 77 1B D3. Idx F: tag 14, invalid.

Address Translation Example #1 Virtual address: 0x03D4. VPN = 0x0F, TLBI = 3, TLBT = 0x03. TLB hit? Yes. Page fault? No. PPN = 0x0D. Physical address: 0x354, so CO = 0x0, CI = 0x5, CT = 0x0D. Cache hit? Yes. Byte: 0x36.

Address Translation Example #2 Virtual address: 0x0B8F. VPN = 0x2E, TLBI = 2, TLBT = 0x0B. TLB hit? No. Page fault? Yes. PPN: TBD (the page must first be brought into memory), so the physical address, CO, CI, CT, cache hit, and byte cannot yet be determined.

Address Translation Example #3 Virtual address: 0x0020. VPN = 0x00, TLBI = 0, TLBT = 0x00. TLB hit? No. Page fault? No. PPN = 0x28. Physical address: 0xA20, so CO = 0x0, CI = 0x8, CT = 0x28. Cache hit? No. Byte: the value is in DRAM, so we don't know it from the cache.

Outline Virtual memory (VM): Overview and motivation; VM as tool for caching; VM as tool for memory management; VM as tool for memory protection; Address translation; mmap(), fork(), exec()

Memory Mapping Creation of a new VM area is done via memory mapping: create a new vm_area_struct and page tables for the area. The area can be backed by (i.e., get its initial values from): a regular file on disk (e.g., an executable object file), where the initial page bytes come from a section of the file; or nothing (e.g., .bss), where the first fault allocates a physical page full of 0's (demand-zero), and once the page is written to (dirtied), it is like any other page. Dirty pages are swapped back and forth between a special swap area of the disk. Key point: no virtual pages are copied into physical memory until they are referenced! This is known as demand paging and is crucial for time and space efficiency.

User-Level Memory Mapping void *mmap(void *start, size_t len, int prot, int flags, int fd, off_t offset) maps len bytes, starting at byte offset offset of the disk file specified by file descriptor fd, into the process's virtual memory at start (or an address chosen by the kernel).

User-Level Memory Mapping void *mmap(void *start, size_t len, int prot, int flags, int fd, off_t offset) Map len bytes starting at offset offset of the file specified by file descriptor fd, preferably at address start. start: may be NULL, meaning "pick an address". prot: PROT_READ, PROT_WRITE, ... flags: MAP_PRIVATE, MAP_SHARED, ... Returns a pointer to the start of the mapped area (which may not be start). Example: fast file copy. Useful for applications like web servers that need to quickly copy files; mmap() allows file transfers without copying into user space.

mmap() Example: Fast File Copy

/* mmap.c - a program that uses mmap to copy itself to stdout */
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    struct stat statbuf;
    int fd;
    size_t size;
    char *bufp;

    /* open the file & get its size */
    fd = open("./mmap.c", O_RDONLY);
    fstat(fd, &statbuf);
    size = statbuf.st_size;

    /* map the file to a new VM area */
    bufp = mmap(NULL, size, PROT_READ, MAP_PRIVATE, fd, 0);

    /* write the VM area to stdout */
    write(STDOUT_FILENO, bufp, size);
    return 0;
}

fork() Revisited To create a new process using fork(): make copies of the old process's page tables, etc.; the two processes now share all of their pages. Copy-on-write allows each process to have a separate address space without copying all of the virtual pages: make the pages of writeable areas read-only, and flag those areas as private copy-on-write in the OS. Writes to these pages then cause protection faults; the fault handler recognizes copy-on-write, makes a copy of the page, and restores write permissions. Net result: the processes have identical but separate address spaces, and copies are deferred until absolutely necessary.

exec() Revisited To load program p using exec(): delete the existing page tables, etc., and create new ones. The user stack, heap, and read/write data (.bss) areas are anonymous, demand-zero; the read-only code and data (.text, .rodata) and initialized data (.data) are mapped to p's ELF executable file; shared libraries (e.g., libc.so) are dynamically linked and mapped into the shared-library region. The stack pointer %sp starts near the top of the user address space and brk marks the top of the heap. Finally, set the program counter to the entry point in .text; the OS will swap in pages from disk as they are used.

Virtual memory supports many OS-related functions: process creation (building the initial address space, forking children), task switching, and protection/sharing. It is implemented by a combination of hardware and software: software manages the tables and page allocations, while hardware accesses the tables (raising a page fault when there is no entry) and enforces protection (raising a protection fault on an invalid access).

Next Time System-Level I/O