CSE 120 Principles of Operating Systems Spring 2017

CSE 120 Principles of Operating Systems Spring 2017 Lecture 12: Paging

Lecture Overview
Today we'll cover more paging mechanisms:
- Optimizations
  » Managing page tables (space)
  » Efficient translations (TLBs) (time)
  » Demand paged virtual memory (space)
- Recap of address translation
- Advanced functionality
  » Sharing memory
  » Copy on write
  » Mapped files

Managing Page Tables
Last lecture we computed the size of the page table for a 32-bit address space with 4K pages to be 4 MB
- That is far too much overhead for each process
How can we reduce this overhead?
- Observation: we only need to map the portion of the address space actually being used (a tiny fraction of the entire address space)
How do we map only what is being used?
- Can dynamically extend the page table
- Does not work if the address space is sparse (internal fragmentation)
- Use another level of indirection: two-level page tables
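For reference, the 4 MB figure comes from 2^32 bytes / 4 KB pages = 2^20 pages, times 4 bytes per PTE. A quick sanity check, as a minimal sketch assuming 4-byte PTEs as in the lecture's example:

```c
/* Sanity check of the 4 MB flat page table figure.
 * Assumes a 32-bit virtual address space, 4 KB pages, and 4-byte PTEs. */
#include <stdio.h>

int main(void) {
    unsigned long long num_pages  = (1ULL << 32) / 4096;  /* 2^20 virtual pages */
    unsigned long long table_size = num_pages * 4;        /* 4 bytes per PTE    */
    printf("%llu PTEs, %llu MB\n", num_pages, table_size >> 20);  /* 1048576 PTEs, 4 MB */
    return 0;
}
```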

Two-Level Page Tables
Two-level page tables
- Virtual addresses (VAs) have three parts:
  » Master page number, secondary page number, and offset
- Master page table maps VAs to a secondary page table
- Secondary page table maps the page number to a physical page
- Offset indicates where in the physical page the address is located
Example
- 4K pages, 4 bytes/PTE
- How many bits in the offset? 4K = 12 bits
- Want the master page table in one page: 4K / 4 bytes = 1K entries
- Hence, 1K secondary page tables. How many bits?
  » Master = 10 bits (1K entries), offset = 12 bits, secondary (inner) = 32 - 10 - 12 = 10 bits
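A minimal sketch of this 10/10/12 split and the resulting two-level walk. The PTE format (valid bit in bit 0, frame base in the high bits) and the in-memory table layout are simplifying assumptions, not a real MMU format:

```c
#include <stdint.h>
#include <stdio.h>

#define OFFSET_BITS    12
#define SECONDARY_BITS 10
#define PTE_VALID      0x1u
#define PTE_FRAME(pte) ((pte) & ~0xFFFu)     /* physical frame base address */

typedef uint32_t pte_t;

/* master[i] points to a secondary table of 1024 PTEs, or is NULL if unmapped. */
uint32_t translate(pte_t **master, uint32_t vaddr, int *fault) {
    uint32_t master_idx    = vaddr >> (OFFSET_BITS + SECONDARY_BITS);       /* top 10 bits */
    uint32_t secondary_idx = (vaddr >> OFFSET_BITS) & ((1u << SECONDARY_BITS) - 1);
    uint32_t offset        = vaddr & ((1u << OFFSET_BITS) - 1);

    pte_t *secondary = master[master_idx];
    if (secondary == NULL)  { *fault = 1; return 0; }   /* no secondary table mapped */
    pte_t pte = secondary[secondary_idx];
    if (!(pte & PTE_VALID)) { *fault = 1; return 0; }   /* page not present          */

    *fault = 0;
    return PTE_FRAME(pte) | offset;                     /* physical address          */
}

int main(void) {
    static pte_t secondary[1024];
    static pte_t *master[1024];
    master[1] = secondary;                      /* map master entry 1              */
    secondary[2] = 0x00ABC000 | PTE_VALID;      /* VPN (1,2) -> frame base 0x00ABC000 */

    int fault;
    uint32_t pa = translate(master, (1u << 22) | (2u << 12) | 0x34, &fault);
    printf("fault=%d pa=0x%08X\n", fault, pa);  /* expect pa=0x00ABC034            */
    return 0;
}
```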

Two-Level Page Tables
[Figure: the virtual address is split into master page number, secondary page number, and offset; the master page table selects a secondary page table, whose entry gives the physical page frame, which is combined with the offset to form the physical address.]

Page Table Evolution
[Figure: a linear (flat) page table maps every page of the virtual address space (Page 0 through Page N-1) to physical memory.]

Page Table Evolution
[Figure: a hierarchical page table, with a master table pointing to secondary tables that map Page 0 through Page N-1 to physical memory.]

Page Table Evolution
[Figure: with a hierarchical page table, secondary tables for unmapped regions of the virtual address space are not needed at all.]

Addressing Page Tables
Where do we store page tables (in which address space)?
- Physical memory
  » Easy to address, no translation required
  » But allocated page tables consume memory for the lifetime of the VAS
- Virtual memory (the OS virtual address space)
  » Cold (unused) page table pages can be paged out to disk
  » But addressing page tables then requires translation
  » How do we stop the recursion? Do not page the outer page table (called wiring)
- If we're going to page the page tables, we might as well page the entire OS address space, too
  » Need to wire special code and data (fault and interrupt handlers)

Efficient Translations
Our original page table scheme already doubled the cost of doing memory lookups
- One lookup into the page table, another to fetch the data
Now two-level page tables triple the cost!
- Two lookups into the page tables, a third to fetch the data
- Worse, 64-bit architectures support 4-level page tables
- And this assumes the page table is in memory
How can we use paging but also have lookups cost about the same as fetching from memory?
- Cache translations in hardware
- Translation Lookaside Buffer (TLB)
- TLB managed by the Memory Management Unit (MMU)

TLBs
Translation Lookaside Buffers
- Translate virtual page numbers into PTEs (not physical addresses)
- Can be done in a single machine cycle
TLBs are implemented in hardware
- Fully associative cache (all entries looked up in parallel)
- Cache tags are virtual page numbers
- Cache values are PTEs (entries from page tables)
- With a PTE + offset, the physical address can be calculated directly
TLBs exploit locality
- Processes use only a handful of pages at a time
  » 32-128 entries, each mapping a page (covering 128-512 KB)
  » Only those pages need to be mapped
- Hit rates are therefore very important
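To see why hit rates matter so much, here is a rough effective-access-time calculation. The 100 ns and 1 ns latencies and the two-access miss penalty are illustrative assumptions, not figures from the lecture:

```c
#include <stdio.h>

int main(void) {
    double t_mem = 100.0;      /* assumed DRAM access, ns                       */
    double t_tlb = 1.0;        /* assumed TLB lookup, ns                        */
    double walk  = 2 * t_mem;  /* miss penalty: two extra accesses (two levels) */

    double hits[] = { 0.90, 0.99, 0.999 };
    for (int i = 0; i < 3; i++) {
        double hit = hits[i];
        double eat = hit * (t_tlb + t_mem) + (1 - hit) * (t_tlb + walk + t_mem);
        printf("hit rate %.3f -> %.1f ns per access\n", hit, eat);
    }
    return 0;   /* at 99.9%% hits the cost is nearly that of raw DRAM */
}
```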

TLBs
Typical details:
- Cache of PTEs
- Small (just 32-128 PTEs)
- Separate instruction and data TLBs
- Two-level (256-512 combined I/D entries)
- Full page table in memory
[Figure: the CPU sends virtual addresses to the TLB, which produces physical addresses used to access DRAM.]

Managing TLBs
Address translations for most instructions are handled using the TLB
- >99% of translations, but there are misses (TLB miss)
Who places translations into the TLB (loads the TLB)?
- Hardware (Memory Management Unit) [x86]
  » Knows where the page tables are in main memory
  » OS maintains the tables, HW accesses them directly
  » Tables have to be in a HW-defined format (inflexible)
- Software-loaded TLB (OS) [MIPS, Alpha, SPARC, PowerPC]
  » TLB miss faults to the OS; the OS finds the appropriate PTE and loads it into the TLB
  » Must be fast (but still 20-200 cycles)
  » CPU ISA has instructions for manipulating the TLB
  » Tables can be in any format convenient for the OS (flexible)

Managing TLBs (2)
OS ensures that the TLB and page tables are consistent
- When it changes the protection bits of a PTE, it needs to invalidate the PTE if it is in the TLB
Reload the TLB on a process context switch
- Invalidate all entries
- Why? What is one way to fix it?
When the TLB misses and a new PTE has to be loaded, a cached PTE must be evicted
- Choosing the PTE to evict is called the TLB replacement policy
- Implemented in hardware, often simple (e.g., Last-Not-Used)

Paged Virtual Memory
We've mentioned before that pages can be moved between memory and disk
- This process is called demand paging
- The OS uses main memory as a page cache of all the data allocated by processes in the system
- Initially, pages are allocated from memory
- When memory fills up, allocating a page in memory requires some other page to be evicted from memory
  » This is why physical memory pages are called frames
- Evicted pages go to disk (where? the swap file/backing store)
- The movement of pages between memory and disk is done by the OS, and is transparent to the application

Page Faults
What happens when a process accesses a page that has been evicted?
1. When it evicts a page, the OS sets the PTE as invalid and stores the location of the page in the swap file in the PTE
2. When the process accesses the page, the invalid PTE causes a trap (page fault)
3. The trap runs the OS page fault handler
4. The handler uses the invalid PTE to locate the page in the swap file
5. It reads the page into a physical frame and updates the PTE to point to it
6. It restarts the process
But where does it put it? It has to evict something else
- The OS usually keeps a pool of free pages around so that allocations do not always cause evictions
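A user-level simulation of that swap-in path, offered as a sketch only; find_free_frame, swap_read, and the PTE layout are toy stand-ins, not a real kernel's API:

```c
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE  4096
#define NUM_FRAMES 4

typedef struct {
    int valid;        /* 1 if the page is resident in a frame            */
    int frame;        /* physical frame number if valid                  */
    int swap_slot;    /* location in the (simulated) swap file otherwise */
} pte_t;

static char memory[NUM_FRAMES][PAGE_SIZE];   /* "physical" frames        */
static char swapfile[16][PAGE_SIZE];         /* "swap file" slots        */
static int  next_free = 0;

static int find_free_frame(void) {           /* a real OS may evict here */
    return next_free++ % NUM_FRAMES;
}

static void swap_read(int slot, int frame) { /* step 5: read from "disk" */
    memcpy(memory[frame], swapfile[slot], PAGE_SIZE);
}

void page_fault_handler(pte_t *pte) {
    int frame = find_free_frame();    /* get a frame (possibly by evicting)      */
    swap_read(pte->swap_slot, frame); /* step 5: read the page from the swap file */
    pte->frame = frame;               /* step 5: point the PTE at the frame       */
    pte->valid = 1;                   /* step 6: trap returns, access is retried  */
}

int main(void) {
    strcpy(swapfile[3], "evicted page contents");
    pte_t pte = { .valid = 0, .frame = 0, .swap_slot = 3 };
    page_fault_handler(&pte);         /* simulate the fault on an evicted page    */
    printf("resident in frame %d: %s\n", pte.frame, memory[pte.frame]);
    return 0;
}
```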

Address Translation Redux
We started this topic with the high-level problem of translating virtual addresses into physical addresses
We've covered all of the pieces
- Virtual and physical addresses
- Virtual pages and physical page frames
- Page tables and page table entries (PTEs), protection
- TLBs
- Demand paging
Now let's put it together, bottom to top

The Common Case
Situation: a process is executing on the CPU, and it issues a read to an address
- What kind of address is it? Virtual or physical?
The read goes to the TLB in the MMU
1. The TLB does a lookup using the page number of the address
2. The common case is that the page number matches, returning a page table entry (PTE) for the mapping of this address
3. The TLB validates that the PTE protection allows reads (in this example)
4. The PTE specifies which physical frame holds the page
5. The MMU combines the physical frame and offset into a physical address
6. The MMU then reads from that physical address and returns the value to the CPU
Note: this is all done by the hardware
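The same steps as a software sketch. The 64-entry table and PTE bit layout are illustrative assumptions; real hardware does the lookup in parallel in a single cycle:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define OFFSET_BITS    12
#define PROT_READ_BIT  0x2u
#define PTE_FRAME(pte) ((pte) & ~0xFFFu)
#define TLB_ENTRIES    64

typedef struct { uint32_t vpn; uint32_t pte; bool valid; } tlb_entry_t;
static tlb_entry_t tlb[TLB_ENTRIES];

/* Returns true and fills *paddr on a TLB hit with read permission;
 * false means a TLB miss or protection fault, handled on later slides. */
bool translate_read(uint32_t vaddr, uint32_t *paddr) {
    uint32_t vpn    = vaddr >> OFFSET_BITS;                  /* step 1          */
    uint32_t offset = vaddr & ((1u << OFFSET_BITS) - 1);

    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn) {             /* step 2: hit     */
            if (!(tlb[i].pte & PROT_READ_BIT)) return false; /* step 3: protect */
            *paddr = PTE_FRAME(tlb[i].pte) | offset;         /* steps 4-5       */
            return true;                                     /* step 6: access  */
        }
    }
    return false;                                            /* TLB miss        */
}

int main(void) {
    tlb[0] = (tlb_entry_t){ .vpn = 0x12345, .pte = 0x00ABC000 | PROT_READ_BIT, .valid = true };
    uint32_t pa;
    if (translate_read((0x12345u << OFFSET_BITS) | 0x67, &pa))
        printf("hit: pa=0x%08X\n", pa);       /* expect 0x00ABC067 */
    return 0;
}
```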

TLB Misses
At this point, two other things can happen
1. The TLB does not have a PTE mapping this virtual address
2. The PTE is in the TLB, but the memory access violates the PTE protection bits
We'll consider each in turn

Reloading the TLB
If the TLB does not have a mapping, there are two possibilities:
1. The MMU loads the PTE from the page table in memory
   » Hardware-managed TLB, OS not involved in this step
   » The OS has already set up the page tables so that the hardware can access them directly
2. Trap to the OS
   » Software-managed TLB, OS intervenes at this point
   » The OS does a lookup in the page table and loads the PTE into the TLB
   » The OS returns from the exception, and the TLB continues
A machine supports only one method or the other
At this point, there is a PTE for the address in the TLB

TLB Misses (2)
Note that:
- The page table lookup (by HW or OS) can cause a recursive fault if the page table is paged out
  » Assuming page tables are in the OS virtual address space
  » Not a problem if the tables are in physical memory
  » Yes, this is a complicated situation
When the TLB has the PTE, it restarts the translation
- The common case is that the PTE refers to a valid page in memory
  » These faults are handled quickly: just read the PTE from the page table in memory and load it into the TLB
- The uncommon case is that the TLB faults again on the PTE because of the PTE protection bits (e.g., the page is invalid)
  » This becomes a page fault

Page Faults
The PTE can indicate a protection fault
- Read/write/execute: the operation is not permitted on the page
- Invalid: the virtual page is not allocated, or the page is not in physical memory
The TLB traps to the OS (software takes over)
- R/W/E fault
  » The OS usually sends the fault back up to the process, or it might be playing games (e.g., copy on write, mapped files)
- Invalid
  » Virtual page not allocated in the address space: the OS sends the fault to the process (e.g., segmentation fault)
  » Page not in physical memory: the OS allocates a frame, reads the page from disk, and maps the PTE to the physical frame
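The triage above, sketched as code; the fault_info fields and the helper routines are assumptions for illustration only, not a real handler's interface (the toy helpers just print what the OS would do):

```c
#include <stdio.h>
#include <stdbool.h>

typedef struct {
    bool allocated;   /* virtual page allocated in the address space?  */
    bool present;     /* page resident in physical memory?             */
    bool perm_ok;     /* does the PTE permit this access?              */
    bool cow;         /* copy-on-write mapping?                        */
} fault_info_t;

static void segfault(void)      { puts("send SIGSEGV to process"); }
static void swap_in(void)       { puts("allocate frame, read page from disk, map PTE"); }
static void copy_on_write(void) { puts("copy page, remap PTE read-write, restart write"); }

void triage_fault(fault_info_t f) {
    if (!f.allocated)             segfault();       /* invalid: not in the address space */
    else if (!f.present)          swap_in();        /* invalid: allocated but evicted    */
    else if (!f.perm_ok && f.cow) copy_on_write();  /* protection fault the OS handles   */
    else                          segfault();       /* genuine protection violation      */
}

int main(void) {
    triage_fault((fault_info_t){ .allocated = true, .present = false });
    triage_fault((fault_info_t){ .allocated = true, .present = true, .perm_ok = false, .cow = true });
    return 0;
}
```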

Advanced Functionality
Now we're going to look at some advanced functionality that the OS can provide to applications using virtual memory tricks
- Shared memory
- Copy on write
- Mapped files

Sharing
Private virtual address spaces protect applications from each other
- Usually this is exactly what we want
- But it makes it difficult to share data (we have to copy)
  » Parents and children in a forking web server or proxy will want to share an in-memory cache without copying
We can use shared memory to allow processes to share data using direct memory references
- Both processes see updates to the shared memory segment
  » Process B can immediately read an update by process A
- How are we going to coordinate access to shared data?

Sharing (2)
How can we implement sharing using page tables?
- Have PTEs in both tables map to the same physical frame
- Each PTE can have different protection values
- Must update both PTEs when the page becomes invalid
Can map shared memory at the same or different virtual addresses in each process's address space
- Different: flexible (no address space conflicts), but pointers inside the shared memory segment are invalid (why?)
- Same: less flexible, but shared pointers are valid (why?)
What happens if a pointer inside the shared segment references an address outside the segment?
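One concrete way to get such a shared mapping on Unix-like systems (not covered on this slide) is mmap with MAP_SHARED | MAP_ANONYMOUS before fork(), so parent and child PTEs point at the same physical frames. A minimal sketch; proper synchronization is omitted:

```c
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* One page that both processes will map to the same physical frame. */
    char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED) { perror("mmap"); return 1; }

    if (fork() == 0) {                        /* child */
        strcpy(shared, "hello from the child");
        return 0;
    }
    wait(NULL);                               /* crude coordination: wait for the child */
    printf("parent reads: %s\n", shared);     /* sees the child's update directly       */
    munmap(shared, 4096);
    return 0;
}
```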

Isolation: No Sharing
[Figure: two virtual address spaces whose pages map to disjoint sets of physical frames.]

Sharing Pages
[Figure: PTEs in both virtual address spaces point to the same physical page.]

Copy on Write
OSes spend a lot of time copying data
- System call arguments between user and kernel space
- Entire address spaces to implement fork()
Use copy on write (CoW) to defer large copies as long as possible, hoping to avoid them altogether
- Instead of copying pages, create shared mappings of the parent's pages in the child's virtual address space
- Shared pages are protected as read-only in parent and child
  » Reads happen as usual
  » Writes generate a protection fault and trap to the OS, which copies the page, changes the page mapping in the faulting process's page table, and restarts the write instruction
How does this help fork()?
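A small program that leans on CoW the way fork() does. The copying is invisible to the program, so the comments describe what the OS would do under the hood, assuming a CoW fork as on Linux:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define SIZE (64 * 1024 * 1024)   /* 64 MB buffer, shared copy-on-write after fork */

int main(void) {
    char *buf = malloc(SIZE);
    memset(buf, 'A', SIZE);                 /* parent touches every page              */

    pid_t pid = fork();                     /* no 64 MB copy happens here             */
    if (pid == 0) {
        buf[0] = 'B';                       /* protection fault; the OS copies just   */
                                            /* this one page for the child            */
        printf("child sees:  %c\n", buf[0]);
        exit(0);
    }
    wait(NULL);
    printf("parent sees: %c\n", buf[0]);    /* still 'A': the copy was private        */
    free(buf);
    return 0;
}
```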

Copy on Write: Before Fork
[Figure: the parent's virtual address space maps its pages to physical memory.]

Copy on Write: Fork
[Figure: after fork, the parent's and child's PTEs map to the same physical pages via read-only mappings.]

Copy on Write: On A Write
[Figure: after a write, the written page is copied; each process now has its own private, read-write mapping for that page.]

Mapped Files
Mapped files enable processes to do file I/O using loads and stores
- Instead of open, read into a buffer, operate on the buffer, ...
Bind a file to a virtual memory region (mmap() in Unix)
- PTEs map virtual addresses to physical frames holding file data
- Virtual address base + N refers to offset N in the file
Initially, all pages mapped to the file are invalid
- The OS reads a page from the file when an invalid page is accessed
- The OS writes a page to the file when it is evicted, or when the region is unmapped
- If the page is not dirty (has not been written to), no write is needed
  » Another use of the dirty bit in the PTE
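A minimal sketch of the mmap() usage the slide describes (Unix; the file name and one-page size are arbitrary choices for the example):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define LEN 4096

int main(void) {
    int fd = open("mapped.dat", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }
    ftruncate(fd, LEN);                     /* give the file one page of data  */

    /* Bind the file to a region of the virtual address space. */
    char *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    printf("byte 0 was: %d\n", p[0]);       /* load: faults the page in        */
    p[0] = '!';                             /* store: marks the page dirty     */

    munmap(p, LEN);                         /* dirty page is written back      */
    close(fd);
    return 0;
}
```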

Mapped Files
[Figure: a region of the virtual address space is mapped, page by page, onto a file.]

Mapped Files (2)
The file is essentially the backing store for that region of the virtual address space (instead of the swap file)
- A virtual address space region not backed by a real file is also called anonymous VM
Advantages
- Uniform access for files and memory (just use pointers)
- Less copying
Drawbacks
- The process has less control over data movement
  » The OS handles faults transparently
- Does not generalize to streamed I/O (pipes, sockets, etc.)

Summary
Paging mechanisms:
- Optimizations
  » Managing page tables (space)
  » Efficient translations (TLBs) (time)
  » Demand paged virtual memory (space)
- Recap of address translation
- Advanced functionality
  » Sharing memory
  » Copy on write
  » Mapped files
Next time: paging policies

Next time: Chapters 21-23