Lecture 12: Demand Paging

Lecture 12: Demand Paging
CSE 120: Principles of Operating Systems
Alex C. Snoeren
HW 3 Due 11/9

Complete Address Translation

We started this topic with the high-level problem of translating virtual addresses into physical addresses. We've covered all of the pieces:
- Virtual and physical addresses
- Virtual pages and physical page frames
- Page tables and page table entries (PTEs), protection
- Translation lookaside buffers (TLBs)
- Demand paging

Now let's put it together, bottom to top.

The Common Case

Situation: a process is executing on the CPU and issues a read to an address. What kind of address is it, virtual or physical? The read goes to the TLB in the MMU:
1. TLB does a lookup using the page number of the address.
2. Common case is that the page number matches, returning a page table entry (PTE) for the mapping for this address.
3. TLB validates that the PTE protection allows reads (in this example).
4. PTE specifies which physical frame holds the page.
5. MMU combines the physical frame and offset into a physical address.
6. MMU then reads from that physical address and returns the value to the CPU.

Note: this is all done by the hardware.

TLB Misses

At this point, two other things can happen:
1. TLB does not have a PTE mapping this virtual address.
2. PTE exists, but the memory access violates the PTE protection bits.

We'll consider each in turn.

Reloading the TLB

If the TLB does not have a mapping, two possibilities:
1. MMU loads the PTE from the page table in memory
   » Hardware-managed TLB; OS not involved in this step
   » OS has already set up the page tables so that the hardware can access them directly
2. Trap to the OS
   » Software-managed TLB; OS intervenes at this point
   » OS does the lookup in the page table, loads the PTE into the TLB
   » OS returns from the exception, TLB continues

A machine will support only one method or the other. At this point, there is a PTE for the address in the TLB.

TLB: Exceptional Cases

Page table lookup (by HW or OS) can cause a recursive fault if the page table itself is paged out:
- Assuming page tables are in the OS virtual address space
- Not a problem if the tables are in physical memory
- Yes, this is a complicated situation

When the TLB has the PTE, it restarts translation:
- Common case is that the PTE refers to a valid page in memory
  » These faults are handled quickly: just read the PTE from the page table in memory and load it into the TLB
- Uncommon case is that the TLB faults again on the PTE because of the PTE protection bits (e.g., page is invalid)
  » Becomes a page fault

Page Faults

The PTE can indicate a protection fault:
- Read/write/execute: operation not permitted on the page
- Invalid: virtual page not allocated, or page not in physical memory

The TLB traps to the OS (software takes over):
- R/W/E: OS usually will send the fault back up to the process, or might be playing games (e.g., copy on write, mapped files)
- Invalid:
  » Virtual page not allocated in the address space: OS sends the fault to the process (e.g., segmentation fault)
  » Page not in physical memory: OS allocates a frame, reads the page from disk, maps the PTE to the physical frame

Advanced Functionality

Now we're going to look at some advanced functionality that the OS can provide applications using virtual memory tricks:
- Shared memory
- Copy on Write
- Mapped files

Sharing

Private virtual address spaces protect applications from each other:
- Usually exactly what we want
- But this makes it difficult to share data (have to copy)
- Parents and children in a forking Web server or proxy will want to share an in-memory cache without copying

We can use shared memory to allow processes to share data using direct memory references:
- Both processes see updates to the shared memory segment
  » Process B can immediately read an update by process A
- How are we going to coordinate access to shared data?

Page-Level Sharing

How can we implement sharing using page tables?
- Have PTEs in both tables map to the same physical frame
- Each PTE can have different protection values
- Must update both PTEs when the page becomes invalid

Can map shared memory at the same or different virtual addresses in each process's address space:
- Different: flexible (no address space conflicts), but pointers inside the shared memory segment are invalid (Why?)
- Same: less flexible, but shared pointers are valid (Why?)
- What happens if a pointer inside the shared segment references an address outside the segment?

Read Sharing

[Figure: processes P0 and P1 translate virtual pages X and Y, respectively, through their own page tables to the same physical frame A; both PTEs are marked valid (1) and read-only (0).]

Copy on Write

OSes spend a lot of time copying data:
- System call arguments between user/kernel space
- Entire address spaces to implement fork()

Use Copy on Write (CoW) to defer large copies as long as possible, hoping to avoid them altogether:
- Instead of copying pages, create shared mappings of parent pages in the child's virtual address space
- Shared pages are protected as read-only in the child
  » Reads happen as usual
  » Writes generate a protection fault, trap to the OS, copy the page, change the page mapping in the child's page table, restart the write instruction
- How does this help fork()? (Implemented as Unix vfork())

CoW: Read Sharing to Start

[Figure: after fork(), P0 and P1 both map their virtual pages to the same physical frame A with valid, read-only PTEs, exactly as in the read-sharing case.]

CoW: Exclusive Page on Write

[Figure: when P0 writes, the OS copies frame A to a new frame B and updates P0's PTE to map frame B with the write bit set; P1's PTE still maps frame A read-only.]

Mapped Files

Mapped files enable processes to do file I/O using loads and stores:
- Instead of open, read into buffer, operate on buffer, ...
- Bind a file to a virtual memory region (mmap() in Unix)
- PTEs map virtual addresses to physical frames holding file data
- Virtual address base + N refers to offset N in the file

Initially, all pages mapped to the file are invalid:
- OS reads a page from the file when an invalid page is accessed
- OS writes a page to the file when it is evicted, or the region is unmapped
- If a page is not dirty (has not been written to), no write is needed
  » Another use of the dirty bit in the PTE
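As a concrete illustration, Python's mmap module wraps the Unix mmap() described above. A minimal sketch (the file name and contents are made up for the demo): after mapping, slice reads are loads from file-backed pages and slice writes are stores that the OS writes back to the file.

```python
import mmap
import os
import tempfile

# Create a small scratch file to map (hypothetical demo data).
fd, path = tempfile.mkstemp()
os.write(fd, b"hello mapped world")
os.close(fd)

# Bind the file into the address space: loads and stores replace read()/write().
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as m:
        assert m[0:5] == b"hello"   # a load from offset 0 returns file bytes
        m[0:5] = b"HELLO"           # a store at offset 0 becomes a file write

# The store is reflected in the file once the mapping is closed/flushed.
with open(path, "rb") as f:
    result = f.read()
os.remove(path)
print(result)
```

Note that the process never calls write() on the file; the OS notices the dirty page and writes it back, which is exactly the dirty-bit mechanism the slide describes.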

Mapped Files (2)

The file is essentially backing store for that region of the virtual address space (instead of using the swap file). Virtual address space not backed by real files is also called anonymous VM.

Advantages:
- Uniform access for files and memory (just use pointers)
- Less copying

Drawbacks:
- Process has less control over data movement
  » OS handles faults transparently
- Does not generalize to streamed I/O (pipes, sockets, etc.)

Paging Summary

Optimizations:
- Managing page tables (space)
- Efficient translations with TLBs (time)
- Demand paged virtual memory (space)

Advanced functionality:
- Sharing memory
- Copy on Write
- Mapped files

Locality

All paging schemes depend on locality: processes reference pages in localized patterns.
- Temporal locality: locations referenced recently are likely to be referenced again
- Spatial locality: locations near recently referenced locations are likely to be referenced soon

Although the cost of paging is high, if it is infrequent enough it is acceptable. Processes usually exhibit both kinds of locality during their execution, making paging practical.

Demand Paging (OS)

Recall demand paging from the OS perspective:
- Pages are evicted to disk when memory is full
- Pages are loaded from disk when referenced again
- References to evicted pages cause a TLB miss
  » PTE was invalid, causes a fault
- OS allocates a page frame, reads the page from disk
- When the I/O completes, the OS fills in the PTE, marks it valid, and restarts the faulting process

Dirty vs. clean pages:
- Actually, only dirty (modified) pages need to be written to disk
- Clean pages do not, but you need to know where on disk to read them from again

Demand Paging (Process)

Demand paging is also used when a process first starts up. When a process is created, it has:
- A brand new page table with all valid bits off
- No pages in memory

When the process starts executing:
- Instructions fault on code and data pages
- Faulting stops when all necessary code and data pages are in memory
- Only code and data needed by the process needs to be loaded
- This, of course, changes over time

Page Replacement

When a page fault occurs, the OS loads the faulted page from disk into a page frame of memory. At some point, the process has used all of the page frames it is allowed to use (likely less than all of available memory). When this happens, the OS must replace a page for each page faulted in: it must evict a page to free up a page frame.

The page replacement algorithm determines how this is done, and they come in all shapes and sizes.

Evicting the Best Page

The goal of the replacement algorithm is to reduce the fault rate by selecting the best victim page to remove:
- The best page to evict is the one never touched again: we will never fault on it
- "Never" is a long time, so picking the page closest to never is the next best thing
- Evicting the page that won't be used for the longest period of time minimizes the number of page faults (proved by Belady)

We're now going to survey various replacement algorithms, starting with Belady's.

Belady's Algorithm

Belady's algorithm is known as the optimal page replacement algorithm because it has the lowest fault rate for any page reference stream:
- Idea: replace the page that will not be used for the longest time in the future
- Problem: have to predict the future

Why is Belady's useful, then? Use it as a yardstick:
- Compare implementations of page replacement algorithms with the optimal to gauge room for improvement
- If optimal is not much better, then the algorithm is pretty good
- If optimal is much better, then the algorithm could use some work
  » Random replacement is often the lower bound
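Although Belady's algorithm can't be implemented online (it needs the future), it is easy to run offline over a recorded reference stream, which is exactly how it is used as a yardstick. A minimal simulation sketch (the reference string is the classic textbook example, not from these slides):

```python
def opt_faults(refs, nframes):
    """Count faults under Belady's optimal policy: on a miss with memory
    full, evict the resident page whose next use is farthest in the future
    (or that is never used again)."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                      # hit: nothing to do
        faults += 1
        if len(frames) < nframes:
            frames.add(page)              # free frame available
            continue

        def next_use(p):
            # Position of p's next reference; "infinity" if never used again.
            try:
                return refs.index(p, i + 1)
            except ValueError:
                return float("inf")

        frames.remove(max(frames, key=next_use))  # evict farthest-future page
        frames.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(opt_faults(refs, 3))   # 7 faults: no policy can do better on this stream
```

Any real algorithm's fault count on the same trace can then be compared against this number to gauge room for improvement.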

First-In First-Out (FIFO)

FIFO is an obvious algorithm and simple to implement:
- Maintain a list of pages in the order in which they were paged in
- On replacement, evict the one brought in longest ago

Why might this be good? Maybe the page brought in longest ago is not being used. Why might this be bad? Then again, maybe it is; we don't have any info to say one way or the other.

FIFO suffers from Belady's Anomaly: the fault rate might actually increase when the algorithm is given more memory (very bad).

Belady's Anomaly with FIFO

[Table: FIFO on the page reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 incurs 9 page faults with 3 frames but 10 page faults with 4 frames, i.e., more memory yields more faults.]

Least Recently Used (LRU)

LRU uses reference information to make a more informed replacement decision:
- Idea: we can't predict the future, but we can make a guess based upon past experience
- On replacement, evict the page that has not been used for the longest time in the past (Belady's: future)
- When does LRU do well? When does LRU do poorly?

Implementation: to be perfect, we would need to timestamp every reference (or maintain a stack), which is much too costly. So we need to approximate it.
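A simulation makes the "timestamp every reference" cost concrete: the sketch below (not from the slides) records the last-use time of every page on every reference, which is exactly the bookkeeping real hardware cannot afford, and why the approximations on the next slide exist.

```python
def lru_faults(refs, nframes):
    """LRU: evict the resident page referenced furthest in the past.
    last_use[p] plays the role of a per-reference timestamp."""
    last_use, resident, faults = {}, set(), 0
    for i, page in enumerate(refs):
        if page not in resident:
            faults += 1
            if len(resident) == nframes:
                # Victim = smallest (oldest) last-use timestamp.
                resident.remove(min(resident, key=lambda p: last_use[p]))
        resident.add(page)
        last_use[page] = i          # updated on EVERY reference
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))   # 10 faults on this stream
```

Interestingly, LRU does worse than FIFO (10 vs. 9 faults) on this particular adversarial string, though it does well on workloads with real temporal locality.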

Approximating LRU

LRU approximations use the PTE reference bit:
- Keep a counter for each page
- At regular intervals, for every page:
  » If ref bit = 0, increment the counter
  » If ref bit = 1, zero the counter
  » Zero the reference bit
- The counter will contain the number of intervals since the last reference to the page
- The page with the largest counter is the least recently used

Some architectures don't have a reference bit:
- Can simulate the reference bit using the valid bit to induce faults
- What happens when we make a page invalid?
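The counter scheme above can be sketched directly. In this toy model (page names and the sampling loop are illustrative, not from the slides), one call corresponds to one sampling interval, and the counters end up holding the number of intervals since each page was last referenced:

```python
def age_pages(counters, ref_bits):
    """One sampling interval of the reference-bit counter scheme:
    afterwards, counters[p] = intervals since page p was last referenced."""
    for p in counters:
        if ref_bits[p]:
            counters[p] = 0      # referenced this interval: age resets
        else:
            counters[p] += 1     # not referenced: one interval older
        ref_bits[p] = 0          # always clear the bit for the next interval

counters = {"A": 0, "B": 0, "C": 0}
bits = {"A": 1, "B": 0, "C": 0}
age_pages(counters, bits)        # interval 1: only A was referenced
age_pages(counters, bits)        # interval 2: nothing was referenced
victim = max(counters, key=counters.get)   # largest counter = LRU candidate
print(victim, counters)
```

The approximation is coarse: all references within one interval look identical, so the quality of the estimate depends on how often the OS samples.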

LRU Clock (Not Recently Used)

Not Recently Used (NRU), used by Unix: replace a page that is "old enough".
- Arrange all of the physical page frames in a big circle (clock)
- A clock hand is used to select a good LRU candidate
  » Sweep through the pages in circular order like a clock
  » If the ref bit is off, the page hasn't been used recently (What is the minimum age if the ref bit is off?)
  » If the ref bit is on, turn it off and go to the next page
- The arm moves quickly when pages are needed
- Low overhead when there is plenty of memory
- If memory is large, accuracy of the information degrades
  » Use additional hands
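The clock sweep fits in a few lines. A minimal sketch (frame and page names are made up; a real kernel would store the hand and bits in its frame table):

```python
def clock_evict(frames, ref_bits, hand):
    """Second-chance sweep: clear the ref bit of each recently used page
    and pass it by; evict the first page whose bit is already clear."""
    while True:
        page = frames[hand]
        if ref_bits[page]:
            ref_bits[page] = 0            # used recently: second chance
            hand = (hand + 1) % len(frames)
        else:
            return page, hand             # victim, and where the hand rests

frames = ["A", "B", "C", "D"]
bits = {"A": 1, "B": 1, "C": 0, "D": 1}
victim, hand = clock_evict(frames, bits, 0)
print(victim)   # "C": the first page found with its ref bit off
```

Note the side effect: sweeping past A and B clears their bits, so they become eviction candidates on the next pass unless they are referenced again; that is the "minimum age" guarantee the slide alludes to.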

Fixed vs. Variable Space

In a multiprogramming system, we need a way to allocate memory to competing processes. Problem: how do we determine how much memory to give to each process?
- Fixed space algorithms
  » Each process is given a limit of pages it can use
  » When it reaches the limit, it replaces from its own pages
  » Local replacement: some processes may do well while others suffer
- Variable space algorithms
  » A process's set of pages grows and shrinks dynamically
  » Global replacement: one process can ruin it for the rest

Working Set Model

The working set of a process is used to model the dynamic locality of its memory usage (defined by Peter Denning in the 1960s).

Definition: WS(t, w) = {pages P such that P was referenced in the time interval (t - w, t)}, where t is the current time and w is the working set window (measured in page references).

A page is in the working set (WS) only if it was referenced in the last w references.

Working Set Size

The working set size is the number of pages in the working set, i.e., the number of pages referenced in the interval (t - w, t). The working set size changes with program locality:
- During periods of poor locality, you reference more pages
- Within that period of time, the working set size is larger

Intuitively, we want the working set to be the set of pages a process needs in memory to prevent heavy faulting:
- Each process has a parameter w that determines a working set with few faults
- Denning: don't run a process unless its working set is in memory
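The definition translates directly into code over a recorded reference stream. A sketch (the reference string is invented to show locality shifting): the first window spans a phase touching several pages, the second falls inside a tight loop on one page, so the working set shrinks.

```python
def working_set(refs, t, w):
    """WS(t, w): the set of pages referenced in the last w references,
    i.e., refs[t - w + 1 .. t] (clamped at the start of the trace)."""
    return set(refs[max(0, t - w + 1): t + 1])

refs = [1, 2, 1, 3, 1, 2, 4, 4, 4, 4]
print(working_set(refs, 5, 4))        # mixed phase: several distinct pages
print(len(working_set(refs, 9, 4)))   # tight loop on page 4: WS size is 1
```

This is exactly why the model is useful as an abstraction even though it is impractical as a replacement policy: the working set size at time t is a direct measurement of the process's current locality.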

Working Set Problems

Problems:
- How do we determine w?
- How do we know when the working set changes?

These are too hard to answer, so the working set is not used in practice as a page replacement algorithm. However, it is still used as an abstraction, and the intuition is still valid: when people ask, "How much memory does Netscape need?", they are in effect asking for the size of Netscape's working set.

Page Fault Frequency (PFF)

Page Fault Frequency (PFF) is a variable space algorithm that uses a more ad hoc approach:
- Monitor the fault rate for each process
- If the fault rate is above a high threshold, give it more memory
  » So that it faults less
  » But not always (FIFO, Belady's Anomaly)
- If the fault rate is below a low threshold, take away memory
  » It should fault more
  » But not always

It is hard to use PFF to distinguish between changes in locality and changes in the size of the working set.
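The PFF control loop can be sketched in a few lines. The thresholds and the one-frame-at-a-time step below are illustrative assumptions, not values from the slides; a real system would tune both:

```python
def pff_adjust(nframes, faults, refs_sampled, high=0.10, low=0.02):
    """Ad hoc PFF controller: grow the allocation when the measured fault
    rate is above `high`, shrink it when below `low`, else leave it alone.
    (Thresholds are hypothetical.)"""
    rate = faults / refs_sampled
    if rate > high:
        return nframes + 1               # faulting too much: grant a frame
    if rate < low:
        return max(1, nframes - 1)       # plenty of headroom: reclaim one
    return nframes

print(pff_adjust(8, 30, 100))   # rate 0.30 > high threshold -> grow to 9
print(pff_adjust(8, 1, 100))    # rate 0.01 < low threshold  -> shrink to 7
```

The weakness the slide mentions shows up directly here: a rate spike looks the same whether the process's locality temporarily changed or its working set genuinely grew, so the controller cannot tell which response is right.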

Thrashing

Page replacement algorithms try to avoid thrashing: when most of the time is spent by the OS paging data back and forth from disk, no time is spent doing useful work (making progress).
- In this situation, the system is overcommitted
  » No idea which pages should be in memory to reduce faults
  » Could just be that there isn't enough physical memory for all of the processes in the system
  » Ex: running Windows XP with 64 MB of memory
- Possible solutions
  » Swapping: write out all pages of a process
  » Buy more memory

Summary

Page replacement algorithms:
- Belady's: optimal replacement (minimum number of faults)
- FIFO: replace the page loaded furthest in the past
- LRU: replace the page referenced furthest in the past
  » Approximate using the PTE reference bit
- LRU Clock: replace a page that is "old enough"
- Working Set: keep in memory the set of pages that yields a minimal fault rate (the "working set")
- Page Fault Frequency: grow/shrink the page set as a function of the fault rate

Multiprogramming: should a process replace its own page, or that of another?

Next time

New topic: file systems. Read Chapters 9 and 10.