CS162 Operating Systems and Systems Programming Lecture 15. Page Allocation and Replacement


CS162 Operating Systems and Systems Programming
Lecture 15: Page Allocation and Replacement
October 2007, Prof. John Kubiatowicz
http://inst.eecs.berkeley.edu/~cs162

Review: Demand Paging Mechanisms
- PTE helps us implement demand paging
  » Valid => page in memory, PTE points at physical page
  » Not Valid => page not in memory; use info in PTE to find it on disk when necessary
- Suppose user references page with invalid PTE?
  » Memory Management Unit (MMU) traps to OS
  » Resulting trap is a "Page Fault"
- What does OS do on a Page Fault?
  » Choose an old page to replace
  » If old page modified (D=1), write contents back to disk
  » Change its PTE and any cached TLB entry to be invalid
  » Load new page into memory from disk
  » Update page table entry, invalidate TLB for new entry
  » Continue thread from original faulting location
- TLB for new page will be loaded when thread continued!
- While pulling pages off disk for one process, OS runs another process from ready queue
  » Suspended process sits on wait queue

Review: Software-Loaded TLB
- MIPS/Nachos TLB is loaded by software
  » High TLB hit rate => ok to trap to software to fill the TLB, even if slower
  » Simpler hardware and added flexibility: software can maintain translation tables in whatever convenient format
- How can a process run without a hardware TLB fill?
  » Fast path (TLB hit with valid=1): translation to physical page done by hardware
  » Slow path (TLB hit with valid=0, or TLB miss): hardware receives a "TLB Fault"
- What does OS do on a TLB Fault?
  » Traverse page table to find appropriate PTE
  » If valid=1, load page table entry into TLB, continue thread
  » If valid=0, perform Page Fault detailed previously, then continue thread
- Everything is transparent to the user process: it doesn't know about paging to/from disk; it doesn't even know about software TLB handling
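The page-fault steps above can be sketched in runnable form. This is a toy model under loud assumptions: the `PTE`, `FakeDisk`, and `choose_victim` names and fields are my own illustration, not the Nachos or MIPS interfaces.

```python
from dataclasses import dataclass

@dataclass
class PTE:                      # page-table entry; fields are illustrative
    vpn: int
    disk_block: int
    valid: bool = False
    dirty: bool = False
    frame: int = -1

class FakeDisk:
    """Stand-in backing store mapping disk blocks to page contents."""
    def __init__(self):
        self.blocks = {}
    def read(self, block):
        return self.blocks.get(block, "zeros")
    def write(self, block, data):
        self.blocks[block] = data

def handle_page_fault(vpn, page_table, frames, disk, choose_victim):
    """The OS steps from the slide, in order."""
    victim = choose_victim(page_table)            # 1. choose an old page to replace
    if victim.dirty:                              # 2. if modified (D=1), write back
        disk.write(victim.disk_block, frames[victim.frame])
        victim.dirty = False
    victim.valid = False                          # 3. invalidate its PTE (and TLB entry)
    pte = page_table[vpn]
    frames[victim.frame] = disk.read(pte.disk_block)  # 4. load new page from disk
    pte.frame, pte.valid = victim.frame, True     # 5. update page table entry
    victim.frame = -1
    # 6. continue thread from the original faulting location;
    #    the TLB refills itself on the restarted access
```

A real kernel would also serialize this against other faults and run another ready process while the disk read is in flight, as the slide notes.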
Goals for Today
- Finish discussion of Precise Exceptions
- Page Replacement Policies
  » Clock Algorithm
  » Nth chance algorithm
  » Second-Chance-List Algorithm
- Page Allocation Policies
- Working Set/Thrashing

Note: Some slides and/or pictures in the following are adapted from slides (c) 2005 Silberschatz, Galvin, and Gagne. Many slides generated from my lecture notes by Kubiatowicz.

Review: Transparent Exceptions
- [Figure: user thread executes instructions; a TLB fault traps to the OS, which loads the TLB or fetches the page and loads the TLB, then resumes the thread.]
- Hardware must help out by saving:
  » Faulting instruction and partial state
  » Processor state: sufficient to restart user thread (save/restore registers, stack, etc.)
- Precise Exception => state of the machine is preserved as if program executed up to the offending instruction
  » All previous instructions completed
  » Offending instruction and all following instructions act as if they have not even started
  » Difficult with pipelining, out-of-order execution, ...
  » MIPS takes this position
- Modern techniques for out-of-order execution and branch prediction help implement precise interrupts

Consider weird things that can happen
- What if an instruction has side effects? Options:
  » Unwind side-effects (easy to restart)
  » Finish off side-effects (messy!)
- Example 1: mov (sp)+,10
  » What if page fault occurs when write to stack pointer?
  » Did sp get incremented before or after the page fault?
- Example 2: strcpy (r1), (r2)
  » Source and destination overlap: can't unwind in principle!
  » IBM S/370 and VAX solution: execute twice, once read-only
- What about RISC processors? For instance delayed branches?
  » Example: bne somewhere; ld r1,(sp)
  » Precise exception state consists of two PCs: PC and nPC
- Delayed exceptions:
  » Example: div r1, r2, r3; ld r1, (sp)
  » What if it takes many cycles to discover the divide by zero, but the load has already caused a page fault?

Precise Exceptions
- Precise => state of the machine is preserved as if program executed up to the offending instruction
  » All previous instructions completed
  » Offending instruction and all following instructions act as if they have not even started
  » Same system code will work on different implementations
  » Difficult in the presence of pipelining, out-of-order execution, ...
  » MIPS takes this position
- Imprecise => system software has to figure out what is where and put it all back together
- Performance goals often lead designers to forsake precise interrupts
  » System software developers, users, markets etc. usually wish they had not done this
- Modern techniques for out-of-order execution and branch prediction help implement precise interrupts

Steps in Handling a Page Fault
- [Figure: on a page fault the OS traps, finds the page on the backing store, reads it into a free frame, updates the page table, and restarts the faulting instruction.]

Demand Paging Example
- Since Demand Paging is like caching, can compute average access time! ("Effective Access Time")
  » EAT = Hit Rate x Hit Time + Miss Rate x Miss Time
- Example:
  » Memory access time = 200 nanoseconds
  » Average page-fault service time = 8 milliseconds
  » Suppose p = probability of miss, 1-p = probability of hit
  » Then, we can compute EAT as follows:
    EAT = (1-p) x 200 ns + p x 8 ms
        = (1-p) x 200 ns + p x 8,000,000 ns
        = 200 ns + p x 7,999,800 ns
- If one access out of 1,000 causes a page fault, then EAT = 8.2 microseconds: this is a slowdown by a factor of 40!
- What if we want a slowdown of less than 10%?
  » 200 ns x 1.1 > EAT => p < 2.5 x 10^-6
  » This is about 1 page fault in 400,000!

What Factors Lead to Misses?
- Compulsory Misses: pages that have never been paged into memory before
  » How might we remove these misses? Prefetching: loading them into memory before needed
  » Need to predict future somehow! More later.
- Capacity Misses: not enough memory. Must somehow increase size. Can we do this?
  » One option: increase amount of DRAM (not a quick fix!)
  » Another option: if multiple processes in memory, adjust percentage of memory allocated to each one!
- Conflict Misses: technically, conflict misses don't exist in virtual memory, since it is a "fully-associative" cache
- Policy Misses: caused when pages were in memory, but kicked out prematurely because of the replacement policy
  » How to fix? Better replacement policy

Page Replacement Policies
- Why do we care about Replacement Policy?
  » Replacement is an issue with any cache
  » Particularly important with pages: the cost of being wrong is high (must go to disk), so must keep important pages in memory, not toss them out
- FIFO (First In, First Out):
  » Throw out oldest page. Be fair: let every page live in memory for the same amount of time.
  » Bad, because it throws out heavily used pages instead of infrequently used pages
- MIN (Minimum):
  » Replace page that won't be used for the longest time
  » Great, but can't really know future
  » Makes good comparison case, however
- RANDOM:
  » Pick random page for every replacement
  » Typical solution for TLBs. Simple hardware
  » Pretty unpredictable: makes it hard to make real-time guarantees

Replacement Policies (Con't)
- LRU (Least Recently Used):
  » Replace page that hasn't been used for the longest time
  » Programs have locality, so if something not used for a while, unlikely to be used in the near future
  » Seems like LRU should be a good approximation to MIN
- How to implement LRU? Use a list!
  » Head -> Page 6 -> Page 7 -> Page 1 -> Page 2 <- Tail (LRU)
  » On each use, remove page from list and place at head
  » LRU page is at tail
- Problems with this scheme for paging?
  » Need to know immediately when each page is used so that its position in the list can change
  » Many instructions for each hardware access
- In practice, people approximate LRU (more later)
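The effective-access-time arithmetic above is easy to check. A quick sketch; the 200 ns and 8 ms figures come from the slide:

```python
def effective_access_time(p_miss, hit_ns=200.0, miss_ns=8_000_000.0):
    """EAT = (1 - p) * hit time + p * miss time, in nanoseconds."""
    return (1 - p_miss) * hit_ns + p_miss * miss_ns

# One fault per 1,000 accesses: EAT ~ 8199.8 ns ~ 8.2 us, about a 41x slowdown
eat = effective_access_time(1 / 1000)
slowdown = eat / 200.0

# For < 10% slowdown we need EAT < 220 ns, i.e. p < 20 / 7,999,800
p_max = (220.0 - 200.0) / (8_000_000.0 - 200.0)   # ~2.5e-6, ~1 fault in 400,000
```

Note 1/p_max is 399,990, which the slide rounds to "1 page fault in 400,000".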

Example: FIFO
- Suppose we have 3 page frames, 4 virtual pages, and the following reference stream:
  » A B C A B D A D B C B
- Consider FIFO page replacement:
  » FIFO: 7 faults
  » When referencing D, replacing A is a bad choice, since we need A again right away

Example: MIN
- Suppose we have the same reference stream:
  » A B C A B D A D B C B
- Consider MIN page replacement:
  » MIN: 5 faults
  » Where will D be brought in? Look for the page not referenced for the farthest in the future.
- What will LRU do?
  » Same decisions as MIN here, but that won't always be true!

Administrivia
- Exam is graded. Sorry this took too long; grades should be in glookup very soon.
  » If you are one or more standard deviations below the mean, you need to do better: you are in danger of getting a D or F
  » Feel free to come talk with me
- Solutions to the Midterm will be up this afternoon on the Handouts page
- Project autograder:
  » Will be run a couple of times today and tomorrow
  » More times on Wednesday; yet more times on Thursday
- Web mirror:
  » Problem with links after last class: people couldn't get notes. Sorry about that! I am the right person to complain to.
  » There is a mirror of the course web site at: http://www.cs.berkeley.edu/~kubitron/cs162

When will LRU perform badly?
- Consider a cyclic reference stream such as: A B C D A B C D A B C D
- LRU performs as follows (same as FIFO here): every reference is a page fault!
- MIN does much better on this stream
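The fault counts in these examples can be reproduced with a small simulator. This is a sketch, not OS code: the "min" policy peeks at the future of the reference stream, which a real OS cannot do.

```python
def count_faults(refs, nframes, policy):
    """Count page faults for 'fifo', 'lru', or 'min' replacement."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            if policy == "lru":                 # keep list in recency order
                frames.remove(page)
                frames.append(page)
            continue
        faults += 1
        if len(frames) == nframes:
            if policy == "min":                 # evict page used farthest in future
                def next_use(q):
                    try:
                        return refs.index(q, i + 1)
                    except ValueError:
                        return float("inf")     # never referenced again
                frames.remove(max(frames, key=next_use))
            else:                               # fifo and lru both evict frames[0]
                frames.pop(0)
        frames.append(page)
    return faults

stream = list("ABCABDADBCB")    # the slide's 3-frame example: FIFO 7, MIN 5
cyclic = list("ABCD") * 3       # the LRU-worst-case stream: every reference faults
```

On the cyclic stream with 3 frames, LRU and FIFO fault on all 12 references while MIN faults only 6 times, matching the "When will LRU perform badly?" slide.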

Graph of Page Faults Versus the Number of Frames
- [Figure: page-fault count decreasing as the number of frames grows.]
- One desirable property: when you add memory the miss rate goes down
  » Does this always happen?
  » Seems like it should, right?
- No: Belady's anomaly
  » Certain replacement algorithms (FIFO) don't have this obvious property!

Adding Memory Doesn't Always Help Fault Rate
- Does adding memory reduce the number of page faults?
  » Yes for LRU and MIN
  » Not necessarily for FIFO! (Called Belady's anomaly)
- After adding memory:
  » With FIFO, contents can be completely different
  » In contrast, with LRU or MIN, the contents of memory with X pages are a subset of the contents with X+1 pages

Implementing LRU
- Perfect:
  » Timestamp page on each reference
  » Keep list of pages ordered by time of reference
  » Too expensive to implement in reality for many reasons
- Clock Algorithm: arrange physical pages in a circle with a single clock hand
  » Approximate LRU (approx to approx to MIN)
  » Replace an old page, not the oldest page
- Details:
  » Hardware "use" bit per physical page:
    hardware sets use bit on each reference;
    if use bit isn't set, means not referenced in a long time;
    Nachos hardware sets use bit in the TLB; you have to copy this back to the page table when the TLB entry gets replaced
  » On page fault: advance clock hand (not real time); check use bit:
    1 => used recently; clear and leave alone
    0 => selected candidate for replacement
  » Will it always find a page or loop forever?
    Even if all use bits are set, will eventually loop around => FIFO

Clock Algorithm: Not Recently Used
- [Figure: set of all pages in memory arranged in a circle; a single clock hand advances only on page fault, checking for pages not used recently and marking pages as not used recently.]
- What if hand is moving slowly? Good sign or bad sign?
  » Not many page faults and/or find page quickly
- What if hand is moving quickly?
  » Lots of page faults and/or lots of reference bits set
- One way to view clock algorithm:
  » Crude partitioning of pages into two groups: young and old
  » Why not partition into more than 2 groups?
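Belady's anomaly is easy to demonstrate. The reference stream below is the classic textbook example (my choice; the garbled figure on the original slide is not recoverable), and FIFO really does fault more with four frames than with three:

```python
def fifo_faults(refs, nframes):
    """Page faults for FIFO replacement with the given number of frames."""
    frames, faults = [], 0
    for page in refs:
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            frames.pop(0)              # evict the oldest page
        frames.append(page)
    return faults

belady = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
# 3 frames: 9 faults; 4 frames: 10 faults -- adding memory hurt FIFO
```

LRU and MIN cannot exhibit this, because their memory contents with X frames are always a subset of the contents with X+1 frames.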

Nth Chance version of Clock Algorithm
- Nth chance algorithm: give page N chances
  » OS keeps a counter per page: # sweeps
  » On page fault, OS checks use bit:
    1 => clear use and also clear counter (used in last sweep)
    0 => increment counter; if count = N, replace page
  » Means that the clock hand has to sweep by N times without the page being used before the page is replaced
- How do we pick N?
  » Why pick large N? Better approximation to LRU
    (if N ~ 1K, really good approximation)
  » Why pick small N? More efficient
    (otherwise might have to look a long way to find a free page)
- What about dirty pages?
  » Takes extra overhead to replace a dirty page, so give dirty pages an extra chance before replacing?
  » Common approach:
    clean pages, use N=1;
    dirty pages, use N=2 (and write back to disk when N=1)

Clock Algorithms: Details
- Which bits of a PTE are useful to us?
  » Use: set when page is referenced; cleared by clock algorithm
  » Modified: set when page is modified, cleared when page written to disk
  » Valid: ok for program to reference this page
  » Read-only: ok for program to read page, but not modify
    (for example, for catching modifications to code pages!)
- Do we really need a hardware-supported "modified" bit?
  » No. Can emulate it (BSD Unix) using the read-only bit:
    initially, mark all pages as read-only, even data pages;
    on write, trap to OS; OS sets software "modified" bit, and marks page as read-write;
    whenever page comes back in from disk, mark read-only

Clock Algorithms Details (continued)
- Do we really need a hardware-supported "use" bit?
  » No. Can emulate it similar to above:
    mark all pages as invalid, even if in memory;
    on read to invalid page, trap to OS;
    OS sets use bit, and marks page read-only
  » Get modified bit in same way as previous:
    on write, trap to OS (either invalid or read-only);
    set use and modified bits, mark page read-write
  » When clock hand passes by, reset use and modified bits and mark page as invalid again
- Remember, however, that clock is just an approximation of LRU
  » Can we do a better approximation, given that we have to take page faults on some reads and writes to collect use information?
  » Need to identify an old page, not the oldest page!
  » Answer: second-chance list

Second-Chance List Algorithm (VAX/VMS)
- [Figure: memory split into Directly Mapped Pages (marked RW, FIFO list) and a Second-Chance List (marked invalid, LRU list); new pages are paged in from disk to the front of the active list, overflow victims move to the front of the second-chance list, and the LRU victim at its end is paged out.]
- Split memory in two: active list (RW), SC list (invalid)
- Access pages in active list at full speed
- Otherwise, page fault:
  » Always move overflow page from end of active list to front of second-chance list (SC) and mark invalid
  » Desired page on SC list: move to front of active list, mark RW
  » Not on SC list: page in to front of active list, mark RW; page out LRU victim at end of SC list
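The Nth-chance sweep can be sketched as follows. This is my own minimal model: a `use` array stands in for the hardware use bits, and with N=1 it reduces to the basic clock algorithm.

```python
class NthChanceClock:
    """Nth-chance clock: replace a page only after N sweeps without use."""
    def __init__(self, nframes, n):
        self.n = n
        self.frames = [None] * nframes
        self.use = [0] * nframes       # stand-in for the hardware use bits
        self.count = [0] * nframes     # sweeps since last use
        self.hand = 0

    def access(self, page):
        if page in self.frames:
            self.use[self.frames.index(page)] = 1
            return False                          # hit
        while True:                               # page fault: sweep the clock
            i = self.hand
            self.hand = (self.hand + 1) % len(self.frames)
            if self.frames[i] is None:
                break                             # free frame available
            if self.use[i]:                       # used since last sweep:
                self.use[i] = 0                   #   clear use bit
                self.count[i] = 0                 #   and clear counter
            elif self.count[i] + 1 >= self.n:
                break                             # N sweeps unused: replace
            else:
                self.count[i] += 1
        self.frames[i], self.use[i], self.count[i] = page, 1, 0
        return True                               # fault
```

Running the slide's 3-frame reference stream A B C A B D A D B C B through N=1 gives 7 faults, the same as FIFO here since every resident page keeps getting its use bit set.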

Second-Chance List Algorithm (Con't)
- How many pages for the second-chance list?
  » If 0 => FIFO
  » If all => LRU, but page fault on every page reference
- Pick intermediate value. Result is:
  » Pro: few disk accesses (page only goes to disk if unused for a long time)
  » Con: increased overhead trapping to OS (software/hardware tradeoff)
- With page translation, we can adapt to any kind of access the program makes
  » Later, we will show how to use page translation/protection to share memory between threads on widely separated machines
- Question: why didn't VAX include a "use" bit?
  » Strecker (architect) asked OS people; they said they didn't need it, so didn't implement it
  » He later got blamed, but VAX did OK anyway

Free List
- [Figure: set of all pages in memory arranged in a circle; a single clock hand advances as needed to keep the free list full ("background"); the free list holds free pages for processes.]
- Keep a set of free pages ready for use in demand paging
  » Freelist filled in background by clock algorithm or other technique ("Pageout daemon")
  » Dirty pages start copying back to disk when they enter the list
- Like VAX second-chance list
  » If page needed before reused, just return it to the active set
- Advantage: faster page fault
  » Can always use a page (or pages) immediately on fault

Demand Paging (more details)
- Does a software-loaded TLB need a "use" bit? Two options:
  » Hardware sets use bit in TLB; when TLB entry is replaced, software copies use bit back to page table
  » Software manages TLB entries as FIFO list; everything not in TLB is on the Second-Chance List, managed as strict LRU
- Core Map
  » Page tables map virtual page -> physical page
  » Do we need a reverse mapping (i.e. physical page -> virtual page)?
    Yes. Clock algorithm runs through page frames; if sharing, then multiple virtual pages per physical page.
    Can't push a page out to disk without invalidating all of its PTEs.

Summary
- Replacement policies
  » FIFO: place pages on queue, replace page at end
  » MIN: replace page that will be used farthest in future
  » LRU: replace page used farthest in past
- Clock Algorithm: approximation to LRU
  » Arrange all pages in circular list
  » Sweep through them, marking as not "in use"
  » If page not "in use" for one pass, then can replace
- Nth-chance clock algorithm: another approximation to LRU
  » Give pages multiple passes of clock hand before replacing
- Second-Chance List algorithm: yet another approximation to LRU
  » Divide pages into two groups, one of which is truly LRU and managed on page faults
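The active/SC split can be modeled with two lists. A toy model under stated assumptions: the list sizes, the return labels, and the `disk_ins` counter are my own; real VAX/VMS details differ.

```python
from collections import deque

class SecondChanceLists:
    """Sketch of a second-chance list: FIFO active list + LRU overflow list."""
    def __init__(self, n_active, n_sc):
        self.n_active, self.n_sc = n_active, n_sc
        self.active = deque()   # mapped RW; FIFO order, newest at left
        self.sc = deque()       # marked invalid; LRU order, newest at left
        self.disk_ins = 0       # pages brought in from disk

    def _push_active(self, page):
        self.active.appendleft(page)
        if len(self.active) > self.n_active:
            victim = self.active.pop()        # FIFO overflow from active list
            self.sc.appendleft(victim)        # marked invalid: gets a second chance
            if len(self.sc) > self.n_sc:
                self.sc.pop()                 # true LRU victim, paged out

    def access(self, page):
        if page in self.active:
            return "fast hit"                 # full-speed access, no trap
        if page in self.sc:
            self.sc.remove(page)              # soft fault: trap, but no disk I/O
            self._push_active(page)
            return "sc hit"
        self.disk_ins += 1                    # hard fault: page in from disk
        self._push_active(page)
        return "disk"
```

With 2 active and 2 SC slots, the stream A B C A D B E C shows the tradeoff: A and B are re-promoted from the SC list without disk I/O, while C falls off the end and must be paged in again.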