CS162 Operating Systems and Systems Programming Lecture 15. Page Allocation and Replacement

CS162 Operating Systems and Systems Programming
Lecture 15: Page Allocation and Replacement
October 2009, Prof. John Kubiatowicz
http://inst.eecs.berkeley.edu/~cs162

Review: Demand Paging Mechanisms
- PTE helps us implement demand paging
  » Valid: page in memory, PTE points at physical page
  » Not Valid: page not in memory; use info in PTE to find it on disk when necessary
- Suppose user references page with invalid PTE?
  » Memory Management Unit (MMU) traps to OS; resulting trap is a "Page Fault"
- What does OS do on a Page Fault?
  » Choose an old page to replace
  » If old page modified ("D=1"), write contents back to disk
  » Change its PTE and any cached TLB entry to be invalid
  » Load new page into memory from disk
  » Update page table entry, invalidate TLB for new entry
  » Continue thread from original faulting location
- TLB entry for the new page will be loaded when the thread is continued!
- While pulling pages off disk for one process, OS runs another process from the ready queue
  » Suspended process sits on wait queue

Goals for Today
- Precise Exceptions
- Page Replacement Policies
  » Clock Algorithm
  » Nth chance algorithm
  » Second-Chance-List Algorithm
- Page Allocation Policies
- Working Set/Thrashing
Note: Some slides and/or pictures in the following are adapted from slides ©2005 Silberschatz, Galvin, and Gagne. Many slides generated from my lecture notes by Kubiatowicz.

Software-Loaded TLB
- MIPS/Nachos TLB is loaded by software
  » High TLB hit rate ⇒ OK to trap to software to fill the TLB, even if slower
  » Simpler hardware and added flexibility: software can maintain translation tables in whatever convenient format
- How can a process run without hardware TLB fill?
  » Fast path (TLB hit with valid=1): translation to physical page done by hardware
  » Slow path (TLB hit with valid=0 or TLB miss): hardware receives a "TLB Fault"
- What does OS do on a TLB Fault?
  » Traverse page table to find appropriate PTE
  » If valid=1, load page table entry into TLB, continue thread
  » If valid=0, perform "Page Fault" detailed previously, then continue thread
- Everything is transparent to the user process:
  » It doesn't know about paging to/from disk
  » It doesn't even know about software TLB handling
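
To make the fast-path/slow-path split concrete, here is a minimal sketch of a software TLB-fault handler in C. It is an illustration only, not Nachos or MIPS kernel code: every type and helper below (pte_t, page_table_lookup, tlb_insert, pick_victim_frame, and so on) is a hypothetical placeholder.

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical, simplified PTE -- not a real MIPS/Nachos structure. */
typedef struct { bool valid; uint32_t frame; uint32_t disk_block; } pte_t;

/* Helpers assumed to exist elsewhere in this sketch: */
pte_t   *page_table_lookup(uintptr_t vaddr);         /* walk the current page table */
void     tlb_insert(uintptr_t vaddr, pte_t *pte);    /* install translation in the TLB */
uint32_t pick_victim_frame(void);                    /* replacement policy chooses a frame */
bool     frame_is_dirty(uint32_t frame);
void     write_back(uint32_t frame);                 /* flush a modified frame to disk */
void     invalidate_mappings(uint32_t frame);        /* clear the old PTE + any cached TLB entry */
void     read_from_disk(pte_t *pte, uint32_t frame); /* may block; OS runs another process */

/* Software TLB-fault handler, following the fast/slow paths above. */
void handle_tlb_fault(uintptr_t vaddr)
{
    pte_t *pte = page_table_lookup(vaddr);

    if (pte->valid) {                      /* page resident: just refill the TLB */
        tlb_insert(vaddr, pte);
        return;                            /* restart the faulting instruction */
    }

    /* Page fault: bring the page in from disk. */
    uint32_t frame = pick_victim_frame();
    if (frame_is_dirty(frame))
        write_back(frame);                 /* modified pages are written out first */
    invalidate_mappings(frame);
    read_from_disk(pte, frame);
    pte->frame = frame;
    pte->valid = true;
    /* No need to preload the TLB: the retried instruction faults again on the
     * TLB and takes the fast path above. */
}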

Transparent Exceptions
[Diagram: user code hits a faulting instruction; the OS takes a TLB fault, loads the TLB (or fetches the page and then loads the TLB), and restarts the faulting instruction.]
- How to transparently restart faulting instructions?
  » Could we just skip it? No: need to perform the load or store after reconnecting the physical page
- Hardware must help out by saving:
  » Faulting instruction and partial state: need to know which instruction caused the fault (is a single PC sufficient to identify the faulting position?)
  » Processor state: sufficient to restart the user thread; save/restore registers, stack, etc.
- What if an instruction has side-effects?

Consider weird things that can happen
- What if an instruction has side effects? Options:
  » Unwind side-effects (easy to restart)
  » Finish off side-effects (messy!)
- Example 1: mov (sp)+,10
  » What if the page fault occurs on the write through the stack pointer?
  » Did sp get incremented before or after the page fault?
- Example 2: strcpy (r1), (r2)
  » Source and destination overlap: can't unwind in principle!
  » IBM S/370 and VAX solution: execute twice, once read-only
- What about RISC processors? For instance delayed branches?
  » Example: bne somewhere; ld r1,(sp)
  » Precise exception state consists of two PCs: PC and nPC
- Delayed exceptions:
  » Example: div r1, r2, r3; ld r1, (sp)
  » What if it takes many cycles to discover the divide by zero, but the load has already caused a page fault?

Precise Exceptions
- Precise: state of the machine is preserved as if the program executed up to the offending instruction
  » All previous instructions completed
  » Offending instruction and all following instructions act as if they have not even started
  » Same system code will work on different implementations
  » Difficult in the presence of pipelining, out-of-order execution, ...
  » MIPS takes this position
- Imprecise: system software has to figure out what is where and put it all back together
- Performance goals often lead designers to forsake precise interrupts
  » System software developers, users, markets, etc. usually wish they had not done this
- Modern techniques for out-of-order execution and branch prediction help implement precise interrupts

Steps in Handling a Page Fault
[Figure: steps in handling a page fault, adapted from Silberschatz, Galvin, and Gagne.]
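
A hedged sketch of the kind of saved state, discussed under "Transparent Exceptions" above, that makes a transparent restart possible; the field names are illustrative, not taken from any particular architecture or kernel.

#include <stdint.h>

/* Enough processor state to re-execute the offending instruction as if the
 * fault never happened.  Purely illustrative. */
struct trap_frame {
    uintptr_t pc;         /* address of the faulting instruction */
    uintptr_t npc;        /* "next PC" -- needed with delayed branches, as noted above */
    uintptr_t regs[32];   /* integer registers at the time of the fault */
    uintptr_t sp;         /* stack pointer (so side effects like (sp)+ can be sorted out) */
    uintptr_t bad_vaddr;  /* virtual address that caused the fault */
    uint32_t  cause;      /* why we trapped: TLB miss, protection violation, ... */
};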

Demand Paging Example
- Since demand paging is like caching, we can compute the average access time ("Effective Access Time"):
  EAT = Hit Rate x Hit Time + Miss Rate x Miss Time
- Example: memory access time = 200 ns; average page-fault service time = 8 ms
- Suppose p = probability of a miss, 1-p = probability of a hit. Then:
  EAT = (1-p) x 200 ns + p x 8 ms
      = (1-p) x 200 ns + p x 8,000,000 ns
      = 200 ns + p x 7,999,800 ns
- If one access out of 1,000 causes a page fault, then EAT = 8.2 μs: a slowdown by a factor of 40! (A short C check of this arithmetic appears after this block.)
- What if we want a slowdown of less than 10%?
  EAT < 200 ns x 1.1 ⇒ p < 2.5 x 10^-6
  This is about 1 page fault in 400,000!

What Factors Lead to Misses?
- Compulsory Misses: pages that have never been paged into memory before
  » How might we remove these misses? Prefetching: loading them into memory before they are needed
  » Need to predict the future somehow! More later.
- Capacity Misses: not enough memory; must somehow increase size. Can we do this?
  » One option: increase the amount of DRAM (not a quick fix!)
  » Another option: if multiple processes are in memory, adjust the percentage of memory allocated to each one!
- Conflict Misses: technically, conflict misses don't exist in virtual memory, since it is a "fully-associative" cache
- Policy Misses: caused when pages were in memory but kicked out prematurely because of the replacement policy
  » How to fix? Better replacement policy

Page Replacement Policies
- Why do we care about the replacement policy?
  » Replacement is an issue with any cache
  » Particularly important with pages: the cost of being wrong is high (must go to disk); must keep important pages in memory, not toss them out
- FIFO (First In, First Out): throw out the oldest page
  » Fair: lets every page live in memory for the same amount of time
  » Bad, because it throws out heavily used pages instead of infrequently used pages
- MIN (Minimum): replace the page that won't be used for the longest time
  » Great, but can't really know the future
  » Makes a good comparison case, however
- RANDOM: pick a random page for every replacement
  » Typical solution for TLBs; simple hardware
  » Pretty unpredictable: makes it hard to make real-time guarantees

Replacement Policies (Con't)
- LRU (Least Recently Used): replace the page that hasn't been used for the longest time
  » Programs have locality, so if something hasn't been used for a while, it is unlikely to be used in the near future
  » Seems like LRU should be a good approximation to MIN
- How to implement LRU? Use a list! (A list-based sketch follows below.)
  Head → Page 6 → Page 7 → Page 1 → Page 2 → Tail (LRU)
  » On each use, remove the page from the list and place it at the head; the LRU page is at the tail
- Problems with this scheme for paging?
  » Need to know immediately when each page is used so that we can change its position in the list
  » Many instructions for each hardware access
- In practice, people approximate LRU (more later)
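
The EAT arithmetic above can be double-checked with a few lines of C, using the same numbers (200 ns memory access, 8 ms fault service):

#include <stdio.h>

int main(void)
{
    double mem_ns   = 200.0;             /* memory access time */
    double fault_ns = 8e6;               /* 8 ms page-fault service time, in ns */

    double p   = 1.0 / 1000.0;           /* one fault per 1,000 accesses */
    double eat = (1 - p) * mem_ns + p * fault_ns;
    printf("EAT = %.1f ns, slowdown ~%.0fx\n", eat, eat / mem_ns);   /* ~8200 ns, ~41x */

    /* For at most a 10% slowdown we need EAT < 1.1 * mem_ns: */
    double p_max = (0.1 * mem_ns) / (fault_ns - mem_ns);
    printf("p < %.2e, i.e. about 1 fault per %.0f accesses\n",
           p_max, 1.0 / p_max);          /* ~2.5e-6, ~1 in 400,000 */
    return 0;
}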

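And a small sketch of the "use a list" idea from the LRU slide above: a doubly linked list where every reference moves the page to the head and the victim is always the tail. This is a toy illustration; as the slide notes, real systems only approximate LRU because running code like this on every memory reference is far too expensive.

#include <stddef.h>

struct page {
    int          vpn;            /* virtual page number */
    struct page *prev, *next;    /* position in the LRU list */
};

struct lru_list { struct page *head, *tail; };   /* head = most recent, tail = LRU */

static void lru_unlink(struct lru_list *l, struct page *p)
{
    if (p->prev) p->prev->next = p->next; else l->head = p->next;
    if (p->next) p->next->prev = p->prev; else l->tail = p->prev;
}

/* Called (conceptually) on every reference to a resident page. */
static void lru_touch(struct lru_list *l, struct page *p)
{
    if (l->head == p) return;          /* already most recently used */
    lru_unlink(l, p);
    p->prev = NULL;
    p->next = l->head;
    if (l->head) l->head->prev = p; else l->tail = p;
    l->head = p;
}

/* The replacement victim is simply the tail of the list. */
static struct page *lru_victim(struct lru_list *l) { return l->tail; }
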
Administrivia
- Exam not graded yet
  » Hopefully get it back to you in section on Tuesday
  » Will get solutions up early next week
- Project II due by Friday at midnight. Good luck!
- In the news: Sunday (10/18), first use of OnStar to recover a stolen vehicle
  » Two guys carjacked in a 2009 Chevy Tahoe
  » OnStar locates the vehicle and, while police are chasing it, gives a command to the SUV to slow down
  » OnStar operator flashed the lights first so that police could make sure they had the correct car
  » Hapless crook tries to escape on foot, falls in a swimming pool
  » Is this Big Brother, or a Good Thing???
- Related: my OS group talked to researchers from Bosch
  » They make lots of things, including car electronics
  » Confirmed that high-end cars have over 70 Engine Control Units ("ECU" for short): processors controlling brakes, fuel injection, navigation, climate; interconnected with CAN (Controller Area Network)
  » Thinking about consolidating functions into a smaller number of multicore chips (but how to meet hard realtime requirements??)

In the News: Android
- Android is the new operating system from Google
  » For mobile devices: phones, book readers (i.e., B&N)
  » Linux version 2.6.x
  » Java virtual machine and runtime system
  » Lots of media extensions: WebKit for browsing, media libraries, cellular networking
- Mobile systems are the hottest new software stack
  » Ubiquitous Computing
  » Worldwide, more than 1 billion new cell phones purchased/year for the last few years
  » Compare: worldwide number of PCs purchased/year ~ 250M

Example: FIFO
- Suppose we have 3 page frames, 4 virtual pages, and the following reference stream:
  A B C A B D A D B C B
- Consider FIFO page replacement: 7 faults
  » When referencing D, replacing A is a bad choice, since we need A again right away

Example: MIN
- Suppose we have the same reference stream:
  A B C A B D A D B C B
- Consider MIN page replacement: 5 faults
  » Where will D be brought in? Look for the page not referenced farthest in the future
- What will LRU do?
  » Same decisions as MIN here, but that won't always be true!
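
The FIFO fault count above is easy to verify mechanically; here is a minimal simulation of the same 3-frame example (MIN has to be worked by hand, since it needs knowledge of the future):

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *refs = "ABCABDADBCB";   /* the reference stream from the example */
    char frames[3]   = {0, 0, 0};
    int  next = 0, faults = 0;          /* next = FIFO replacement pointer */

    for (size_t i = 0; i < strlen(refs); i++) {
        if (memchr(frames, refs[i], sizeof frames) == NULL) {
            frames[next] = refs[i];     /* miss: evict the oldest resident page */
            next = (next + 1) % 3;
            faults++;
        }
    }
    printf("FIFO faults: %d\n", faults); /* prints 7, matching the slide */
    return 0;
}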

When will LRU perform badly?
- Consider the following reference stream: A B C D A B C D A B C D
- LRU performs as follows (same as FIFO here): every reference is a page fault!
- MIN does much better

Graph of Page Faults Versus The Number of Frames
- One desirable property: when you add memory, the miss rate goes down
  » Does this always happen?
  » Seems like it should, right?
- No: Belady's anomaly
  » Certain replacement algorithms (FIFO) don't have this obvious property!

Adding Memory Doesn't Always Help Fault Rate
- Does adding memory reduce the number of page faults?
  » Yes for LRU and MIN
  » Not necessarily for FIFO! (Called Belady's anomaly)
  Ref: A B C D A B E A B C D E (FIFO: 9 faults with 3 frames, 10 faults with 4 frames)
- After adding memory:
  » With FIFO, contents can be completely different
  » In contrast, with LRU or MIN, the contents of memory with X pages are a subset of the contents with X+1 pages

Implementing LRU
- Perfect: timestamp the page on each reference; keep a list of pages ordered by time of reference
  » Too expensive to implement in reality, for many reasons
- Clock Algorithm: arrange physical pages in a circle with a single clock hand
  » Approximate LRU (approx to approx to MIN)
  » Replace an old page, not the oldest page
- Details: hardware "use" bit per physical page:
  » Hardware sets the use bit on each reference
  » If the use bit isn't set, the page has not been referenced in a long time
  » Nachos hardware sets the use bit in the TLB; you have to copy this back to the page table when the TLB entry gets replaced
- On page fault:
  » Advance clock hand (not real time)
  » Check use bit: 1 → used recently; clear and leave alone. 0 → selected candidate for replacement
- Will it always find a page, or loop forever?
  » Even if all use bits are set, it will eventually loop around ⇒ FIFO
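
As a concrete reference point, here is a minimal clock-hand sketch matching the description above; the frame table and names are illustrative, not from any particular kernel.

#define NFRAMES 1024                    /* illustrative size of physical memory */

struct frame {
    int use;                            /* hardware (or emulated) "use" bit */
    int vpn;                            /* which virtual page lives here */
};

static struct frame frames[NFRAMES];
static int hand = 0;                    /* single clock hand; advances only on page faults */

/* Pick a victim frame: recently used pages get their use bit cleared and are
 * spared this pass; a page with use == 0 has not been touched since the hand
 * last swept by, so it is the replacement candidate. */
int clock_pick_victim(void)
{
    for (;;) {
        struct frame *f = &frames[hand];
        int here = hand;
        hand = (hand + 1) % NFRAMES;    /* even if every bit is set, we loop back around -> FIFO */
        if (f->use)
            f->use = 0;
        else
            return here;
    }
}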

Clock Algorithm: Not Recently Used
[Diagram: set of all pages in memory arranged in a circle, swept by a single clock hand.]
- Single clock hand: advances only on page fault!
  » Check for pages not used recently
  » Mark pages as not used recently
- What if the hand is moving slowly? Good sign or bad sign?
  » Not many page faults and/or we find a page quickly
- What if the hand is moving quickly?
  » Lots of page faults and/or lots of reference bits set
- One way to view the clock algorithm: a crude partitioning of pages into two groups, young and old
  » Why not partition into more than 2 groups?

Nth Chance version of Clock Algorithm
- Nth chance algorithm: give a page N chances
  » OS keeps a counter per page: # sweeps
  » On page fault, OS checks the use bit:
    1 → clear use and also clear the counter (used in last sweep)
    0 → increment the counter; if count = N, replace the page
  » Means the clock hand has to sweep by N times without the page being used before the page is replaced
- How do we pick N?
  » Why pick large N? Better approximation to LRU: if N ~ 1K, really good approximation
  » Why pick small N? More efficient: otherwise might have to look a long way to find a free page
- What about dirty pages?
  » Takes extra overhead to replace a dirty page, so give dirty pages an extra chance before replacing?
  » Common approach: clean pages, use N=1; dirty pages, use N=2 (and write back to disk when N=1)

Clock Algorithms: Details
- Which bits of a PTE are useful to us?
  » Use: set when the page is referenced; cleared by the clock algorithm
  » Modified: set when the page is modified; cleared when the page is written to disk
  » Valid: OK for the program to reference this page
  » Read-only: OK for the program to read the page, but not modify it (for example, for catching modifications to code pages!)
- Do we really need a hardware-supported "modified" bit?
  » No. Can emulate it (BSD Unix) using the read-only bit
  » Initially, mark all pages as read-only, even data pages
  » On write, trap to OS. OS sets a software "modified" bit and marks the page read-write
  » Whenever a page comes back in from disk, mark it read-only

Clock Algorithms Details (continued)
- Do we really need a hardware-supported "use" bit?
  » No. Can emulate it, similar to above:
    Mark all pages as invalid, even if in memory
    On a read to an invalid page, trap to OS
    OS sets the use bit and marks the page read-only
  » Get the modified bit in the same way as before: on write, trap to OS (either invalid or read-only); set use and modified bits, mark the page read-write
  » When the clock hand passes by, reset the use and modified bits and mark the page as invalid again
- Remember, however, that clock is just an approximation of LRU
  » Can we do a better approximation, given that we have to take page faults on some reads and writes to collect use information?
  » Need to identify an old page, not the oldest page!
  » Answer: second-chance list
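
The Nth-chance variant above only adds a per-frame sweep counter; below is a hedged sketch using N=1 for clean pages and N=2 for dirty pages, as in the "common approach" described earlier. Again, the names are illustrative.

#define NFRAMES 1024

struct frame {
    int use;        /* referenced since the hand last passed? */
    int dirty;      /* modified since the last write-back? */
    int sweeps;     /* passes of the hand without a reference */
    int vpn;
};

static struct frame frames[NFRAMES];
static int hand = 0;

int nth_chance_pick_victim(void)
{
    for (;;) {
        struct frame *f = &frames[hand];
        int here  = hand;
        int limit = f->dirty ? 2 : 1;        /* dirty pages get one extra pass */
        hand = (hand + 1) % NFRAMES;

        if (f->use) {
            f->use = 0;
            f->sweeps = 0;                   /* used during the last sweep: start over */
        } else if (++f->sweeps >= limit) {
            return here;                     /* unused for N sweeps: replace this page */
        }
        /* A real pager would start writing a dirty page to disk once sweeps == 1,
         * so it is clean by the time it is actually chosen. */
    }
}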

Second-Chance List Algorithm (VAX/VMS)
[Diagram: memory split into "Directly Mapped Pages" (Active list, marked RW, managed FIFO) and a "Second-Chance List" (marked invalid, managed LRU). New pages are paged in from disk to the front of the Active list; overflow pages move from the end of the Active list to the front of the SC list; victims are taken from the LRU end of the SC list.]
- Split memory in two: Active list (RW), SC list (Invalid)
- Access pages on the Active list at full speed; otherwise, Page Fault
  » Always move the overflow page from the end of the Active list to the front of the Second-Chance list (SC) and mark it invalid
  » Desired page on the SC list: move it to the front of the Active list, mark it RW
  » Not on the SC list: page it in to the front of the Active list, mark it RW; page out the LRU victim at the end of the SC list

Second-Chance List Algorithm (con't)
- How many pages for the second-chance list?
  » If 0 → FIFO
  » If all → LRU, but page fault on every page reference
- Pick an intermediate value. The result is:
  » Pro: few disk accesses (a page only goes to disk if it is unused for a long time)
  » Con: increased overhead trapping to the OS (software / hardware tradeoff)
- With page translation, we can adapt to any kind of access the program makes
  » Later, we will show how to use page translation / protection to share memory between threads on widely separated machines
- Question: why didn't the VAX include a "use" bit?
  » Strecker (architect) asked the OS people; they said they didn't need it, so he didn't implement it
  » He later got blamed, but the VAX did OK anyway

Free List
[Diagram: set of all pages in memory with a single clock hand that advances as needed to keep the free list full ("background"); free pages are handed to processes on demand.]
- Keep a set of free pages ready for use in demand paging
  » Freelist filled in the background by the Clock algorithm or another technique ("Pageout daemon")
  » Dirty pages start copying back to disk when they enter the list
- Like the VAX second-chance list
  » If a page is needed before it is reused, just return it to the active set
- Advantage: faster page faults
  » Can always use a page (or pages) immediately on a fault

Demand Paging (more details)
- Does a software-loaded TLB need a use bit? Two options:
  » Hardware sets the use bit in the TLB; when a TLB entry is replaced, software copies the use bit back to the page table
  » Software manages TLB entries as a FIFO list; everything not in the TLB is on the Second-Chance list, managed as strict LRU
- Core Map
  » Page tables map virtual page → physical page
  » Do we need a reverse mapping (i.e. physical page → virtual page)?
  » Yes. The clock algorithm runs through page frames; if sharing, there are multiple virtual pages per physical page
  » Can't push a page out to disk without invalidating all PTEs
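
To tie the Active/SC picture above together, here is a rough sketch of the fault path for a VAX/VMS-style second-chance list. Every helper (sc_remove, active_push_front, page_in, and so on) is a hypothetical placeholder for list and paging machinery not shown.

#include <stddef.h>

enum where { ACTIVE, SC_LIST, ON_DISK };

struct vpage {
    int        vpn;
    enum where loc;
};

/* List/paging machinery assumed to exist elsewhere in this sketch: */
void          sc_remove(struct vpage *p);       /* take a page off the SC list */
struct vpage *sc_pop_tail(void);                /* LRU end of the SC list (victim) */
void          sc_push_front(struct vpage *p);
void          active_push_front(struct vpage *p);
struct vpage *active_pop_tail(void);            /* FIFO end of the Active list */
int           active_too_long(void);
void          map_rw(struct vpage *p);          /* valid + read-write: full-speed access */
void          unmap(struct vpage *p);           /* mark invalid so the next touch faults */
void          page_in(struct vpage *p);
void          page_out(struct vpage *p);

/* Fault on page p: it was marked invalid, so it is either on the SC list
 * (still in memory) or out on disk. */
void handle_fault(struct vpage *p)
{
    if (p->loc == SC_LIST) {
        sc_remove(p);                           /* "soft" fault: no disk access needed */
    } else {
        struct vpage *victim = sc_pop_tail();   /* true LRU victim */
        if (victim) page_out(victim);           /* only SC-list victims ever go to disk */
        page_in(p);                             /* "hard" fault */
    }
    map_rw(p);
    active_push_front(p);
    p->loc = ACTIVE;

    /* Keep the Active list at its target size: overflow moves from the end of
     * the Active list to the front of the SC list and is marked invalid. */
    while (active_too_long()) {
        struct vpage *old = active_pop_tail();
        unmap(old);
        sc_push_front(old);
        old->loc = SC_LIST;
    }
}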

Summary
- Precise Exception specifies a single instruction for which:
  » All previous instructions have completed (committed state)
  » Neither the offending instruction nor any following instruction has started
- Replacement policies
  » FIFO: place pages on a queue, replace the page at the end
  » MIN: replace the page that will be used farthest in the future
  » LRU: replace the page used farthest in the past
- Clock Algorithm: approximation to LRU
  » Arrange all pages in a circular list
  » Sweep through them, marking them as not "in use"
  » If a page is not "in use" for one pass, then it can be replaced
- Nth-chance clock algorithm: another approximation to LRU
  » Give pages multiple passes of the clock hand before replacing
- Second-Chance List algorithm: yet another approximation to LRU
  » Divide pages into two groups, one of which is truly LRU and managed on page faults