CS162 Operating Systems and Systems Programming Lecture 11: Page Allocation and Replacement


CS162 Operating Systems and Systems Programming
Lecture 11: Page Allocation and Replacement
October 3, 2012, Ion Stoica
http://inst.eecs.berkeley.edu/~cs162

Lecture 9 Followup: Inverted Page Table

- With paging and segmentation, the page table is at least as large as the amount of virtual memory allocated to processes
  - Physical memory may be much less: much of a process's address space may be out on disk or not in use
- Answer: use a hash table mapping virtual page # to physical page #, called an Inverted Page Table
  [Figure: virtual page # and offset hashed through a table to yield physical page # and offset]
  - Its size is independent of the virtual address space; it is directly related to the amount of physical memory
  - Very attractive option for 64-bit address spaces
  - Cons: complexity of managing hash changes; often implemented in hardware

Quiz 11.1: Address Translation

Q1: True/False: Paging does not suffer from external fragmentation
Q2: True/False: The segment offset can be larger than the segment size
Q3: True/False: Paging: to compute the physical address, add the physical page # and the offset
Q4: True/False: Uni-programming doesn't provide address protection
Q5: True/False: The virtual address space is always larger than the physical address space
Q6: True/False: Inverted page tables keep fewer entries than two-level page tables

Quiz 11.1: Address Translation (Answers)

Q1: True. Paging does not suffer from external fragmentation
Q2: False. The segment offset cannot be larger than the segment size
Q3: False. The physical address is the physical page # concatenated with the offset, not their sum
Q4: True. Uni-programming doesn't provide address protection
Q5: False. The virtual address space is not always larger than the physical address space
Q6: True. Inverted page tables keep fewer entries than two-level page tables

Summary: Address Segmentation

[Figure: virtual and physical memory views of a process with code, data, heap, and stack segments, mapped through the segment table (seg 11: base 1011 0000, limit 1 0000; seg 10: base 0111 0000, limit 1 1000; seg 01: base 0101 0000, limit 10 0000; seg 00: base 0001 0000, limit 10 0000)]

- What happens if the stack grows to 1110 0000?
- No room to grow: either a buffer-overflow error, or resize the segment and move segments around to make room

Summary: Paging

[Figure: the same address space mapped through a single-level page table; unallocated regions have null entries]

- What happens if the stack grows to 1110 0000?
- Allocate new pages where there is room in physical memory, and fill in the corresponding page table entries

Summary: Two-Level Paging

[Figure: the same address space mapped through a two-level page table; level-2 tables exist only for allocated regions, so null level-1 entries replace long runs of null PTEs]

Summary: Inverted Table

[Figure: the same address space mapped through an inverted table; hash(virtual page #) yields the physical page #, with one entry per physical page]

Address Translation Comparison

- Segmentation
  - Advantages: fast context switching (segment mapping maintained by CPU)
  - Disadvantages: external fragmentation
- Paging (single-level)
  - Advantages: no external fragmentation
  - Disadvantages: large table size (~ size of virtual memory)
- Paged segmentation / two-level pages
  - Advantages: table size ~ # of virtual memory pages allocated to the process
  - Disadvantages: multiple memory references per page access
- Inverted table
  - Advantages: table size ~ # of pages in physical memory
  - Disadvantages: hash function more complex

Quiz 11.2: Caches & TLBs

Q1: True/False: Associative caches have fewer compulsory misses than direct-mapped caches
Q2: True/False: Two-way associative caches can cache two addresses with the same cache index
Q3: True/False: With write-through caches, a read miss can result in a write
Q4: True/False: LRU caches are more complex than Random caches
Q5: True/False: A TLB caches translations to virtual addresses

Quiz 11.2: Caches & TLBs (Answers)

Q1: False. Associativity reduces conflict misses, not compulsory misses
Q2: True. Two-way associative caches can cache two addresses with the same cache index
Q3: False. With write-through caches, a read miss does not result in a write
Q4: True. LRU caches are more complex than Random caches
Q5: False. A TLB caches translations from virtual addresses to physical addresses

Review: Paging & Address Translation

[Figure: two-level translation. The virtual address is split into P1 index, P2 index, and offset; PageTablePtr selects the 1st-level page table, whose entry points to a 2nd-level table; the physical address is the physical page # concatenated with the offset]

Review: Translation Look-aside Buffer

[Figure: the same translation path, with a TLB consulted first; the page tables are walked only on a TLB miss]

Review: Cache

[Figure: the resulting physical address split into tag, index, and byte fields to look up a block in the cache]

Goals for Today

- Page replacement policies
  - FIFO, LRU
  - Clock algorithm

Note: Some slides and/or pictures in the following are adapted from slides (c) 2005 Silberschatz, Galvin, and Gagne. Many slides generated from my lecture notes by Kubiatowicz.

Demand Paging

- Modern programs require a lot of physical memory
  - Memory per system is growing faster than 25%-30% per year
- But they don't use all their memory all of the time
  - 90-10 rule: programs spend 90% of their time in 10% of their code
  - Wasteful to require all of a user's code to be in memory
- Solution: use main memory as a cache for disk
  [Figure: memory hierarchy from processor to on-chip cache, second-level cache (SRAM), main memory (DRAM), secondary storage (disk), and tertiary storage (tape)]

Demand Paging is Caching

Since demand paging is caching, we must ask:

- What is the block size? 1 page
- What is the organization of this cache (direct-mapped, set-associative, fully-associative)? Fully associative: arbitrary virtual-to-physical mapping
- How do we find a page in the cache? First check the TLB, then traverse the page tables
- What is the page replacement policy (LRU, Random, ...)? Requires more explanation (kinda LRU)
- What happens on a miss? Go to the lower level to fill the miss (i.e., disk)
- What happens on a write (write-through, write-back)? Definitely write-back; need a dirty bit (D)

Demand Paging Mechanisms

- The PTE helps us implement demand paging
  - Valid: page in memory, PTE points at the physical page
  - Not valid: page not in memory; use info in the PTE to find it on disk when necessary
- Suppose a user references a page with an invalid PTE?
  - The Memory Management Unit (MMU) traps to the OS; the resulting trap is a Page Fault
- What does the OS do on a Page Fault?
  - Choose an old page to replace
  - If the old page was modified (D=1), write its contents back to disk
  - Change its PTE and any cached TLB entry to be invalid
  - Load the new page into memory from disk
  - Update the page table entry and invalidate the TLB for the new entry
  - Continue the thread from the original faulting location
- The TLB entry for the new page will be loaded when the thread continues
- While pulling pages off disk for one process, the OS runs another process from the ready queue
  - The suspended process sits on the wait queue

Steps in Handling a Page Fault

[Figure: steps in handling a page fault]

Administrivia

- Project 2 Design Doc due Thursday 10/11 at 11:59PM
- Please fill out the anonymous course survey at https://www.surveymonkey.com/s/69dzcjs
  - We'll try to make changes this semester based on your feedback

5 min Break

Demand Paging Example

- Since demand paging is like caching, we can compute the average access time ("Effective Access Time"):
  EAT = Hit Rate x Hit Time + Miss Rate x Miss Time
- Example:
  - Memory access time = 200 nanoseconds
  - Average page-fault service time = 8 milliseconds
  - Suppose p = probability of a miss, 1 - p = probability of a hit
- Then we can compute EAT as follows:
  EAT = (1 - p) x 200 ns + p x 8 ms
      = (1 - p) x 200 ns + p x 8,000,000 ns
      = 200 ns + p x 7,999,800 ns
- If one access out of 1,000 causes a page fault, then EAT = 8.2 microseconds: a slowdown by a factor of 40
- What if we want a slowdown of less than 10%?
  EAT < 200 ns x 1.1 implies p < 2.5 x 10^-6
  That is about 1 page fault in 400,000 accesses
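The EAT arithmetic above can be checked with a few lines of Python (a sketch using the slide's numbers; the function name is ours):

```python
# Effective Access Time for demand paging:
# EAT = (1 - p) * memory_access_time + p * page_fault_service_time

def eat_ns(p, mem_ns=200, fault_ns=8_000_000):
    """Effective access time in nanoseconds for miss probability p."""
    return (1 - p) * mem_ns + p * fault_ns

# One fault per 1,000 accesses -> ~8199.8 ns, i.e. ~8.2 us (a ~40x slowdown).
print(eat_ns(1 / 1000))

# For at most a 10% slowdown, solve 200 * 1.1 >= 200 + p * 7_999_800:
p_max = (200 * 0.1) / 7_999_800
print(p_max)   # ~2.5e-06, i.e. roughly 1 fault per 400,000 accesses
```

Note how completely the 8 ms disk access dominates: even a one-in-a-thousand miss rate makes the average access 40x slower than DRAM.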

What Factors Lead to Misses?

- Compulsory misses: pages that have never been paged into memory before
  - How might we remove these misses? Prefetching: loading pages into memory before they are needed; requires predicting the future somehow (more later)
- Capacity misses: not enough memory; must somehow increase the size
  - One option: increase the amount of DRAM (not a quick fix)
  - Another option: if multiple processes are in memory, adjust the percentage of memory allocated to each one
- Conflict misses: technically, conflict misses don't exist in virtual memory, since it is a fully-associative cache
- Policy misses: caused when pages were in memory but were kicked out prematurely by the replacement policy
  - How to fix? A better replacement policy

Page Replacement Policies

- Why do we care about the replacement policy?
  - Replacement is an issue with any cache, but it is particularly important for pages: the cost of being wrong is high (must go to disk), so we must keep important pages in memory, not toss them out
- FIFO (First In, First Out): throw out the oldest page
  - Fair: every page lives in memory for the same amount of time
  - Bad: throws out heavily used pages instead of infrequently used ones
- MIN (Minimum): replace the page that won't be used for the longest time
  - Great, but we can't really know the future
  - Makes a good comparison case, however
- RANDOM: pick a random page for every replacement
  - Typical solution for TLBs; simple hardware
  - Unpredictable

Replacement Policies (Con't)

- LRU (Least Recently Used): replace the page that hasn't been used for the longest time
  - Programs have locality, so if a page has not been used for a while, it is unlikely to be used in the near future
  - Seems like LRU should be a good approximation to MIN
- How to implement LRU? Use a list:
  Head -> Page 6 -> Page 7 -> Page 1 -> Page 2 <- Tail (LRU)
  - On each use, remove the page from the list and place it at the head; the LRU page is at the tail
- Problems with this scheme for paging?
  - List operations are complex: many instructions for each memory access
  - In practice, people approximate LRU (more later)
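The list scheme above can be sketched in Python with an OrderedDict standing in for the linked list (class and method names are ours; real kernels approximate this in hardware-assisted ways precisely because per-access list updates are too expensive):

```python
from collections import OrderedDict

class LRUList:
    """List-based LRU bookkeeping: most recently used at the head,
    victim taken from the tail. Illustration only, not a real kernel
    mechanism."""

    def __init__(self, nframes):
        self.nframes = nframes
        self.frames = OrderedDict()   # insertion order tracks recency

    def access(self, page):
        """Touch a page; return the evicted page on a miss, else None."""
        if page in self.frames:
            self.frames.move_to_end(page)   # move to MRU position
            return None
        victim = None
        if len(self.frames) == self.nframes:
            victim, _ = self.frames.popitem(last=False)  # evict LRU end
        self.frames[page] = True
        return victim
```

On the looping reference stream A B C D A B C D with 3 frames, every access after the first three evicts a page, which previews the pathological LRU case discussed on a later slide.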

Example: FIFO

- Suppose we have 3 page frames, 4 virtual pages, and the following reference stream:
  A B C A B D A D B C B
- Consider FIFO page replacement:

  Ref:     A  B  C  A  B  D  A  D  B  C  B
  Frame 1: A              D           C
  Frame 2:    B              A
  Frame 3:       C              B

- FIFO: 7 faults. When referencing D, replacing A is a bad choice, since we need A again right away
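The FIFO trace above is easy to reproduce mechanically (a sketch; the function name is ours):

```python
def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement.
    'refs' is any sequence of page names."""
    frames, faults = [], 0
    for page in refs:
        if page in frames:
            continue                 # hit: FIFO ignores recency
        faults += 1
        if len(frames) == nframes:
            frames.pop(0)            # evict the oldest resident page
        frames.append(page)
    return faults

print(fifo_faults("ABCABDADBCB", 3))   # 7, matching the trace above
```

Note that a hit does nothing to the queue: FIFO's weakness is exactly that heavy use of a page does not protect it from eviction.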

Example: MIN

- Suppose we have the same reference stream:
  A B C A B D A D B C B
- Consider MIN page replacement: look for the page not referenced for the farthest time in the future

  Ref:     A  B  C  A  B  D  A  D  B  C  B
  Frame 1: A                          C
  Frame 2:    B
  Frame 3:       C        D

- MIN: 5 faults
- What will LRU do? The same decisions as MIN here, but that won't always be true
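MIN can be simulated too, since a simulator (unlike a real OS) gets to see the whole reference stream in advance (a sketch; the function name is ours):

```python
def min_faults(refs, nframes):
    """Count faults under Belady's MIN: on a miss, evict the resident
    page whose next use is farthest in the future (or never)."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            def next_use(p):
                rest = refs[i + 1:]
                return rest.index(p) if p in rest else float("inf")
            frames.remove(max(frames, key=next_use))
        frames.add(page)
    return faults

print(min_faults("ABCABDADBCB", 3))   # 5 faults, matching the slide
```

This is why MIN is only a benchmark: the `refs[i + 1:]` lookahead is exactly the future knowledge a real replacement policy does not have.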

When Will LRU Perform Badly?

- Consider the following reference stream: A B C D A B C D A B C D
- LRU performs as follows (same as FIFO here):

  Ref:     A  B  C  D  A  B  C  D  A  B  C  D
  Frame 1: A        D        C        B
  Frame 2:    B        A        D        C
  Frame 3:       C        B        A        D

- Every reference is a page fault
- MIN does much better:

  Ref:     A  B  C  D  A  B  C  D  A  B  C  D
  Frame 1: A                          B
  Frame 2:    B                          C
  Frame 3:       C  D

Graph of Page Faults Versus the Number of Frames

[Figure: page faults as a function of the number of frames]

- One desirable property: when you add memory, the miss rate goes down
- Does this always happen? Seems like it should, right?
- No: Belady's anomaly. Certain replacement algorithms (e.g., FIFO) don't have this obvious property

Adding Memory Doesn't Always Help the Fault Rate

- Does adding memory reduce the number of page faults?
  - Yes for LRU and MIN
  - Not necessarily for FIFO (this is Belady's anomaly)
- Reference stream: A B C D A B E A B C D E
  [Figure: FIFO frame contents for this stream with 3 frames (9 faults) and with 4 frames (10 faults)]
- After adding memory:
  - With FIFO, the contents of memory can be completely different
  - In contrast, with LRU or MIN, the contents of memory with X pages are a subset of the contents with X+1 pages
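Belady's anomaly on the slide's reference stream can be demonstrated directly (a self-contained sketch; the helper repeats the FIFO counter so this example stands alone):

```python
def fifo_faults(refs, nframes):
    """FIFO page-fault count (self-contained helper for this demo)."""
    frames, faults = [], 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)        # evict oldest
            frames.append(page)
    return faults

refs = "ABCDABEABCDE"                # the stream from the slide
print(fifo_faults(refs, 3))          # 9 faults
print(fifo_faults(refs, 4))          # 10 faults: more memory, MORE faults
```

The subset property mentioned above explains why LRU and MIN cannot exhibit this: with X+1 frames they always hold a superset of what they hold with X frames, so any hit with X frames is still a hit with X+1.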

Implementing LRU & Second Chance

- Perfect LRU: timestamp the page on each reference; keep a list of pages ordered by time of reference
  - Too expensive to implement in reality, for many reasons
- Second Chance algorithm: approximate LRU
  - Replace an old page, not necessarily the oldest page
  - FIFO with a use bit
- Details:
  - Keep a use bit per physical page
  - On a page fault, check the page at the head of the queue
    - If use bit = 1: clear the bit and move the page to the tail (give the page a second chance)
    - If use bit = 0: replace the page
  - Moving pages to the tail is still complex

Second Chance Illustration

[Figure sequence: a FIFO queue with a maximum page table size of 4, oldest page at the head]

- Page B arrives, then page A; page A is accessed (use bit set); pages D and C arrive. Queue: B(u:0), A(u:1), D(u:0), C(u:0)
- Page F arrives: B is at the head with u:0, so B is replaced. Queue: A(u:1), D(u:0), C(u:0), F(u:0)
- Page D is accessed (u:1); then page E arrives: A is at the head with u:1, so A gets a second chance (bit cleared, moved to the tail). Queue: D(u:1), C(u:0), F(u:0), A(u:0)
- D is now at the head with u:1, so D also gets a second chance. Queue: C(u:0), F(u:0), A(u:0), D(u:0)
- C is at the head with u:0, so C is replaced by E

Clock Algorithm

- Clock algorithm: a more efficient implementation of the second chance algorithm
  - Arrange physical pages in a circle with a single clock hand
- Details, on a page fault:
  - Check the use bit: 1 means used recently, so clear it and leave the page alone; 0 means the page is a candidate for replacement
  - Advance the clock hand (not in real time)
- Will it always find a page, or can it loop forever?
  - It always terminates: even if every use bit is set, the first sweep clears them all and the second sweep finds a victim
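The clock sweep can be sketched in a few lines of Python (class and method names are ours; in a real OS the use bit is set by the MMU on each reference, not by software as here):

```python
class Clock:
    """Clock replacement: pages sit in a circular list; on a miss the
    hand clears use bits until it finds one that is 0. Sketch only."""

    def __init__(self, nframes):
        self.nframes = nframes
        self.pages = []        # list of [page, use_bit]; circular via hand
        self.hand = 0

    def access(self, page):
        """Reference a page; return the victim on a miss, else None."""
        for entry in self.pages:
            if entry[0] == page:
                entry[1] = 1              # the MMU would set this bit
                return None
        if len(self.pages) < self.nframes:
            self.pages.append([page, 0])  # free frame: no eviction
            return None
        # Sweep: clear use bits until a page with use=0 is found.
        while self.pages[self.hand][1] == 1:
            self.pages[self.hand][1] = 0
            self.hand = (self.hand + 1) % self.nframes
        victim = self.pages[self.hand][0]
        self.pages[self.hand] = [page, 0]
        self.hand = (self.hand + 1) % self.nframes
        return victim
```

Replaying the illustration's sequence (B, A arrive; access A; D, C arrive; F arrives; access D; E arrives) evicts B for F and C for E, the same choices the second-chance queue makes, because clock is the same algorithm with the hand replacing the move-to-tail operation.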

Clock Replacement Illustration

[Figure sequence: up to 4 pages arranged in a circle; invariant: the hand points at the oldest page]

- Pages B and A arrive; page A is accessed (u:1); pages D and C arrive. Circle: B(u:0), A(u:1), D(u:0), C(u:0), hand at B
- Page F arrives: the hand is at B with u:0, so B is replaced by F and the hand advances
- Page D is accessed (u:1); page E arrives: the hand sweeps past A and D, clearing their use bits, finds C with u:0, and replaces C with E

Clock Algorithm: Discussion

- What if the hand is moving slowly? Good sign or bad sign?
  - Good sign: not many page faults and/or we find a page quickly
- What if the hand is moving quickly?
  - Lots of page faults and/or lots of reference bits set

Nth Chance Version of the Clock Algorithm

- Nth chance algorithm: give each page N chances
  - The OS keeps a counter per page: the number of sweeps
  - On a page fault, the OS checks the use bit:
    - 1: clear the use bit and also clear the counter (the page was used in the last sweep)
    - 0: increment the counter; if count = N, replace the page
  - This means the clock hand has to sweep by N times without the page being used before the page is replaced
- How do we pick N?
  - Why pick a large N? A better approximation to LRU (if N ~ 1K, a really good approximation)
  - Why pick a small N? More efficient; otherwise we might have to look a long way to find a free page
- What about dirty pages?
  - It takes extra overhead to replace a dirty page, so give dirty pages an extra chance before replacing them
  - Common approach: clean pages use N=1; dirty pages use N=2 (and start writing the page back to disk when N=1)
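The per-page counter logic above can be sketched as a single sweep function (names and the dict representation are ours; this ignores the dirty-page refinement):

```python
def find_victim(pages, hand, n):
    """Nth-chance sweep (a sketch). 'pages' is a list of dicts with
    'use' and 'count' fields; returns (victim_index, new_hand)."""
    while True:
        p = pages[hand]
        if p["use"]:
            # Used since the last sweep: reset both use bit and counter.
            p["use"], p["count"] = 0, 0
        else:
            p["count"] += 1
            if p["count"] >= n:          # N sweeps without use: evict
                return hand, (hand + 1) % len(pages)
        hand = (hand + 1) % len(pages)
```

With two pages, one recently used and one not, and n=2, the unused page reaches count 2 after two passes of the hand and becomes the victim, while the used page is reset on the first pass.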

Clock Algorithms: Details

- Which bits of a PTE are useful to us?
  - Use: set when the page is referenced; cleared by the clock algorithm
  - Modified: set when the page is modified; cleared when the page is written to disk
  - Valid: OK for the program to reference this page
  - Read-only: OK for the program to read the page, but not modify it (for example, for catching modifications to code pages)
- Do we really need a hardware-supported modified bit?
  - No. We can emulate it (as in BSD Unix) using the read-only bit:
    - Initially, mark all pages as read-only, even data pages
    - On a write, trap to the OS; the OS sets a software modified bit and marks the page read-write
    - Whenever a page comes back in from disk, mark it read-only

Clock Algorithms: Details (cont'd)

- Do we really need a hardware-supported use bit?
  - No. We can emulate it using the invalid bit:
    - Mark all pages as invalid, even if they are in memory
    - On a read to an invalid page, trap to the OS
    - The OS sets the use bit and marks the page read-only
  - When the clock hand passes by, reset the use bit and mark the page as invalid again

Summary (1/2)

- Demand paging: treat memory as a cache for data on disk
  - On a cache miss, find a free page and get the page from disk
- Transparent level of indirection
  - The user program is unaware of the OS's activities behind the scenes
  - Data can be moved without affecting application correctness
- Replacement policies:
  - FIFO: place pages on a queue, replace the page at the end
    - Fair, but can eject in-use pages; suffers from Belady's anomaly
  - MIN: replace the page that will be used farthest in the future
    - A benchmark for comparisons; can't be implemented in practice
  - LRU: replace the page used farthest in the past
    - For efficiency, use an approximation

Summary (2/2)

- Clock algorithm: an approximation to LRU
  - Arrange all pages in a circular list
  - Sweep through them, marking them as not in use
  - If a page is not used for one pass, then it can be replaced