VIRTUAL MEMORY READING: CHAPTER 9


MEMORY HIERARCHY
[Figure: processor with multiple cores and on-chip caches, main memory (DRAM), secondary storage (SSD), secondary storage (disk).]
- The L1 cache is exclusive to a single core.
- The L2 cache has slower access than L1 and is shared among multiple cores.

ON-DEMAND PAGING
- Most processes terminate without ever having accessed their whole address space (e.g., code handling rare error conditions).
- Other processes go through multiple phases during which they access different parts of their address space (e.g., compilers).
- It is wasteful to keep the entire address space of a process in memory the whole time.
- Use secondary storage: disk swap space.

ON-DEMAND PAGING
- VM systems fetch individual pages on demand, when they are accessed for the first time (page miss or page fault).
- When memory is full, they expel from memory pages that are not currently in use.
- Advantages:
  - Not limited by the physical memory size.
  - More efficient use of physical memory.
[Figure: program A swapped out of main memory while program B is swapped in.]

KEY QUESTIONS
- Memory access takes on the order of nanoseconds; disk access takes on the order of milliseconds.
- Is a page in memory? (address translation)
- Which pages should be brought into memory?
- Which pages should be swapped out? (page replacement)
- What happens during context switches? (write-out)
- How do we avoid thrashing, where the virtual memory subsystem is in a constant state of paging, rapidly exchanging data in memory for data on disk, to the exclusion of most application-level processing?

VALID BIT
- Recall the special bits in a page table entry.
- The valid bit indicates whether a page is valid and/or in memory.
- A page fault occurs if an invalid (non-resident) entry is referenced.
[Figure: logical memory, page table with valid (v) / invalid (i) bits, and physical memory; invalid entries have no frame assigned.]
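To make the translation check concrete, here is a minimal sketch of a software page-table lookup that consults the valid bit; the pte_t layout, page size, and translate() interface are assumptions of this illustration, not the course's code.

#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT 12                    /* 4 KiB pages */
#define PAGE_SIZE  (1u << PAGE_SHIFT)

/* Hypothetical page-table entry: frame number plus a valid bit. */
typedef struct {
    uint32_t frame;                      /* physical frame number */
    bool     valid;                      /* is the page resident? */
} pte_t;

/* Translate a virtual address; returns false to signal a page fault. */
bool translate(const pte_t *page_table, uint32_t vaddr, uint32_t *paddr)
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;        /* virtual page number */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);    /* offset within page  */

    if (!page_table[vpn].valid) {
        /* Valid bit clear: the OS must fetch the page (page fault). */
        return false;
    }
    *paddr = (page_table[vpn].frame << PAGE_SHIFT) | offset;
    return true;
}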

STEPS IN HANDLING A PAGE FAULT
[Figure, numbered steps:]
1. The process references a page (e.g., "load M") whose page-table entry is marked invalid.
2. The reference traps to the operating system (context switch).
3. The OS finds the page on the backing store.
4. The OS brings the missing page into a free frame of physical memory.
5. The OS resets the page table to mark the entry valid.
6. The OS restarts the interrupted instruction (context switch back to the process).

WHAT HAPPENS DURING PAGE FAULTS
1. Trap to the operating system.
2. Save the user registers and process state.
3. Determine that the interrupt was a page fault.
4. Check that the page reference was legal and determine the location of the page on the disk.
5. Issue a read from the disk to a free frame:
   - Wait in a queue for this device until the read request is serviced.
   - Wait for the device seek and/or latency time.
   - Begin the transfer of the page to a free frame.
6. While waiting, allocate the CPU to some other process (CPU scheduling, optional).
7. Receive an interrupt from the disk I/O subsystem (I/O completed).
8. Save the registers and process state for the other process (if step 6 was executed).
9. Determine that the interrupt was from the disk.
10. Correct the page table and other tables to show that the desired page is now in memory.
11. Wait for the CPU to be allocated to this process again.
12. Restore the user registers, process state, and new page table, then resume the interrupted instruction.
A page fault is expensive!

EFFECTIVE ACCESS TIME (EAT)
EAT = Hit Rate x Hit Time + Miss Rate x Miss Time
Example:
- Memory access time = 200 nanoseconds
- Average page-fault service time = 8 milliseconds
- Suppose p = probability of a miss, 1 - p = probability of a hit.
Then we can compute EAT as follows:
EAT = (1 - p) x 200 ns + p x 8 ms
    = (1 - p) x 200 ns + p x 8,000,000 ns
    = 200 ns + p x 7,999,800 ns
If one access out of 1,000 causes a page fault, then EAT = 8.2 µs: a slowdown by a factor of 40!
What if we want a slowdown of less than 10%? EAT < 200 ns x 1.1 → p < 2.5 x 10^-6.
That is about 1 page fault in 400,000 accesses!
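The arithmetic above is easy to check with a few lines of C; this is just an illustration of the formula using the 200 ns and 8 ms figures from the example.

#include <stdio.h>

int main(void)
{
    const double hit_time_ns  = 200.0;          /* memory access time           */
    const double miss_time_ns = 8e6;            /* 8 ms page-fault service time */

    /* EAT = (1 - p) * hit + p * miss */
    double p   = 1.0 / 1000.0;                  /* one fault per 1,000 accesses */
    double eat = (1.0 - p) * hit_time_ns + p * miss_time_ns;
    printf("p = 1/1000:  EAT = %.1f ns  (slowdown ~%.0fx)\n",
           eat, eat / hit_time_ns);

    /* Largest p giving less than a 10% slowdown: hit + p*(miss - hit) < 1.1*hit */
    double p_max = 0.1 * hit_time_ns / (miss_time_ns - hit_time_ns);
    printf("for <10%% slowdown: p < %.2e  (about 1 fault in %.0f accesses)\n",
           p_max, 1.0 / p_max);
    return 0;
}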

WHAT LEADS TO PAGE FAULTS?
1. Capacity misses: not enough memory. Must somehow increase the effective size. Can we do this?
   - Increase the amount of DRAM (not a quick fix!)
   - Reduce the need for physical memory (copy-on-write)
   - Increase the percentage of memory allocated to each process
2. Compulsory misses: pages that have never been paged into memory before. How might we remove these misses?
   - Prefetching: load pages into memory before they are needed
   - Requires predicting the future somehow!
3. Policy misses: pages were in memory, but were kicked out prematurely by the replacement policy. How to fix?
   - A better replacement policy

COPY-ON-WRITE
- Recall that fork() creates a copy of the parent's address space for the child process.
- Pages are shared until either process modifies them; only then is the modified page copied.
[Figure: before a write, process 1 and process 2 share pages A, B, and C; after process 1 writes page C, it gets its own copy of page C.]
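As a small illustration of the semantics copy-on-write must preserve (my own sketch, not from the slides): after fork(), a write by the child does not change the parent's copy, even though the kernel shares the underlying frames until that first write.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int value = 42;                       /* lives in a page shared after fork() */

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {                        /* child */
        value = 100;                       /* first write triggers the page copy */
        printf("child:  value = %d\n", value);
        exit(0);
    }
    waitpid(pid, NULL, 0);                 /* parent */
    printf("parent: value = %d\n", value); /* still 42: parent's page untouched  */
    return 0;
}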

LOCALITY IN MEMORY REFERENCE
- At any time, processes access only a small fraction of their address space ("spatial locality"), and they tend to reference again the pages they have recently referenced ("temporal locality").
- Locality tells us how much physical memory a process needs (its working set) and what it is likely to access next.

PAGE REPLACEMENT
- Selecting which page to expel from main memory (the "victim") when memory is full and a new page must be brought in.
- Note that two page transfers are required if there is no free frame: page out the victim, then page in the desired page.
- This can be alleviated by using the dirty bit: there is no need to page out the victim if its dirty bit is 0 (so, in page replacement, replace clean pages first).
[Figure: the victim's page-table entry is changed to invalid, the victim is paged out, the desired page is paged in, and the page table is reset for the new page.]

PAGE REPLACEMENT POLICY
Objectives:
- Select the right page to expel (the "victim").
- Have a reasonable run-time overhead.
The more physical memory (frames) available, the fewer the page faults.
[Figure: number of page faults decreasing as the number of frames increases.]

PAGE REPLACEMENT POLICY
Four classes of page replacement policies, along two dimensions:
- Local vs. global policies: local policies expel a process's own pages; global policies maintain a global pool of frames.
- Fixed-size vs. variable-size allocations: each process gets a fixed vs. a variable number of frames.

FIFO (FIRST IN, FIRST OUT)
- Replace the page that has been in memory for the longest time.
- Be fair to pages and give them equal time in memory.
- How to implement FIFO? It's a queue (can use a linked list):
  - The oldest page is at the head.
  - When a page is brought in, add it to the tail.
  - Evict the head if the list grows longer than the capacity.
[Figure: head (oldest) -> page 6 -> page 7 -> ... -> tail (newest).]

EXAMPLE
- Reference string: the sequence of pages referenced by the program:
  7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
- With 3 page frames and FIFO replacement, how many page faults occur?
[Figure: contents of the 3 page frames after each reference.]

EXAMPLE
- Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
- With 3 frames, FIFO incurs 9 page faults.
- With 4 frames, FIFO incurs 10 page faults!
[Figure: FIFO frame contents over time for 3 frames and for 4 frames.]
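The anomaly is easy to reproduce with a small FIFO simulator; the sketch below is my own illustration, and running it on the reference string above reports 9 faults with 3 frames and 10 with 4.

#include <stdio.h>
#include <stdbool.h>

/* Count FIFO page faults for a reference string with `nframes` frames. */
static int fifo_faults(const int *refs, int n, int nframes)
{
    int frames[16];                       /* assumes nframes <= 16         */
    int next = 0, used = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        bool hit = false;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = true; break; }
        if (hit) continue;

        faults++;
        if (used < nframes) {
            frames[used++] = refs[i];     /* free frame available          */
        } else {
            frames[next] = refs[i];       /* evict the oldest (FIFO order) */
            next = (next + 1) % nframes;
        }
    }
    return faults;
}

int main(void)
{
    const int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    const int n = sizeof refs / sizeof refs[0];
    printf("3 frames: %d faults\n", fifo_faults(refs, n, 3));   /* 9  */
    printf("4 frames: %d faults\n", fifo_faults(refs, n, 4));   /* 10 */
    return 0;
}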

BELADY'S ANOMALY
- FIFO suffers from Belady's anomaly: adding frames can increase the number of page faults.
- FIFO does not account for page usage: we need to give more chances to pages that are likely to be used again soon.
[Figure: number of page faults vs. number of frames, showing the count rising instead of falling at one point.]

OPTIMAL PAGE REPLACEMENT
- Replace the page that will not be used for the longest period of time.
- Example: on the reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 with 3 frames, the optimal policy incurs 9 page faults.
- Unfortunately, we cannot look into the future.
[Figure: frame contents under the optimal policy for this reference string.]
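OPT cannot be implemented online, but it can be simulated offline when the whole reference string is known; the sketch below (my own, as a baseline for comparing policies) evicts the resident page whose next use lies farthest in the future.

#include <stdio.h>
#include <stdbool.h>

/* Offline OPT: evict the resident page whose next use is farthest away. */
static int opt_faults(const int *refs, int n, int nframes)
{
    int frames[16];                       /* assumes nframes <= 16 */
    int used = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        bool hit = false;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = true; break; }
        if (hit) continue;

        faults++;
        if (used < nframes) { frames[used++] = refs[i]; continue; }

        /* Choose the victim: the resident page used farthest in the future. */
        int victim = 0, farthest = -1;
        for (int j = 0; j < used; j++) {
            int next = n;                 /* "never used again" sorts last */
            for (int k = i + 1; k < n; k++)
                if (refs[k] == frames[j]) { next = k; break; }
            if (next > farthest) { farthest = next; victim = j; }
        }
        frames[victim] = refs[i];
    }
    return faults;
}

int main(void)
{
    const int refs[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
    printf("OPT, 3 frames: %d faults\n",
           opt_faults(refs, sizeof refs / sizeof refs[0], 3));   /* 9 */
    return 0;
}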

LEAST RECENTLY USED (LRU)
- LRU: replace the page that hasn't been used for the longest time.
- Programs have locality, so if a page has not been used for a while, it is unlikely to be used in the near future.
- LRU therefore seems like it should be a good approximation to OPT.
[Figure: frame contents under LRU for the example reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 with 3 frames.]

IS LRU A GOOD APPROXIMATION?
- Consider the reference string A B C D A B C D A B C D with 3 frames.
- LRU performs as follows (the same as FIFO here): every reference is a page fault!
- OPT does much better: after the first pass it keeps reusing resident pages and faults far less often.
[Figure: frame contents under LRU and under OPT for this reference string.]

PROPERTIES OF LRU
- LRU can be as bad as FIFO for some reference strings.
- However, LRU does not suffer from Belady's anomaly.

IMPLEMENTATION OF LRU
- Keep a list ordered by recency of use: the LRU page is at the head, the MRU page at the tail.
- When a page is used for the first time, add it to the tail; evict the head if the list grows longer than the capacity.
- Different if we access a page that is already loaded: when a page is used again, remove it from the list and re-add it to the tail; evict the head if the list grows longer than the capacity.
[Figure: head (LRU) -> page 6 -> page 7 -> ... -> tail (MRU); a re-accessed page moves to the tail.]
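A minimal sketch of this list scheme, assuming a small fixed capacity; a real implementation would use a doubly linked list with O(1) unlink rather than the linear array used here for brevity.

#include <stdio.h>
#include <string.h>

#define CAPACITY 3

/* list[0] is the LRU end (head), list[count-1] the MRU end (tail). */
static int list[CAPACITY];
static int count = 0;

/* Reference a page: returns 1 on a page fault, 0 on a hit. */
int lru_reference(int page)
{
    for (int i = 0; i < count; i++) {
        if (list[i] == page) {                     /* hit: move to the tail */
            memmove(&list[i], &list[i + 1], (count - 1 - i) * sizeof list[0]);
            list[count - 1] = page;
            return 0;
        }
    }
    if (count == CAPACITY) {                       /* fault: evict head (LRU) */
        memmove(&list[0], &list[1], (count - 1) * sizeof list[0]);
        count--;
    }
    list[count++] = page;                          /* new page goes to the tail */
    return 1;
}

int main(void)
{
    const int refs[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1};
    int faults = 0;
    for (size_t i = 0; i < sizeof refs / sizeof refs[0]; i++)
        faults += lru_reference(refs[i]);
    printf("LRU, %d frames: %d faults\n", CAPACITY, faults);    /* 12 */
    return 0;
}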

IMPLEMENTATION OF LRU
- Problems with this scheme for paging?
  - Updates happen on every page use, not just on swapping.
  - The list structure requires extra pointers and, compared with FIFO, more updates.
- In practice, approximate LRU with a simpler implementation:
  - Use reference bits.
  - Second chance / clock algorithm.

IMPLEMENTING LRU & SECOND CHANCE
- Perfect LRU: timestamp each page on every reference and keep a list of pages ordered by time of reference. Too expensive to implement in reality, for many reasons.
- Second chance algorithm: approximate LRU by replacing an old page, but not necessarily the oldest page. It is FIFO with a use bit.
- Details:
  - A use bit per physical page, set when the page is accessed.
  - On a page fault, check the page at the head of the queue:
    - If use bit = 1 → clear the bit and move the page to the tail (give the page a second chance!).
    - If use bit = 0 → replace the page.
  - Moving pages to the tail is still complex.

SECOND CHANCE ILLUSTRATION
[Figure sequence, queue capacity 4: pages B, A, D, C arrive, and page A is accessed (setting its use bit). When page F arrives, the head page B has use bit 0 and is replaced. Page D is then accessed. When page E arrives, A and D are at the front of the queue with their use bits set, so each is given a second chance (bit cleared, moved to the tail), and C, the first page found with use bit 0, is replaced.]

CLOCK ALGORITHM
- The clock algorithm is a more efficient implementation of the second chance algorithm.
- Arrange physical pages in a circle with a single clock hand.
- Details, on a page fault:
  - Check the use bit of the page under the hand:
    - 1 → used recently; clear the bit and leave the page alone.
    - 0 → select the page as a candidate for replacement.
  - Advance the clock hand (not in real time).
- Will it always find a page, or can it loop forever?

CLOCK REPLACEMENT ILLUSTRATION
[Figure sequence, 4 frames, invariant: the hand points at the oldest page. Pages B, A, D, C arrive, and page A is accessed (setting its use bit). When page F arrives, the hand finds B with use bit 0 and replaces it. Page D is then accessed. When page E arrives, the hand sweeps past A and D (clearing their use bits) and replaces C, the first page with use bit 0.]
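A compact sketch of the clock algorithm as illustrated above (my own code, not the course's): frames sit in a circular array, each with a use bit; on a fault the hand sweeps, clearing set bits, until it finds a frame whose bit is 0. Newly installed pages start with the use bit clear here, to match the illustration; setting it on installation is an equally common design choice.

#include <stdio.h>
#include <stdbool.h>

#define NFRAMES 4

static int  page[NFRAMES];      /* page held by each frame (-1 = free)         */
static bool use_bit[NFRAMES];   /* set on access, cleared by the sweeping hand */
static int  hand = 0;           /* next candidate frame for eviction           */

/* Reference a page; returns 1 on a page fault, 0 on a hit. */
int clock_reference(int p)
{
    for (int i = 0; i < NFRAMES; i++)
        if (page[i] == p) { use_bit[i] = true; return 0; }   /* hit: set use bit */

    /* Fault: sweep until a frame with use bit 0 (or a free frame) is found. */
    while (page[hand] != -1 && use_bit[hand]) {
        use_bit[hand] = false;          /* second chance: clear the bit, move on */
        hand = (hand + 1) % NFRAMES;
    }
    page[hand] = p;                     /* evict the victim, install the new page */
    use_bit[hand] = false;              /* bit stays clear until the page is used */
    hand = (hand + 1) % NFRAMES;
    return 1;
}

int main(void)
{
    for (int i = 0; i < NFRAMES; i++) page[i] = -1;

    /* The same sequence as the illustration: B, A, access A, D, C, F, access D, E */
    const char refs[] = {'B', 'A', 'A', 'D', 'C', 'F', 'D', 'E'};
    for (size_t i = 0; i < sizeof refs; i++)
        printf("ref %c -> %s\n", refs[i],
               clock_reference(refs[i]) ? "fault" : "hit");
    return 0;
}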

CLOCK ALGORITHM: DISCUSSION
- What if the hand is moving slowly? Good sign or bad sign?
  - Not many page faults, and/or pages are found quickly.
- What if the hand is moving quickly?
  - Lots of page faults, and/or lots of reference bits are set.

NTH CHANCE VERSION OF CLOCK ALGORITHM
- Nth chance algorithm: give each page N chances.
  - The OS keeps a counter per page: the number of sweeps.
  - On a page fault, the OS checks the use bit:
    - 1 → clear the use bit and also clear the counter (the page was used in the last sweep).
    - 0 → increment the counter; if count = N, replace the page.
  - This means the clock hand has to sweep past the page N times without it being used before the page is replaced.
- How do we pick N?
  - Why pick a large N? Better approximation to LRU. If N ~ 1K, it is a really good approximation.
  - Why pick a small N? More efficient; otherwise we might have to look a long way to find a free page.
- What about dirty pages?
  - It takes extra overhead to replace a dirty page, so give dirty pages an extra chance before replacing.
  - Common approach: clean pages use N = 1; dirty pages use N = 2 (and write back to disk when N = 1).
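A sketch (my own) of the Nth-chance variant: the clock loop from before, extended with a per-frame sweep counter. The dirty-page refinement (N = 1 for clean pages, N = 2 for dirty ones) is omitted, and N_CHANCES = 2 is an arbitrary choice for the example.

#include <stdio.h>
#include <stdbool.h>

#define NFRAMES   4
#define N_CHANCES 2                     /* sweeps a page may survive unused    */

static int  page[NFRAMES];              /* page held by each frame (-1 = free) */
static bool use_bit[NFRAMES];
static int  count[NFRAMES];             /* sweeps since the page was last used */
static int  hand = 0;

/* Reference a page; returns 1 on a page fault, 0 on a hit. */
int nth_chance_reference(int p)
{
    for (int i = 0; i < NFRAMES; i++)
        if (page[i] == p) { use_bit[i] = true; return 0; }

    for (;;) {                                  /* sweep until a victim is found */
        if (page[hand] == -1) break;            /* free frame: use it right away */
        if (use_bit[hand]) {
            use_bit[hand] = false;              /* used in the last sweep:       */
            count[hand] = 0;                    /* clear bit and reset counter   */
        } else if (++count[hand] >= N_CHANCES) {
            break;                              /* out of chances: evict it      */
        }
        hand = (hand + 1) % NFRAMES;
    }
    page[hand]    = p;
    use_bit[hand] = false;
    count[hand]   = 0;
    hand = (hand + 1) % NFRAMES;
    return 1;
}

int main(void)
{
    for (int i = 0; i < NFRAMES; i++) page[i] = -1;
    const int refs[] = {1, 2, 3, 4, 1, 5, 2, 6, 1, 2};
    for (size_t i = 0; i < sizeof refs / sizeof refs[0]; i++)
        printf("ref %d -> %s\n", refs[i],
               nth_chance_reference(refs[i]) ? "fault" : "hit");
    return 0;
}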

THRASHING
- If a process does not have enough pages, the page-fault rate is very high. This leads to:
  - Low CPU utilization.
  - The operating system spending most of its time swapping to disk.
- Thrashing: a process is busy swapping pages in and out.
- Questions:
  - How do we detect thrashing?
  - What is the best response to thrashing?
[Figure: CPU utilization vs. degree of multiprogramming; utilization rises, then collapses when thrashing sets in.]

LOCALITY IN A MEMORY-REFERENCE PATTERN
- Program memory access patterns have temporal and spatial locality.
- The group of pages accessed during a given time slice is called the working set.
- The working set defines the minimum number of pages needed for the process to behave well.
- Not enough memory for the working set → thrashing. Better to swap out the process?

WORKING-SET MODEL
- Δ = working-set window = a fixed number of page references (example: 10,000 accesses).
- WS_i (the working set of process P_i) = the total set of pages referenced in the most recent Δ references (it varies over time).
  - If Δ is too small, it will not encompass the entire locality.
  - If Δ is too large, it will encompass several localities.
  - If Δ = ∞, it will encompass the entire program.
- D = Σ WS_i = total demand for frames.
- If D > physical memory → thrashing.
- Policy: if D > physical memory, then suspend/swap out processes. This can improve overall system behavior by a lot!
[Figure: a page reference trace with two windows; WS(t1) = {1,2,5,6,7}, WS(t2) = {3,4}.]
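A small sketch (my own) of the working-set definition: for each point in time, count the distinct pages referenced in the last Δ references. The page numbering, Δ = 5, and the toy reference string are illustrative assumptions.

#include <stdio.h>
#include <stdbool.h>

#define MAX_PAGE 64

/* Working-set size at time t: distinct pages in the last `delta` references. */
static int working_set_size(const int *refs, int t, int delta)
{
    bool in_ws[MAX_PAGE] = { false };
    int size  = 0;
    int start = (t - delta + 1 > 0) ? t - delta + 1 : 0;

    for (int i = start; i <= t; i++) {
        if (!in_ws[refs[i]]) {
            in_ws[refs[i]] = true;
            size++;
        }
    }
    return size;
}

int main(void)
{
    /* Toy reference string; Δ = 5 here just for illustration. */
    const int refs[]  = {1, 2, 5, 6, 7, 7, 7, 5, 1, 3, 4, 3, 4, 4, 3, 4};
    const int n       = sizeof refs / sizeof refs[0];
    const int delta   = 5;

    for (int t = delta - 1; t < n; t++)
        printf("t=%2d  |WS| = %d\n", t, working_set_size(refs, t, delta));
    return 0;
}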

WHAT ABOUT COMPULSORY MISSES?
- Recall that compulsory misses occur the first time a page is seen:
  - Pages that are touched for the first time.
  - Pages that are touched after a process is swapped out and swapped back in.
- Clustering: on a page fault, bring in multiple pages around the faulting page.
  - Since the efficiency of disk reads increases with sequential reads, it makes sense to read several sequential pages.
  - Tradeoff: prefetching may evict other in-use pages in favor of never-used prefetched pages.
- Working-set tracking: use an algorithm to track the working set of the application; when swapping the process back in, swap in its working set.

POP QUIZ 4: ADDRESS TRANSLATION
Q1: True _ False _ Paging does not suffer from external fragmentation.
Q2: True _ False _ The segment offset can be larger than the segment size.
Q3: True _ False _ Paging: to compute the physical address, add the physical page # and the offset.
Q4: True _ False _ Uni-programming doesn't provide address protection.
Q5: True _ False _ The virtual address space is always larger than the physical address space.
Q6: True _ False _ Inverted page tables keep fewer entries than multi-level page tables.