Virtual Memory Design and Implementation


- Page replacement algorithms
- Design and implementation issues
- Next: last on virtualization, VMMs

Loading pages
- When should the OS load pages? On demand or ahead of need
- Demand paging, i.e., only when referenced
  - Just the code/data needed, but needs change over time
- Anticipatory paging (prefetching)
  - If you know/can guess the pages a process will need
  - Few systems try to anticipate future needs; the OS is not very good at prediction

And if the page is not there...
- If a process references a virtual address in an evicted or never-loaded page:
  - When the page was evicted, the OS set the PTE as invalid and noted the disk location of the page
    - In a data structure shaped like a page table, but holding disk addresses
  - When the process tries to access the page: page fault
- The OS runs the page fault handler
  - The handler uses that page-table-like data structure to find the page on disk, reads the page in, updates the PTE to point to it, and sets it valid
- The OS restarts the faulting process
- There are a million and one details
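
To make the flow concrete, here is a toy user-space simulation of that path. The names and sizes (DISK_PAGES, NFRAMES, read_byte, the trivial FIFO victim choice) are all invented for the sketch; the point is only the invalid-PTE check, the fault handler, and the PTE update before the access is retried.

/* Toy simulation of demand paging: a "disk" of pages, a few RAM frames,
 * and a page table whose invalid entries trigger a fault handler that
 * loads the page and fixes the PTE. Not a real kernel path. */
#include <stdio.h>
#include <string.h>

#define DISK_PAGES 8
#define NFRAMES    3
#define PAGE_SIZE  4

static char disk[DISK_PAGES][PAGE_SIZE];   /* backing store */
static char ram[NFRAMES][PAGE_SIZE];       /* physical memory */
static int  pte[DISK_PAGES];               /* frame number, or -1 = invalid */
static int  owner[NFRAMES];                /* which virtual page holds the frame */
static int  next_victim = 0;               /* trivial FIFO replacement */
static int  faults = 0;

static void page_fault(int vpn)
{
    int frame = next_victim;
    next_victim = (next_victim + 1) % NFRAMES;
    if (owner[frame] >= 0) {
        memcpy(disk[owner[frame]], ram[frame], PAGE_SIZE); /* "write back" victim */
        pte[owner[frame]] = -1;                            /* invalidate its PTE */
    }
    memcpy(ram[frame], disk[vpn], PAGE_SIZE);              /* read the page in */
    pte[vpn] = frame;                                      /* PTE now valid */
    owner[frame] = vpn;
    faults++;
}

static char read_byte(int vpn, int offset)
{
    if (pte[vpn] < 0)             /* invalid PTE -> page fault */
        page_fault(vpn);
    return ram[pte[vpn]][offset]; /* restart the access */
}

int main(void)
{
    for (int p = 0; p < DISK_PAGES; p++) { pte[p] = -1; disk[p][0] = 'A' + p; }
    for (int f = 0; f < NFRAMES; f++) owner[f] = -1;

    int refs[] = {0, 1, 2, 0, 3, 0, 4, 2};
    for (unsigned i = 0; i < sizeof refs / sizeof *refs; i++)
        printf("page %d -> '%c'\n", refs[i], read_byte(refs[i], 0));
    printf("%d page faults\n", faults);
    return 0;
}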

Page replacement algorithms
- Need room for a new page? Page replacement
- What do you do with the victim page?
  - If modified, save it; otherwise, just write over it
- Better not to choose an often-used page
- How can any of this work?!?! Locality!

Locality!
- Temporal and spatial locality
  - Temporal: recently referenced locations tend to be referenced again soon
  - Spatial: referenced locations tend to be clustered
- Locality means paging could be infrequent
  - A page brought in gets to be used many times
- Some issues that may play against this
  - Degree of locality of the application
  - Page replacement policy and application reference pattern
  - Amount of physical memory and application footprint

The best page to evict
- Goal of the page replacement algorithm: reduce the fault rate by selecting the best victim page
- What's your best candidate for removal?
  - The one you will never touch again, duh!
  - Never is a long time; for Belady's algorithm, let's say the one not needed for the longest period
- Let's look at some algorithms; for now, assume that a process pages against itself, using a fixed number of page frames

Optimal algorithm (Belady's algorithm)
- Provably optimal: replace the page needed at the farthest point in the future
- Optimal but unrealizable
  - Estimate by logging page use on previous runs of the process
  - Although impractical, useful for comparison
- Figure: a reference stream run against four page frames; when room is needed for a new page, your ideal victim is the resident page whose next reference lies farthest in the future
- Compulsory misses (others: capacity misses)
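
A minimal simulator of the idea, using a made-up reference string and frame count (the slide's own example lives in its figure): on each fault with all frames full, it evicts the resident page whose next reference lies farthest in the future.

/* Belady's optimal replacement on a toy reference string: evict the page
 * whose next use is farthest away (or never comes). */
#include <stdio.h>

#define NFRAMES 4

int main(void)
{
    char refs[] = "ABCDABEABCDE";   /* made-up reference string */
    int  nrefs  = 12;
    char frame[NFRAMES];
    int  used = 0, faults = 0;

    for (int t = 0; t < nrefs; t++) {
        int hit = 0;
        for (int f = 0; f < used; f++)
            if (frame[f] == refs[t]) hit = 1;
        if (hit) continue;

        faults++;
        if (used < NFRAMES) { frame[used++] = refs[t]; continue; }

        /* pick the frame whose page is next used farthest away, or never */
        int victim = 0, farthest = -1;
        for (int f = 0; f < NFRAMES; f++) {
            int next = nrefs;                    /* "never used again" */
            for (int u = t + 1; u < nrefs; u++)
                if (refs[u] == frame[f]) { next = u; break; }
            if (next > farthest) { farthest = next; victim = f; }
        }
        frame[victim] = refs[t];
    }
    printf("OPT: %d faults on %d references\n", faults, nrefs);
    return 0;
}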

FIFO algorithm
- Keep a linked list of pages in order of arrival
- Victim is the first page of the list
  - Maybe the oldest page will not be used again
- Disadvantage
  - But maybe it will; the fact is, you have no idea!
  - Increasing physical memory might increase page faults (Belady's anomaly)
- Figure: the same reference stream run against four page frames in FIFO order; the first-in page is the one evicted when room is needed for a new one
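
A small FIFO simulator, run here on the classic reference string 1 2 3 4 1 2 5 1 2 3 4 5 that exhibits Belady's anomaly: with this trace, 3 frames take 9 faults but 4 frames take 10.

/* FIFO replacement on the classic Belady's-anomaly reference string:
 * adding a frame makes the fault count go UP, not down. */
#include <stdio.h>

static int fifo_faults(const int *refs, int nrefs, int nframes)
{
    int frame[16];                /* resident pages, oldest first (circularly) */
    int used = 0, oldest = 0, faults = 0;

    for (int t = 0; t < nrefs; t++) {
        int hit = 0;
        for (int f = 0; f < used; f++)
            if (frame[f] == refs[t]) hit = 1;
        if (hit) continue;
        faults++;
        if (used < nframes) {
            frame[used++] = refs[t];           /* a free frame is available */
        } else {
            frame[oldest] = refs[t];           /* evict the first-in page */
            oldest = (oldest + 1) % nframes;
        }
    }
    return faults;
}

int main(void)
{
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int n = sizeof refs / sizeof *refs;
    printf("3 frames: %d faults\n", fifo_faults(refs, n, 3));   /* 9  */
    printf("4 frames: %d faults\n", fifo_faults(refs, n, 4));   /* 10 */
    return 0;
}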

Least recently used (LRU) algorithm
- Pages used recently will be used again soon
  - Throw out the page unused for the longest time
  - Idea: past experience as a predictor of future needs
  - LRU looks at the past; Belady's wants to look at the future
- How is LRU different from FIFO?
- Must keep a linked list of pages
  - Most recently used at the front, least recently used at the back
  - Update the list on every memory reference!!
  - $$$ in memory bandwidth, algorithm execution time, etc.
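
A sketch of exact LRU using a last-use timestamp per frame as a stand-in for the linked list (the reference string is the same made-up trace as above): on a fault, the frame with the oldest timestamp is the victim. Note that the timestamp, like the list, must be updated on every reference, which is exactly the cost called out above.

/* Exact LRU via per-frame last-use timestamps. */
#include <stdio.h>

#define NFRAMES 3

int main(void)
{
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int nrefs = sizeof refs / sizeof *refs;
    int page[NFRAMES], last_use[NFRAMES];
    int used = 0, faults = 0;

    for (int t = 0; t < nrefs; t++) {
        int hit = -1;
        for (int f = 0; f < used; f++)
            if (page[f] == refs[t]) hit = f;
        if (hit >= 0) { last_use[hit] = t; continue; }    /* touch on every hit */

        faults++;
        int victim;
        if (used < NFRAMES) {
            victim = used++;
        } else {
            victim = 0;                                   /* least recently used */
            for (int f = 1; f < NFRAMES; f++)
                if (last_use[f] < last_use[victim]) victim = f;
        }
        page[victim] = refs[t];
        last_use[victim] = t;
    }
    printf("LRU, %d frames: %d faults\n", NFRAMES, faults);
    return 0;
}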

Second chance algorithm
- Simple modification of FIFO
  - Avoid removing heavily used pages: check the R bit
- Second chance
  - Pages sorted in FIFO order
  - If the page has been used, it gets another chance: move it to the end of the list, clear R, update its timestamp
- Figure: the page list when a fault occurs at time 20; pages are sorted by loading time (most recently loaded first, timestamps 18 down to 0), and the oldest page has its R bit set, so it is moved to the front with a fresh timestamp instead of being evicted
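
A sketch of second chance over a simulated FIFO queue. The queue arrays and the reference string are made up, and the R bits are set by the simulator on each hit rather than by the MMU.

/* Second chance: plain FIFO, except a page whose R bit is set is moved to
 * the back of the queue (R cleared) instead of being evicted. */
#include <stdio.h>

#define NFRAMES 3
#define MAXQ    64

static int qpage[MAXQ], qref[MAXQ];   /* queue of resident pages + R bits */
static int qlen = 0;

static int second_chance_evict(void)
{
    for (;;) {
        if (qref[0] == 0) {                    /* old and unreferenced: evict */
            int victim = qpage[0];
            for (int i = 1; i < qlen; i++) { qpage[i-1] = qpage[i]; qref[i-1] = qref[i]; }
            qlen--;
            return victim;
        }
        /* referenced: clear R and move to the tail, as if just loaded */
        int p = qpage[0];
        for (int i = 1; i < qlen; i++) { qpage[i-1] = qpage[i]; qref[i-1] = qref[i]; }
        qpage[qlen-1] = p;
        qref[qlen-1] = 0;
    }
}

int main(void)
{
    int refs[] = {1, 2, 3, 1, 4, 1, 5, 2, 1, 3};
    int faults = 0;
    for (unsigned t = 0; t < sizeof refs / sizeof *refs; t++) {
        int hit = 0;
        for (int i = 0; i < qlen; i++)
            if (qpage[i] == refs[t]) { qref[i] = 1; hit = 1; }   /* set R bit */
        if (hit) continue;
        faults++;
        if (qlen == NFRAMES) second_chance_evict();
        qpage[qlen] = refs[t]; qref[qlen] = 0; qlen++;
    }
    printf("second chance: %d faults\n", faults);
    return 0;
}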

Clock algorithm
- Second chance is reasonable but inefficient
  - Quit moving pages around; move a pointer instead?
- Clock: same as second chance, but for implementation
  - Keep all pages in a circular list, like a clock, with the hand pointing to the oldest page
  - On a page fault, look at the page pointed to by the hand
    - If R = 0, evict the page
    - If R = 1, clear R and move the hand
- Figure: pages arranged in a circle with their R bits; the hand sweeps around, and the first page it finds with R = 0 is the one to evict
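
The same policy sketched with a circular array and a hand; the frame count, page values, and reference string are invented for the example.

/* Clock: second chance without list surgery. R=1 pages get their bit
 * cleared as the hand passes; the first R=0 page under the hand is evicted. */
#include <stdio.h>

#define NFRAMES 4

static int page[NFRAMES];   /* -1 = empty frame */
static int rbit[NFRAMES];
static int hand = 0;

static int clock_pick_frame(void)
{
    for (;;) {
        if (page[hand] < 0 || rbit[hand] == 0) {   /* empty or not recently used */
            int f = hand;
            hand = (hand + 1) % NFRAMES;
            return f;
        }
        rbit[hand] = 0;                            /* give it a second chance */
        hand = (hand + 1) % NFRAMES;
    }
}

int main(void)
{
    for (int f = 0; f < NFRAMES; f++) page[f] = -1;

    int refs[] = {1, 2, 3, 4, 1, 5, 2, 6, 1, 2, 3};
    int faults = 0;
    for (unsigned t = 0; t < sizeof refs / sizeof *refs; t++) {
        int hit = 0;
        for (int f = 0; f < NFRAMES; f++)
            if (page[f] == refs[t]) { rbit[f] = 1; hit = 1; }   /* reference sets R */
        if (hit) continue;
        faults++;
        int f = clock_pick_frame();
        page[f] = refs[t];
        rbit[f] = 0;
    }
    printf("clock: %d faults\n", faults);
    return 0;
}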

Not recently used (NRU) algorithm
- Each page has Reference and Modified bits
  - Set when the page is referenced / modified
  - R bit set means recently referenced, so clear it regularly
- Pages are classified by (R, M):
  - Class 0: (0,0) not referenced, not modified
  - Class 1: (0,1) not referenced, modified (how can this occur?)
  - Class 2: (1,0) referenced, but not modified
  - Class 3: (1,1) referenced and modified
- NRU removes a page at random from the lowest-numbered, non-empty class
- Easy to understand, relatively efficient to implement, and sort-of-OK performance
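
A sketch of the class selection, with example (R, M) values hard-coded rather than read from real PTEs.

/* NRU: classify pages by (R, M) into classes 0..3 and evict a random page
 * from the lowest non-empty class. */
#include <stdio.h>
#include <stdlib.h>

struct page { int id, r, m; };

static int nru_pick(struct page *pages, int n)
{
    for (int cls = 0; cls < 4; cls++) {
        int candidates[64], k = 0;
        for (int i = 0; i < n; i++)
            if (pages[i].r * 2 + pages[i].m == cls)
                candidates[k++] = i;
        if (k > 0)
            return candidates[rand() % k];   /* random page in the lowest class */
    }
    return -1;   /* not reached if n > 0 */
}

int main(void)
{
    struct page pages[] = {
        {10, 1, 1},   /* class 3: referenced and modified  */
        {11, 1, 0},   /* class 2: referenced, clean        */
        {12, 0, 1},   /* class 1: not referenced, modified */
        {13, 0, 0},   /* class 0: not referenced, clean    */
    };
    int v = nru_pick(pages, 4);
    printf("NRU evicts page %d (class %d)\n",
           pages[v].id, pages[v].r * 2 + pages[v].m);
    return 0;
}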

And now, a short break (xkcd)

Approximating LRU
- With some extra help from hardware: keep a counter in the PTE
  - Equip the hardware with a counter, incremented after each instruction
  - After each reference, copy the hardware counter into the PTE of the referenced page
  - Choose the page with the lowest counter value
- In software: Not Frequently Used (NFU)
  - A software counter associated with each page, starting at 0
  - At each clock interrupt, add R to the counter of each page
  - Need a victim? Lowest counter value
- Problem: it never forgets!
  - A page heavily used early on and never touched again will keep a high count for a long time and not be evicted
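
A small NFU simulation over a made-up per-tick trace of R bits. It also shows the never-forgets problem: the page hammered in the first few ticks keeps its high count, while a page touched only in the last two ticks becomes the victim.

/* NFU: a software counter per page; at each clock tick the R bit is added
 * to the counter (and R is then cleared). Victim = lowest counter. */
#include <stdio.h>

#define NPAGES 4
#define NTICKS 8

int main(void)
{
    /* r[t][p] = was page p referenced during tick t? (made-up trace) */
    int r[NTICKS][NPAGES] = {
        {1, 0, 0, 0}, {1, 0, 0, 0}, {1, 0, 1, 0}, {1, 0, 1, 0},
        {0, 0, 1, 1}, {0, 0, 1, 1}, {0, 1, 0, 1}, {0, 1, 0, 1},
    };
    unsigned counter[NPAGES] = {0};

    for (int t = 0; t < NTICKS; t++)
        for (int p = 0; p < NPAGES; p++)
            counter[p] += r[t][p];            /* add R, then R is cleared */

    int victim = 0;
    for (int p = 1; p < NPAGES; p++)
        if (counter[p] < counter[victim]) victim = p;

    for (int p = 0; p < NPAGES; p++)
        printf("page %d: counter %u\n", p, counter[p]);
    printf("NFU evicts page %d (referenced in the last two ticks) "
           "and keeps page 0 (idle since tick 3)\n", victim);
    return 0;
}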

Approximating LRU
- Better, with a small modification: aging
  - Shift the counter right and push R in from the left, dropping a bit on the right
- How is this not LRU?
  - One bit per tick, and a finite number of bits per counter
  - Precision: all pages referenced within the same interval look the same
  - Limited past horizon: two pages with all 8 bits unset look the same, even if one dropped a set 9th bit just before
- Figure: R bits and 8-bit counters for six pages over several clock ticks; at each tick the counters shift right and that tick's R bits enter as the new leftmost bit
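
The same idea with aging counters, over a made-up R-bit trace: each tick shifts each 8-bit counter right and pushes that tick's R bit in from the left, and the smallest counter marks the victim.

/* Aging: 8-bit counter per page, shifted right each tick with the R bit
 * inserted as the new most-significant bit. */
#include <stdio.h>
#include <stdint.h>

#define NPAGES 6
#define NTICKS 3

int main(void)
{
    /* R bits observed at each of three clock ticks (made-up trace) */
    int r[NTICKS][NPAGES] = {
        {1, 0, 1, 0, 1, 0},
        {1, 1, 0, 0, 1, 0},
        {1, 1, 0, 1, 0, 1},
    };
    uint8_t age[NPAGES] = {0};

    for (int t = 0; t < NTICKS; t++)
        for (int p = 0; p < NPAGES; p++)
            age[p] = (uint8_t)((age[p] >> 1) | (r[t][p] << 7));

    int victim = 0;
    for (int p = 1; p < NPAGES; p++)
        if (age[p] < age[victim]) victim = p;

    for (int p = 0; p < NPAGES; p++)
        printf("page %d: age counter %02x\n", p, age[p]);
    printf("evict page %d (smallest counter)\n", victim);
    return 0;
}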

Working set
- Purest form of paging: demand paging
  - After a while, you have all the pages you need
- Working set
  - Most programs show locality of reference: over a short time, just a few common pages
  - The pages currently needed by a process form its working set
- What does it mean to ask how much memory program x needs?
- Can you do better knowing a process's WS?
  - Intuitively, the working set must be in memory; otherwise you'll experience heavy faulting (thrashing)
  - Reducing turnaround time? Pre-paging

Working set
- Working set definition
  - ws(k,t) = {p such that p was referenced in the k most recent memory references at time t} (k is the WS window size)
  - Clearly ws(k_i,t) ⊆ ws(k_j,t) for i < j
  - What bounds ws(k,t) as you increase k? |ws(k,t)| ≤ k
- How would you use this in a replacement algorithm?
  - A shift register holding the last k page references, remove duplicates; victim: a page not in the WS
- A more practical definition
  - Instead of the last k references, the last τ msec of execution (virtual) time
  - Set of pages used during the last τ msec
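
A direct, deliberately inefficient computation of ws(k,t) from a reference string, just to make the definition and the bound |ws(k,t)| ≤ k concrete; the trace is made up.

/* ws(k, t): the set of distinct pages touched in the last k references
 * before time t. */
#include <stdio.h>

static int working_set(const int *refs, int t, int k, int *out)
{
    int n = 0;
    int start = (t - k >= 0) ? t - k : 0;
    for (int i = start; i < t; i++) {          /* the k most recent references */
        int seen = 0;
        for (int j = 0; j < n; j++)
            if (out[j] == refs[i]) seen = 1;
        if (!seen) out[n++] = refs[i];         /* keep distinct pages only */
    }
    return n;                                  /* |ws(k, t)| <= k */
}

int main(void)
{
    int refs[] = {1, 2, 1, 3, 2, 2, 4, 4, 4, 5};
    int ws[16];

    for (int k = 2; k <= 6; k += 2) {
        int n = working_set(refs, 10, k, ws);
        printf("ws(k=%d, t=10) = {", k);
        for (int i = 0; i < n; i++) printf(" %d", ws[i]);
        printf(" }, size %d\n", n);
    }
    return 0;
}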

Working set algorithm
- The first replacement algorithm based on the WS
- At each clock interrupt, scan the page table
  - R = 1? Write the current virtual time (VT) into the page's time of last use
  - R = 0? If VT - time of last use > threshold, out!
  - Else, keep looking and evict the oldest page with R = 0
  - If all pages are in the WS (all R = 1), pick at random, preferably a clean page
- Figure: page table entries with their R bits and times of last use; with current virtual time 2204 and threshold 700, the page to remove is the R = 0 entry last used at 1213
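
A sketch of that scan, using the numbers from the slide's figure (current virtual time 2204, threshold 700); the struct and function names are invented.

/* Working-set scan: refresh the last-use time of R=1 entries; an R=0
 * entry older than the threshold is outside the WS and can be evicted;
 * otherwise remember the oldest R=0 page as a fallback. */
#include <stdio.h>

struct entry { int page, r, last_use; };

static int ws_pick_victim(struct entry *pt, int n, int vtime, int tau)
{
    int oldest = -1;
    for (int i = 0; i < n; i++) {
        if (pt[i].r) {
            pt[i].last_use = vtime;            /* referenced: still in the WS */
            continue;
        }
        if (vtime - pt[i].last_use > tau)
            return i;                          /* outside the WS: evict it */
        if (oldest < 0 || pt[i].last_use < pt[oldest].last_use)
            oldest = i;                        /* remember the oldest R=0 page */
    }
    return oldest;  /* -1 means all R=1: caller picks at random, prefer clean */
}

int main(void)
{
    struct entry pt[] = {
        {10, 0, 1213}, {11, 1, 2014}, {12, 1, 2020}, {13, 1, 2032}, {14, 0, 1620},
    };
    int v = ws_pick_victim(pt, 5, 2204, 700);   /* current virtual time, threshold */
    if (v >= 0)
        printf("evict page %d (last used at %d)\n", pt[v].page, pt[v].last_use);
    else
        printf("all pages in the working set\n");
    return 0;
}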

WSClock algorithm
- Problem: the WS algorithm scans the whole table
  - Instead, scan only what you need to find a victim
- Combine clock and working set
  - If R = 1, clear it and move on
  - If R = 0: if age > T and the page is clean, out
    - If dirty, schedule a write and check the next one
  - If you loop all the way around:
    - There's at least one write scheduled: you'll have a clean page soon
    - There's none: pick any one
- Figure: the circular list with times of last use and R bits; with current virtual time 2204, the entry last used at 1213 has R = 0 and 2204 - 1213 > T, so it is the candidate
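
A sketch of one WSClock pass, with invented entry values and a stub schedule_write(): R = 1 entries get their bit cleared, an old dirty entry gets a write scheduled, and the first old clean entry is returned as the victim.

/* One WSClock pass: sweep from the hand, clear R bits, evict the first
 * old-and-clean page, and start write-backs for old-and-dirty ones so a
 * clean victim will exist on a later pass. */
#include <stdio.h>

struct entry { int page, r, dirty, last_use; };

static void schedule_write(struct entry *e)     /* stand-in for disk I/O */
{
    printf("scheduling write-back of page %d\n", e->page);
    e->dirty = 0;   /* in reality this clears only when the I/O finishes */
}

static int wsclock_pick(struct entry *pt, int n, int *hand, int vtime, int tau)
{
    int writes = 0;
    for (int step = 0; step < n; step++, *hand = (*hand + 1) % n) {
        struct entry *e = &pt[*hand];
        if (e->r) { e->r = 0; continue; }             /* in the working set */
        if (vtime - e->last_use <= tau) continue;     /* too young to evict */
        if (!e->dirty) return *hand;                  /* old and clean: victim */
        schedule_write(e);                            /* old but dirty */
        writes++;
    }
    /* looped around: if writes were scheduled, a clean page will turn up
     * soon; otherwise any page can be claimed (not shown here) */
    return writes ? -1 : *hand;
}

int main(void)
{
    struct entry pt[] = {
        {20, 1, 0, 2084}, {21, 0, 1, 1213}, {22, 0, 0, 1400},
        {23, 1, 0, 2032}, {24, 1, 1, 2020},
    };
    int hand = 0;
    int v = wsclock_pick(pt, 5, &hand, 2204, 700);
    if (v >= 0) printf("evict page %d\n", pt[v].page);
    else        printf("no clean victim yet; retry after write-back\n");
    return 0;
}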

Cleaning policy
- To avoid having to write pages out when a frame is needed: a paging daemon
  - Keeps enough pages free with periodic inspection
  - If we need a page before its frame is overwritten, reclaim it!
- Two hands for better performance (BSD)
  - The first hand clears R, the second checks it
  - If the hands are close, only heavily used pages have a chance
  - If the back hand is just ahead of the front hand (359°), it is the original clock
- Two key parameters, adjusted at runtime
  - Scanrate: the rate at which the hands move through the list
  - Handspread: the gap between them
- Figure: the circular page list again, now swept by the two hands
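
A rough sketch of one revolution of a two-handed sweep, with invented frame contents: the front hand clears R bits, and the back hand, handspread frames behind, reclaims frames whose R bit is still clear. One mid-sweep reference is simulated to show how a touched page escapes.

/* Two-handed clock daemon, one revolution over a toy frame table. */
#include <stdio.h>

#define NFRAMES    8
#define HANDSPREAD 3   /* gap between the two hands, in frames */

struct frame { int page, r; };

int main(void)
{
    struct frame frames[NFRAMES] = {
        {0,1},{1,0},{2,1},{3,1},{4,0},{5,1},{6,0},{7,1},
    };

    for (int front = 0; front < NFRAMES; front++) {
        frames[front].r = 0;                  /* front hand: clear the R bit */

        if (front == 4)                       /* simulate the process touching */
            frames[2].r = 1;                  /* page 2 between the two hands  */

        if (front >= HANDSPREAD) {            /* back hand trails by HANDSPREAD */
            int back = front - HANDSPREAD;
            if (frames[back].r == 0) {        /* still untouched: reclaim it */
                printf("reclaim frame %d (page %d)\n", back, frames[back].page);
                frames[back].page = -1;       /* onto the free list */
            }
        }
    }
    printf("frame 2 (page 2) survived: it was referenced between the hands\n");
    return 0;
}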

Design issues: global vs. local policy
- When you need a page frame, pick a victim from
  - Your own resident pages (local) or all pages (global)
- Local algorithms
  - Basically, every process gets a fixed percentage of memory
- Global algorithms
  - Dynamically allocate frames among processes
  - Better, especially if WS size changes at runtime
- How many page frames per process?
  - Start with a basic set and react to the Page Fault Frequency (PFF)
- Most replacement algorithms can work both ways
  - Except those based on the working set; why? (The working set is defined per process)

Load control
- Despite good designs, the system may still thrash
  - Sum of working sets > physical memory
- How do you know? Page Fault Frequency (PFF)
  - Some processes need more memory, but no process needs less
- Way out: swapping
  - So yes, even with paging you still need swapping
  - Reduce the number of processes competing for memory
  - ~ two-level scheduling; be careful with which process to swap out (there's more than just paging to worry about!)
  - What would you like of the remaining processes?

Swap area
- How do we manage the swap area?
  - Allocate swap space to a process when it is started
  - Keep the offset to the process's swap area in its PCB
  - The process can be brought in entirely when started, or as needed
- Some problems
  - Size: the process can grow; split text/data/stack segments in the swap area
  - Or do not allocate anything in advance; you may need extra memory to keep track of pages in swap!

Page fault handling
1. Hardware traps to the kernel
2. General registers saved by an assembler routine, OS called
3. OS finds which virtual page caused the fault
4. OS checks that the address is valid, seeks a page frame
5. If the selected frame is dirty, write it to disk (context switch)
6. Get the new page (context switch), update the page table
7. Back up the instruction where it was interrupted
8. Schedule the faulting process
9. Load registers and other state, and return to user space

Instruction backup
- With a page fault, the current instruction is stopped part way through; restarting it is harder than you think!
- Consider the instruction: MOV.L #6(A1), 2(A0)
  - One instruction, three memory references (the instruction word itself and the two offsets for the operands)
  - Figure: the instruction word MOVE at address 1000, with the operand words 6 and 2 at 1002 and 1004
  - Which one caused the page fault? What's the PC then?
- Worse: autodecrement/autoincrement as a side effect of execution?
- Some CPU designs include hidden registers to store
  - The beginning of the instruction
  - Whether there was an autodecrement/autoincrement, and the amount

Last: Separating policy and mechanism
- How should the memory management system be structured for easy separation? (based on Mach)
  1. Low-level MMU handler: machine dependent
  2. Page-fault handler in the kernel: machine independent, most of the paging mechanism
  3. External pager running in user space: policy lives here
- Figure: the user process page-faults into the kernel fault handler, which asks the user-space external pager for the needed page; the pager requests it from disk and hands it back ("here is the page"), and the MMU handler maps the page in

Separation of policy and mechanism
- Where should we put the page replacement algorithm?
  - Cleanest: in the external pager, but it has no access to the R and M bits
  - Either pass this info to the pager, or have the fault handler inform the external pager which page is the victim
- Pros and cons
  - More modular, flexible
  - Overhead of crossing the user-kernel boundary plus message exchange
  - As HW gets faster and SW more complex, the overhead argument weakens

Next time
- Virtualize the CPU, virtualize memory... let's virtualize the whole machine