Virtual Memory Design and Implementation

Today
! Page replacement algorithms
! Some design and implementation issues
Next
! Last on virtualization: VMMs

How can any of this work?!?!
! Locality
  Temporal locality - locations referenced recently tend to be referenced again soon
  Spatial locality - locations near recently referenced ones are more likely to be referenced soon
! Locality means paging could be infrequent
  Once you have brought a page in, you'll use it many times
  Some issues that may play against you:
  Degree of locality of the application
  Page replacement policy and application reference pattern
  Amount of physical memory and application footprint

Page replacement algorithms
! OS uses main memory as a (page) cache
  If a page is loaded only when referenced - demand paging
! Page fault = cache miss
  Need room for the new page? Page replacement algorithm
  What's your best candidate for removal? The one you will never touch again - duh!
! What do you do with the victim page?
  If modified, it must be saved; otherwise it can just be overwritten
  Better not to choose an often-used page
! Let's look at some algorithms
  For now, assume a process pages against itself, using a fixed number of page frames

Optimal algorithm (Belady's algorithm)
! If you could only tell!
  The best page to replace is the one you'll never need again
  Replace the page needed at the farthest point in the future
  Optimal, but unrealizable
! Estimate by logging page use on previous runs of the process
  Although impractical, useful for comparison
! Example - four page frames, reference stream A, B, C, D, A, B, E, A, B, C, D, E
  The first references to A, B, C, D are compulsory misses
  E - need room for this one! D is your ideal victim (the page needed farthest in the future)
  The second reference to D is a capacity miss
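
As an aside (not from the slides), a minimal Python sketch of Belady's optimal policy simulated offline over a reference string; optimal_faults is a made-up helper name and the frame count and stream are just the example values above.

    def optimal_faults(refs, nframes):
        """Simulate Belady's optimal replacement; return the number of page faults."""
        frames, faults = [], 0
        for i, page in enumerate(refs):
            if page in frames:
                continue                      # hit
            faults += 1                       # miss
            if len(frames) < nframes:
                frames.append(page)           # free frame available
                continue
            # Evict the resident page whose next use is farthest in the future
            # (or that is never used again).
            def next_use(p):
                future = refs[i + 1:]
                return future.index(p) if p in future else float("inf")
            frames.remove(max(frames, key=next_use))
            frames.append(page)
        return faults

    print(optimal_faults(list("ABCDABEABCDE"), 4))   # 6 faults on the example stream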

FIFO algorithm
! Maintain a linked list of all pages in order of arrival
! Victim is the first page on the list
  Maybe the oldest page will not be used again
! Disadvantages
  But maybe it will - the fact is, you have no idea!
  Increasing physical memory might increase page faults (Belady's anomaly)
! Example - same reference stream A, B, C, D, A, B, E, A, B, C, D, E
  On E: evict A, your oldest page - which of course you need right after!
  And again for the pages evicted after it
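
A hedged FIFO simulation in the same style (again not from the slides; fifo_faults is an invented helper):

    from collections import deque

    def fifo_faults(refs, nframes):
        """Simulate FIFO replacement; return the number of page faults."""
        queue, resident, faults = deque(), set(), 0
        for page in refs:
            if page in resident:
                continue                      # hit
            faults += 1                       # miss
            if len(queue) == nframes:         # no free frame: evict the oldest arrival
                resident.discard(queue.popleft())
            queue.append(page)
            resident.add(page)
        return faults

    print(fifo_faults(list("ABCDABEABCDE"), 4))   # 10 faults, versus 6 for the optimal policy

Running the same stream with only 3 frames gives 9 faults, fewer than with 4 frames - Belady's anomaly in action.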

Least recently used (LRU) algorithm
! Pages used recently will be used again soon
  Throw out the page unused for the longest time
  Idea: past experience is a decent predictor of future behavior
  LRU looks at the past; Belady's wants to look at the future
  How is LRU different from FIFO?
! Must keep a linked list of pages
  Most recently used at the front, least recently used at the rear
  Update this list on every memory reference!
  Too expensive in memory bandwidth, algorithm execution time, etc.
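
The per-reference list update is what makes exact LRU expensive in hardware; in an offline simulation it is only a few lines. A minimal sketch (not from the slides), where an OrderedDict plays the role of the linked list:

    from collections import OrderedDict

    def lru_faults(refs, nframes):
        """Simulate exact LRU; front of the OrderedDict = least recently used."""
        lru, faults = OrderedDict(), 0
        for page in refs:
            if page in lru:
                lru.move_to_end(page)        # touched: move to the most-recently-used end
                continue
            faults += 1
            if len(lru) == nframes:
                lru.popitem(last=False)      # evict the least recently used page
            lru[page] = True
        return faults

    print(lru_faults(list("ABCDABEABCDE"), 4))   # 8 faults - between FIFO (10) and optimal (6)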

Second chance algorithm
! Simple modification of FIFO
  Avoid throwing out a heavily used page - look at the R bit
! Operation of second chance
  Pages are kept sorted in FIFO order
  If the oldest page has been used (R = 1), it gets another chance: move it to the end of the list, clear R and update its timestamp
! Example - page list if a fault occurs at time 20 and A has its R bit set (times are loading times)
  Before: H(18) G(15) F(14) E(12) D(8) C(7) B(3) A(0, R=1), with A the oldest page and H the most recently loaded
  After:  A(20, R=0) H(18) G(15) F(14) E(12) D(8) C(7) B(3) - A is treated as a newly loaded page
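
A small sketch of the victim-selection step (not from the slides; the page names and R bits are made up):

    from collections import deque

    def second_chance_evict(pages):
        """Pick a victim from a FIFO deque of [name, r_bit] entries (oldest at the left).
        Pages with R = 1 get a second chance: R is cleared and they move to the tail."""
        while True:
            name, r_bit = pages.popleft()
            if r_bit:
                pages.append([name, 0])      # another chance, as if newly loaded
            else:
                return name                  # oldest page that has not been referenced

    queue = deque([["A", 1], ["B", 0], ["C", 1], ["D", 0]])   # A is the oldest arrival
    print(second_chance_evict(queue))        # -> "B": A was referenced, so it is spared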

Clock algorithm
! Second chance is reasonable but inefficient
  Quit moving pages around - move a pointer instead
! Same as second chance except for the implementation
  Keep all pages in a circular list, like a clock, with the hand pointing to the oldest page
  On a page fault:
  Look at the page pointed to by the hand
  If R = 0, evict that page ("evict this one!")
  If R = 1, clear R and advance the hand
! (Figure: circular list of pages A-L with their R bits; the hand sweeps until it finds a page with R = 0)
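
The same idea as second chance, but with a hand over a fixed array instead of list surgery; a minimal sketch with invented frame contents:

    def clock_evict(pages, hand):
        """pages is a circular list of [name, r_bit]; hand is the current index.
        Returns (victim_index, new_hand)."""
        while True:
            name, r_bit = pages[hand]
            if r_bit == 0:
                return hand, (hand + 1) % len(pages)   # evict here, hand moves past it
            pages[hand][1] = 0                         # clear R and advance the hand
            hand = (hand + 1) % len(pages)

    frames = [["A", 1], ["B", 1], ["C", 0], ["D", 1]]
    victim, hand = clock_evict(frames, 0)
    print(frames[victim][0], hand)                     # -> C 3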

Not recently used (NRU) algorithm
! Each page has Reference (R) and Modified (M) bits
  Set when the page is referenced or modified
  An R bit that is set means "recently referenced", so you must clear it every now and then
! Pages are classified by (R, M):
  R=0 M=0  Class 0 - not referenced, not modified
  R=0 M=1  Class 1 - not referenced, but modified (how can this occur?)
  R=1 M=0  Class 2 - referenced, but not modified
  R=1 M=1  Class 3 - referenced and modified
! NRU removes a page at random from the lowest-numbered, non-empty class
! Easy to understand, relatively efficient to implement, and sort-of-OK performance
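
A hedged sketch of the classification and selection step (page names and bit values are made up):

    import random

    def nru_choose(pages):
        """pages: dict name -> (r_bit, m_bit).  Pick a random page from the
        lowest-numbered non-empty class, where class = 2*R + M."""
        by_class = {}
        for name, (r, m) in pages.items():
            by_class.setdefault(2 * r + m, []).append(name)
        lowest = min(by_class)               # lowest-numbered non-empty class
        return random.choice(by_class[lowest])

    pages = {"A": (1, 1), "B": (0, 1), "C": (1, 0), "D": (0, 1)}
    print(nru_choose(pages))                 # -> "B" or "D" (the class 1 pages)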

Approximating LRU
! With some extra help from hardware
  Keep a counter in each PTE
  Equip the hardware with a counter, incremented after each instruction
  After each reference, copy the hardware counter into the PTE of the referenced page
  Choose the page with the lowest counter value
! In software - Not Frequently Used (NFU)
  A software counter is associated with each page
  At each clock interrupt, add R to the counter of each page
  Problem - it never forgets!
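
A minimal NFU sketch (not from the slides; the page names and reference patterns are invented):

    def nfu_tick(counters, referenced):
        """One clock tick of NFU: add 1 to the counter of every page whose R bit was set."""
        for page in counters:
            counters[page] += 1 if page in referenced else 0

    counters = {"A": 0, "B": 0, "C": 0}
    nfu_tick(counters, {"A", "C"})             # pages referenced during this tick
    nfu_tick(counters, {"A", "B"})
    victim = min(counters, key=counters.get)   # NFU evicts the lowest-count page
    print(counters, victim)                    # {'A': 2, 'B': 1, 'C': 1}  -> 'B'

Note the "never forgets" problem: a page that was heavily used long ago keeps its high count forever.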

Approximating LRU - better: aging
! Push R in from the left of each page's counter, drop the bit on the right
  How is this not LRU? One bit per clock tick, and a finite number of bits per counter
! (Figure: six page counters, each shifted right once per tick, with that tick's R bits
  - e.g. 1 0 1 0 1 1, then 1 1 0 0 1 0 - inserted as the new leftmost bits)
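
A small aging sketch with 8-bit counters (page names and reference patterns made up):

    def aging_tick(counters, referenced, bits=8):
        """One clock tick of the aging algorithm: shift every counter right by one
        and insert the page's R bit as the new leftmost bit."""
        for page in counters:
            r = 1 if page in referenced else 0
            counters[page] = (counters[page] >> 1) | (r << (bits - 1))

    counters = {"A": 0, "B": 0, "C": 0}
    aging_tick(counters, {"A", "C"})
    aging_tick(counters, {"B", "C"})
    victim = min(counters, key=counters.get)    # smallest counter = least recently used
    print({p: format(c, "08b") for p, c in counters.items()}, victim)
    # {'A': '01000000', 'B': '10000000', 'C': '11000000'}  -> evict 'A'

Unlike NFU, the right shift makes old references fade away, which is why aging approximates LRU.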

And now a short break

Working set
! Most programs show locality of reference
  Over a short time, they touch just a few common pages
! Working set
  Models the dynamic locality of a process's memory usage, i.e. the set of pages currently needed by the process
! Intuitively, the working set must be in memory, otherwise you'll experience heavy faulting (thrashing)
  What does it mean to ask "how much memory does program x need"?
  What is program x's average/worst-case working set size?

Working set
! Demand paging
  Simplest strategy - load a page when it is needed
! Can you do better knowing a process's working set?
  How could you use this to reduce turnaround time? Prepaging
! Working set definition
  ws(k, t) = { pages p such that p was referenced in the k most recent memory references }
  (k is the working-set window size)
  What bounds ws(k, t) as you increase k?
  Clearly ws(k_i, t) is a subset of ws(k_j, t) for i < j
  (Figure: |ws(k, t)| as a function of k grows quickly and then levels off)
! A more practical definition - instead of the last k references, the last t msec of execution time
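
The k-reference definition is easy to evaluate on a trace; a tiny sketch (the trace is made up, not course data):

    def working_set(refs, k):
        """ws(k, t) at the end of the trace: the distinct pages among the
        k most recent memory references."""
        return set(refs[-k:]) if k > 0 else set()

    refs = list("ABCABDDDAE")
    print(working_set(refs, 4))   # {'D', 'A', 'E'}
    print(working_set(refs, 8))   # a superset of the above - it only grows with k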

Working set algorithm
! Working set and page replacement
  Victim - a page not in the working set
! At each clock interrupt, scan the page table
  R = 1? Write the Current Virtual Time (CVT) into the page's Time of Last Use
  R = 0? If CVT - Time of Last Use > Threshold, the page is out of the working set - evict it!
  Otherwise, remember it and evict the oldest page with R = 0
  If all pages are in the WS (all R = 1), pick one at random, preferably a clean page
! (Figure: page-table entries with Time of Last Use and R bit, current virtual time 2204, threshold 700;
  the entry with R = 0 and Time of Last Use 1213 is the page to remove)
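
A hedged sketch of one scan (the entry values echo the figure, but ws_choose_victim and its layout are invented):

    def ws_choose_victim(entries, current_vt, threshold):
        """entries: dict page -> [time_of_last_use, r_bit].  Refresh pages with R = 1,
        evict a page older than the threshold, else fall back to the oldest R = 0 page."""
        oldest = None
        for page, entry in entries.items():
            tolu, r = entry
            if r:
                entry[0] = current_vt          # referenced recently: still in the working set
                entry[1] = 0
            elif current_vt - tolu > threshold:
                return page                    # aged out of the working set: evict
            elif oldest is None or tolu < entries[oldest][0]:
                oldest = page                  # candidate if nothing has aged out
        return oldest                          # None if every page had R = 1

    entries = {"A": [2014, 1], "B": [1213, 0], "C": [2032, 1], "D": [1620, 0]}
    print(ws_choose_victim(entries, current_vt=2204, threshold=700))   # -> "B"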

WSClock algorithm
! Problem with the WS algorithm - it scans the whole page table
  Instead, scan only as much as you need to find a victim
! Combine the clock and the working set
  If R = 1, clear it and move on
  If R = 0: if age > T and the page is clean, evict it
  If it is dirty, schedule a write and check the next page
  If you loop all the way around:
  If at least one write is scheduled, keep going - you'll have a clean page soon
  If none is scheduled, pick any page
! (Figure: circular list of entries with Time of Last Use and R bit, current virtual time 2204;
  the entry with R = 0 and 2204 - 1213 > T is the one evicted)
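
A rough sketch of a WSClock sweep (not from the slides; the ring contents are invented and the scheduled write is simply assumed to complete before the second pass):

    def wsclock_evict(ring, hand, current_vt, threshold):
        """ring: circular list of dicts {tolu, r, dirty}.  Clear R bits, evict the first
        old clean page, schedule writes for old dirty pages, and fall back to any page
        after a full loop with no write scheduled."""
        n, write_scheduled = len(ring), False
        for step in range(2 * n):              # at most two sweeps in this sketch
            entry = ring[hand]
            if entry["r"]:
                entry["r"] = 0                 # in the working set: clear R and move on
            elif current_vt - entry["tolu"] > threshold:
                if not entry["dirty"]:
                    return hand                # old and clean: evict this frame
                entry["dirty"] = False         # pretend the scheduled write completed
                write_scheduled = True
            if step == n - 1 and not write_scheduled:
                return hand                    # full loop, nothing scheduled: pick any page
            hand = (hand + 1) % n

    ring = [{"tolu": 2084, "r": 1, "dirty": False},
            {"tolu": 1213, "r": 0, "dirty": True},
            {"tolu": 2003, "r": 1, "dirty": False},
            {"tolu": 1620, "r": 0, "dirty": False}]
    print(wsclock_evict(ring, 0, current_vt=2204, threshold=700))
    # -> 1: the old dirty page, reclaimed once its write has "completed"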

Cleaning policy
! To avoid having to write pages out exactly when they are needed - a paging daemon
  Periodically inspects the state of memory
  Keeps enough pages free
  If we need a page before it is overwritten - reclaim it!
! Two hands for better performance (BSD)
  The first hand clears R, the second checks it
  If the hands are close, only heavily used pages have a chance
  If the back hand is just ahead of the front hand (359 degrees), it is the original clock
  Two key parameters, adjusted at runtime:
  Scanrate - the rate at which the hands move through the list
  Handspread - the gap between them
! (Figure: circular list of page entries with their R bits and the two hands)
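
A toy sketch of the two-handed idea (entirely illustrative; the ring contents, spread and free-list handling are made up):

    def two_handed_sweep(ring, front, spread, free_list):
        """One step: the front hand clears R, the back hand (spread positions behind)
        reclaims pages whose R bit is still clear."""
        n = len(ring)
        back = (front - spread) % n
        ring[front]["r"] = 0                   # front hand: give the page time to prove itself
        if ring[back]["r"] == 0:
            free_list.append(back)             # back hand: not re-referenced, add to free list
        return (front + 1) % n

    ring = [{"r": 1}, {"r": 0}, {"r": 1}, {"r": 1}, {"r": 0}]
    front, free_list = 0, []
    for _ in range(len(ring)):
        front = two_handed_sweep(ring, front, spread=2, free_list=free_list)
    print(free_list)   # -> [4, 0, 1, 2]: pages whose R bit stayed clear when the back hand arrived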

Design issues - global vs. local policy
! When you need a page frame, pick a victim from
  Among your own resident pages - local
  Among all pages - global
! Local algorithms
  Basically, every process gets a fixed share of memory
! Global algorithms
  Dynamically allocate frames among processes
  Better, especially if working set sizes change at runtime
  How many page frames per process? Start with a basic set and react to the Page Fault Frequency (PFF)
! Most replacement algorithms can work both ways, except for those based on the working set
  Why not working-set-based algorithms?
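
A hedged sketch of the PFF reaction mentioned above; the thresholds, step size and frame counts are invented for illustration, not values from the course:

    def pff_adjust(frames, fault_rate, low=0.02, high=0.10, step=4):
        """Adjust a process's frame allocation based on its measured page-fault rate."""
        if fault_rate > high:
            return frames + step               # faulting too much: give it more frames
        if fault_rate < low and frames > step:
            return frames - step               # plenty of headroom: take some frames back
        return frames

    print(pff_adjust(32, 0.15))   # -> 36
    print(pff_adjust(32, 0.01))   # -> 28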

Load control
! Despite good designs, the system may still thrash
  Sum of the working sets > physical memory
! Page Fault Frequency (PFF) may indicate that
  Some processes need more memory, but no process needs less
! Way out: swapping
  So yes, even with paging you still need swapping
  Reduce the number of processes competing for memory - like two-level scheduling
  Be careful with which process you swap out (there is more than just paging to worry about!)
  What would you like of the remaining processes?

Backing store
! How do we manage the swap area?
  Allocate swap space to a process when it is started
  Keep the offset of the process's swap area in its PCB
  The process can be brought in entirely when started, or as needed
! Some problems
  Size - the process can grow: split the text/data/stack segments in the swap area
  Or do not preallocate anything - but then you need extra memory to keep track of where pages are in swap!

Page fault handling
! Hardware traps to the kernel
! General registers are saved by an assembler routine, and the OS is called
! OS finds which virtual page caused the fault
! OS checks that the address is valid and seeks a page frame
! If the selected frame is dirty, write it to disk (context switch)
! Get the new page from disk (context switch), update the page table
! Back up the instruction where it was interrupted
! Schedule the faulting process
! The routine loads the registers and other state and returns to user space

Instruction backup
! With a page fault, the current instruction is stopped part way through - restarting it is harder than you think!
! Consider the instruction: MOV.L #6(A1), 2(A0)
  One instruction, three memory references (the instruction word itself, plus two offsets for the operands)
  (Memory layout: 1000 MOVE, 1002 6, 1004 2)
  Which one caused the page fault? What is the PC then?
  Worse - autodecrement/autoincrement as a side effect of execution?
! Some CPU designs include hidden registers to store
  The beginning of the instruction
  An indication of autodecrement/autoincrement and the amount

Separation of policy & mechanism
! How do you structure the memory management system for easy separation? (based on Mach)
  1. Low-level MMU handler - machine dependent
  2. Page-fault handler in the kernel - machine independent, most of the paging mechanism
  3. External pager running in user space - the policy lives here
! (Figure: message flow between the user process, external pager, disk, fault handler and MMU handler -
  1. page fault, 2. needed page, 3. requested page, 5. here is the page, 6. map page in)

Separation of policy & mechanism
! Where do you put the page replacement algorithm?
  In the external pager? It has no access to the R and M bits
  Either pass them to the pager, or
  Have the fault handler inform the external pager which page is the victim
! Pros and cons
  More modular, more flexible
  Overhead of crossing the user-kernel boundary and of message exchanges
  As computers get faster and software more complex...

Next time
! Virtualize the CPU, virtualize memory, ...
! Let's virtualize the whole machine