Lecture 14: Cache & Virtual Memory


CS 422/522 Design & Implementation of Operating Systems
Lecture 14: Cache & Virtual Memory
Zhong Shao
Dept. of Computer Science, Yale University

Acknowledgement: some slides are taken from previous versions of the CS 422/522 lectures taught by Prof. Bryan Ford and Dr. David Wolinsky, and also from the official set of slides accompanying the OSPP textbook by Anderson and Dahlin.

Definitions
* Cache: a copy of data that is faster to access than the original
  * Hit: the cache has the copy
  * Miss: the cache does not have the copy
* Cache block: unit of cache storage (multiple memory locations)
* Temporal locality: programs tend to reference the same memory locations multiple times
  * Example: instructions in a loop
* Spatial locality: programs tend to reference nearby locations
  * Example: data in a loop

Cache concept (read)
* Check whether the address is in the cache
  * If yes (hit): return the value stored in the cache
  * If no (miss): fetch the value from the next level, store it in the cache, then return it

Cache concept (write)
* Check whether the address is in the cache
  * If yes (hit): store the value in the cache
  * If no (miss): fetch the block, then store the value
* Write through: changes are sent immediately to the next level of storage (often via a write buffer)
* Write back: changes are stored in the cache until the cache block is replaced
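As an illustration (not from the slides), the read/write paths and the two write policies can be sketched with a toy block-granularity cache in Python; `backing` is a stand-in for the next level of storage:

```python
# Toy cache illustrating write-through vs. write-back policies.
class Cache:
    def __init__(self, backing, write_through=True):
        self.backing = backing            # next level of storage (a dict)
        self.data = {}                    # cached copies
        self.dirty = set()                # modified but not yet written back
        self.write_through = write_through

    def read(self, addr):
        if addr not in self.data:         # miss: fetch from the next level
            self.data[addr] = self.backing[addr]
        return self.data[addr]            # hit (or freshly filled)

    def write(self, addr, value):
        self.data[addr] = value
        if self.write_through:
            self.backing[addr] = value    # change sent immediately
        else:
            self.dirty.add(addr)          # change deferred until eviction

    def evict(self, addr):
        if addr in self.dirty:            # write back only when replaced
            self.backing[addr] = self.data[addr]
            self.dirty.discard(addr)
        self.data.pop(addr, None)
```

With write-through the backing store is always current; with write-back it lags behind until the block is evicted, which is exactly why the eviction path on the slides may need a disk write.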

Memory hierarchy
* Example: the Intel i7 has 8 MB of shared 3rd-level cache; the 2nd-level cache is per-core

Main points
* Can we provide the illusion of near-infinite memory in limited physical memory?
  * Demand-paged virtual memory
  * Memory-mapped files
* How do we choose which page to replace?
  * FIFO, MIN, LRU, LFU, Clock
* What types of workloads does caching work for, and how well?
  * Spatial/temporal locality vs. Zipf workloads

Hardware address translation is a power tool
* Kernel trap on read/write to selected addresses enables:
  * Copy on write
  * Fill on reference
  * Zero on use
  * Demand-paged virtual memory
  * Memory-mapped files
  * Modified bit emulation
  * Use bit emulation

Demand paging (before)
* (Figure) Virtual page A maps to a physical frame with R/W access; virtual page B is marked invalid in the page table, and its contents exist only on disk

Demand paging (after)
* (Figure) After the fault is serviced, page B is in memory with a valid R/W mapping; page A has been evicted and its entry is now invalid

Demand paging (hardware-loaded TLB)
1. TLB miss
2. Page table walk
3. Page fault (page invalid in page table)
4. Trap to kernel
5. Convert virtual address to file + offset
6. Allocate page frame
   * Evict a page if needed
7. Initiate disk block read into page frame
8. Disk interrupt when DMA completes
9. Mark page as valid
10. Resume process at faulting instruction
11. TLB miss
12. Page table walk to fetch translation
13. Execute instruction

Demand paging on MIPS (software-loaded TLB)
1. TLB miss
2. Trap to kernel
3. Page table walk
4. Find that the page is invalid
5. Convert virtual address to file + offset
6. Allocate page frame
   * Evict a page if needed
7. Initiate disk block read into page frame
8. Disk interrupt when DMA completes
9. Mark page as valid
10. Load TLB entry
11. Resume process at faulting instruction
12. Execute instruction

Allocating a page frame
* Select an old page to evict
* Find all page table entries that refer to the old page
  * The page frame may be shared
* Set each page table entry to invalid
* Remove any TLB entries (copies of the now-invalid page table entries)
* Write changes on the page back to disk, if necessary

How do we know if a page has been modified?
* Every page table entry has some bookkeeping:
  * Has the page been modified?
    * Set by hardware on a store instruction
    * In both the TLB and the page table entry
  * Has the page been recently used?
    * Set by hardware in the page table entry on every TLB miss
* Bookkeeping bits can be reset by the OS kernel:
  * When changes to the page are flushed to disk
  * To track whether the page is recently used

Keeping track of page modifications (before)
* (Figure) The TLB and page table entries for virtual page A are R/W with the dirty bit clear; the disk holds the old copies of pages A and B

Keeping track of page modifications (after)
* (Figure) After a store to page A, hardware sets the dirty bit in both the TLB and the page table entry; memory holds the new page A while the disk still holds the old copy

Virtual or physical dirty/use bits
* Most machines keep dirty/use bits in the page table entry
* A physical page is:
  * modified if any page table entry that points to it is modified
  * recently used if any page table entry that points to it is recently used
* On MIPS, it is simpler to keep dirty/use bits in the core map
  * Core map: map of physical page frames

Emulating a modified bit (hardware-loaded TLB)
* Some processor architectures do not keep a modified bit per page
  * Extra bookkeeping and complexity
* The kernel can emulate a modified bit:
  * Set all clean pages as read-only
  * On the first write to a page, trap into the kernel
  * The kernel sets the modified bit and marks the page as read-write
  * Resume execution
* The kernel needs to keep track of both:
  * The current page table permission (e.g., read-only)
  * The true page table permission (e.g., writeable, clean)

Emulating a recently used bit (hardware-loaded TLB)
* Some processor architectures do not keep a recently used bit per page
  * Extra bookkeeping and complexity
* The kernel can emulate a recently used bit:
  * Set all recently unused pages as invalid
  * On the first read/write, trap into the kernel
  * The kernel sets the recently used bit and marks the page as read or read/write
* The kernel needs to keep track of both:
  * The current page table permission (e.g., invalid)
  * The true page table permission (e.g., read-only, writeable)
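To make the emulation idea concrete, here is a minimal sketch (my own illustration, not kernel code): a clean page is mapped read-only, so the first store "traps", and the trap handler records the modification before upgrading the mapping. The `Page` class and `store` function are hypothetical names for this sketch.

```python
# Sketch of modified-bit emulation when hardware keeps no dirty bit.
class Page:
    def __init__(self):
        self.hw_perm = "read-only"      # current (emulation) permission
        self.true_perm = "read-write"   # what the process is really allowed
        self.modified = False           # the emulated dirty bit

def store(page):
    """Simulate one store instruction; return the number of traps taken."""
    traps = 0
    if page.hw_perm == "read-only":
        traps += 1                      # protection fault into the kernel
        if page.true_perm == "read-write":
            page.modified = True        # kernel emulates the dirty bit
            page.hw_perm = "read-write" # later stores run at full speed
        else:
            raise PermissionError("real protection violation")
    return traps
```

Only the first store to a clean page pays the trap cost; subsequent stores proceed without kernel involvement, which is what makes the technique affordable.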

Emulating modified/use bits with the MIPS software-loaded TLB
* MIPS TLB entries have an extra bit: modified/unmodified
* Trap to the kernel if there is no entry in the TLB, or on a write to an unmodified page
* On a TLB read miss:
  * If the page is clean, load the TLB entry as read-only; if dirty, load it as read-write
  * Mark the page as recently used
* On a TLB write to an unmodified page:
  * The kernel marks the page as modified in its page table
  * Reset the TLB entry to be read-write
  * Mark the page as recently used
* On a TLB write miss:
  * The kernel marks the page as modified in its page table
  * Load the TLB entry as read-write
  * Mark the page as recently used

Models for application file I/O
* Explicit read/write system calls:
  * Data is copied to the user process using a system call
  * The application operates on the data
  * Data is copied back to the kernel using a system call
* Memory-mapped files:
  * Open the file as a memory segment
  * The program uses load/store instructions on segment memory, implicitly operating on the file
  * Page fault if a portion of the file is not yet in memory
  * The kernel brings the missing blocks into memory and restarts the process

Advantages of memory-mapped files
* Programming simplicity, especially for large files
  * Operate directly on the file, instead of copy in/copy out
* Zero-copy I/O
  * Data is brought from disk directly into the page frame
* Pipelining
  * The process can start working before all the pages are populated
* Interprocess communication
  * Shared memory segment vs. temporary file

From memory-mapped files to demand-paged virtual memory
* Every process segment is backed by a file on disk:
  * Code segment -> code portion of the executable
  * Data, heap, stack segments -> temp files
  * Shared libraries -> code file and temp data file
  * Memory-mapped files -> memory-mapped files
* When the process ends, delete the temp files
* Unified memory management across the file buffer and process memory

Cache replacement policy
* On a cache miss, how do we choose which entry to replace?
  * Assuming the new entry is more likely to be used in the near future
  * In direct-mapped caches, this is not an issue!
* Policy goal: reduce cache misses
  * Improve expected-case performance
  * Also: reduce the likelihood of very poor performance

A simple policy
* Random? Replace a random entry
* FIFO? Replace the entry that has been in the cache the longest time
  * What could go wrong?

FIFO in action
* The worst case for FIFO is a program that strides through memory that is larger than the cache

MIN, LRU, LFU
* MIN
  * Replace the cache entry that will not be used for the longest time in the future
  * Optimality proof based on exchange: evicting an entry that will be used sooner triggers an earlier cache miss
* Least Recently Used (LRU)
  * Replace the cache entry that has not been used for the longest time in the past
  * An approximation of MIN
* Least Frequently Used (LFU)
  * Replace the cache entry used least often (in the recent past)
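MIN and LRU are easy to compare by simulation. The following sketch (my own, not from the slides) counts misses for both policies on a reference string; MIN looks ahead to find the entry whose next use is farthest in the future, which is why it is usable only as an offline yardstick:

```python
# Miss counts for LRU and MIN (the optimal policy) on a reference string.
def lru_misses(refs, frames):
    cache, misses = [], 0          # list ordered from LRU to MRU
    for r in refs:
        if r in cache:
            cache.remove(r)        # hit: move to most-recently-used position
        else:
            misses += 1
            if len(cache) == frames:
                cache.pop(0)       # evict the least recently used entry
        cache.append(r)
    return misses

def min_misses(refs, frames):
    cache, misses = set(), 0
    for i, r in enumerate(refs):
        if r in cache:
            continue
        misses += 1
        if len(cache) == frames:
            # Evict the entry whose next use is farthest in the future.
            def next_use(p):
                rest = refs[i + 1:]
                return rest.index(p) if p in rest else float("inf")
            cache.discard(max(cache, key=next_use))
        cache.add(r)
    return misses
```

On the reference string used later in the lecture (A B C D A B E A B C D E, 3 frames), LRU takes 10 misses while MIN takes only 7.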

LRU/MIN for sequential scan
* (Figure) Behavior of LRU and MIN on a repeated sequential scan

More page frames => fewer page faults?
* Consider the following reference string with 3 page frames, FIFO replacement:
  * A, B, C, D, A, B, E, A, B, C, D, E
  * 9 page faults!
* Consider the same reference string with 4 page frames, FIFO replacement:
  * A, B, C, D, A, B, E, A, B, C, D, E
  * 10 page faults!
* This is called Belady's anomaly: under some policies, adding frames can increase the number of faults

Belady's anomaly (cont'd)
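The anomaly on this slide is easy to reproduce. A short FIFO simulation (my own sketch) over the same reference string yields exactly the fault counts quoted above:

```python
# FIFO page replacement; reproduces Belady's anomaly on the slide's
# reference string: with more frames, FIFO takes *more* faults.
def fifo_faults(refs, frames):
    queue, faults = [], 0
    for r in refs:
        if r not in queue:
            faults += 1
            if len(queue) == frames:
                queue.pop(0)          # evict the page resident the longest
            queue.append(r)
    return faults

refs = list("ABCDABEABCDE")
three = fifo_faults(refs, 3)          # 9 faults
four = fifo_faults(refs, 4)           # 10 faults
```

Note that a hit does not reorder the queue: FIFO evicts by arrival time, not by recency, which is the root of the anomaly. Stack policies such as LRU cannot exhibit it.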

Clock algorithm
* Approximates LRU: replace some old page, not necessarily the oldest unreferenced page
* Arrange the physical pages in a circle with a clock hand
* Hardware keeps a use bit per physical page frame
  * Hardware sets the use bit on each reference
  * If the use bit isn't set, the page has not been referenced in a long time
* On a page fault:
  * Advance the clock hand
  * Check the use bit: if 1, clear it and go on; if 0, replace the page

Nth chance: Not Recently Used
* Instead of one bit per page, keep an integer per page
  * notinusesince: number of sweeps since last use
* Periodically sweep through all page frames:

    if (page is used) {
        notinusesince = 0;
    } else if (notinusesince < N) {
        notinusesince++;
    } else {
        reclaim page;
    }
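A minimal working model of the clock algorithm (my own sketch; real kernels interleave this with the page-fault path described earlier) might look like this:

```python
# Sketch of the clock algorithm: a circular scan over page frames,
# clearing use bits until an unreferenced victim is found.
class Clock:
    def __init__(self, nframes):
        self.pages = [None] * nframes   # which page occupies each frame
        self.use = [0] * nframes        # simulated hardware use bit
        self.hand = 0

    def reference(self, page):
        """Simulate one memory reference; return True on a page fault."""
        if page in self.pages:
            self.use[self.pages.index(page)] = 1   # hardware sets use bit
            return False
        # Fault: sweep the hand, giving referenced pages a second chance.
        while self.use[self.hand] == 1:
            self.use[self.hand] = 0                # clear and move on
            self.hand = (self.hand + 1) % len(self.pages)
        self.pages[self.hand] = page               # replace the victim
        self.use[self.hand] = 1
        self.hand = (self.hand + 1) % len(self.pages)
        return True
```

The loop always terminates: after at most one full sweep every use bit has been cleared, so the hand must stop. Nth chance generalizes the single bit into the `notinusesince` counter shown above.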

Implementation note
* Clock and Nth chance can run synchronously:
  * In the page fault handler, run the algorithm to find the next page to evict
  * This might require writing changes back to disk first
* Or asynchronously:
  * Create a thread to maintain a pool of recently unused, clean pages
    * Find recently unused dirty pages and write their modifications back to disk
    * Find recently unused clean pages, mark them invalid, and move them to the pool
  * On a page fault, check whether the requested page is still in the pool!
  * If it is not, take a frame from the pool

Recap
* MIN is optimal
  * Replace the page or cache entry that will be used farthest into the future
* LRU is an approximation of MIN
  * For programs that exhibit spatial and temporal locality
* Clock/Nth chance is an approximation of LRU
  * Bins pages into sets of "not recently used"

How many pages are allocated to each process?
* Each process needs a minimum number of pages
* Example: the IBM 370 needs 6 pages to handle the SS MOVE instruction:
  * The instruction is 6 bytes and might span 2 pages
  * 2 pages to handle the from operand
  * 2 pages to handle the to operand
* Two major allocation schemes:
  * Fixed allocation
  * Priority allocation

Fixed allocation
* Equal allocation
  * Example: with 100 frames and 5 processes, give each process 20 frames
* Proportional allocation: allocate according to the size of the process
  * s_i = size of process p_i
  * S = sum of all s_i
  * m = total number of frames
  * a_i = allocation for p_i = (s_i / S) * m
* Example: m = 64, s_1 = 10, s_2 = 127
  * a_1 = (10 / 137) * 64 ~= 5
  * a_2 = (127 / 137) * 64 ~= 59
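The proportional-allocation formula reduces to one line of code; this sketch reproduces the slide's worked example (m = 64, sizes 10 and 127):

```python
# Proportional allocation: a_i = (s_i / S) * m, rounded to whole frames.
def proportional_allocation(sizes, m):
    S = sum(sizes)
    return [round(s * m / S) for s in sizes]
```

Rounding can make the allocations sum to slightly more or less than m in general; a real allocator would distribute the leftover frames, but the slide's example happens to come out exact.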

Priority allocation
* Use a proportional allocation scheme based on priorities rather than size
* If process P_i generates a page fault:
  * Select for replacement one of its own frames, or
  * Select for replacement a frame from a process with a lower priority

Global vs. local allocation
* Global replacement: a process selects a replacement frame from the set of all frames; one process can take a frame from another
* Local replacement: each process selects only from its own set of allocated frames

What to do when there is not enough memory?
* Thrashing: the processes on the system require more memory than it has
  * Each time one page is brought in, another page, whose contents will soon be referenced, is thrown out
  * Processes spend all of their time blocked, waiting for pages to be fetched from disk
  * I/O devices run at 100% utilization, but the system gets little useful work done
* What we wanted: virtual memory the size of disk with the access time of physical memory
* What we have: memory with an access time equal to that of disk

Thrashing
* Processes frequently reference pages not in memory
  * They spend more time waiting for I/O than getting work done
* Three different causes:
  * A process doesn't reuse memory, so caching doesn't work
  * A process does reuse memory, but its working set does not fit
  * Individually, all processes fit and reuse memory, but there are too many for the system
* Which of these can we actually solve?

Making the best of a bad situation
* Single-process thrashing?
  * If a process does not fit or does not reuse memory, the OS can do nothing except contain the damage
* System thrashing?
  * If thrashing arises from the sum of several processes, then adapt:
    * Figure out how much memory each process needs
    * Change scheduling priorities to run processes in groups whose memory needs can be satisfied (shedding load)
    * If new processes try to start, refuse them (admission control)

Working set model
* Working set: the set of memory locations that need to be cached for a reasonable cache hit rate
* The size of the working set is the important threshold
  * The size may change even during execution of the same program
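One common way to make "working set" operational (a textbook convention, sketched here; the window length `tau` is a tuning parameter I am introducing for illustration) is the set of distinct pages touched in the last tau references:

```python
# Working set at time t: distinct pages referenced in the last tau steps.
def working_set(refs, t, tau):
    start = max(0, t - tau + 1)        # window covers refs[start..t]
    return set(refs[start:t + 1])
```

The size of this set is what the admission-control policy above would compare against available physical memory; as a program changes phases, the set (and its size) shifts with it.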

Cache working set
* (Figure) Hit rate vs. cache size (1 KB to 16 KB): the hit rate climbs steeply, then flattens once the cache holds the working set

Phase change behavior
* (Figure) Hit rate vs. time: the hit rate drops sharply when the program enters a new phase with a different working set, then recovers

Question
* What happens to system performance as we increase the number of processes?
  * What if the sum of the working sets > physical memory?

Zipf distribution
* The caching behavior of many systems is not well characterized by the working set model
* An alternative is the Zipf distribution:
  * Popularity ~ 1/k^c, for the kth most popular item, with 1 < c < 2
  * Frequency is inversely proportional to rank in the frequency table (e.g., word frequency in English natural language)
    * Rank 1: "the", 7% (69,971 out of 1 million words in the Brown Corpus)
    * Rank 2: "of", 3.5% (36,411 out of 1 million)
    * Rank 3: "and", 2.3% (28,852 out of 1 million)
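A quick sketch (mine, not from the slides) makes the shape of a Zipf workload and its caching consequences concrete; with c = 1, each item's popularity is exactly 1/k of the most popular item's:

```python
# Zipf popularity: the kth most popular of n items has weight 1/k**c.
def zipf_shares(n, c=1.0):
    weights = [1 / k**c for k in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]   # normalized request probabilities

def zipf_hit_rate(n, cache_size, c=1.0):
    # Hit rate if the cache holds the cache_size most popular of n items.
    return sum(zipf_shares(n, c)[:cache_size])
```

Because the tail is heavy, `zipf_hit_rate` keeps improving as the cache grows, but each added entry contributes less than the one before it: the diminishing returns shown on the next slide.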

Zipf distribution
* (Figure) Popularity ~ 1/k plotted against rank: a straight line on a log-log scale

Zipf examples
* Web pages
* Movies
* Library books
* Words in text
* Salaries
* City populations
* Common thread: popularity is self-reinforcing

Zipf and caching
* (Figure) Cache hit rate vs. cache size on a log scale, from 0.001% to all items
* Increasing the cache size continues to improve the cache hit rate, but with diminishing returns

Cache lookup: fully associative
* The address is compared against every entry in parallel
* If any entry matches, return the associated value

Cache lookup: direct mapped
* hash(address) selects exactly one entry
* If the address stored at that entry matches, return the value

Cache lookup: set associative
* hash(address) selects one entry per way (a set of candidates)
* The address is compared against each entry in the set; on a match, return that entry's value

Page coloring
* What happens when cache size >> page size?
  * With a direct-mapped or set-associative cache, multiple pages map to the same cache lines
  * OS page assignment matters!
* Example: 8 MB cache, 4 KB pages
  * 1 of every 2K pages lands in the same place in the cache
* What should the OS do? Assign physical pages so that consecutive virtual pages map to different cache regions
  * (Figure) Virtual addresses are spread across cache regions 0, K, 2K, 3K by taking the address mod K