Operating Systems. Memory: replacement policies


Last time
caching speeds up data lookups if the data is likely to be re-requested
data structures for O(1) lookup: set-associative (hardware), hash tables (software)
cache coherence: with 1 cache, buffer writes back to slow memory; with multiple caches, things get complicated
[diagram: data user <-> fast cache <-> slow data source]
today: replacement policy

THE PROBLEM
input: a sequence of requests for objects
which objects should be kept in the cache?
GOAL: minimize the number of cache misses

BASIC ALGORITHMS
clairvoyant, LRU, LFU, FIFO

LFU: frequency(x) = number of requests to x; evict the object with the smallest frequency
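A minimal LFU sketch in Python (the class name and interface are illustrative assumptions, and the linear scan at eviction is for clarity, not speed):

    class LFUCache:
        def __init__(self, k):
            self.k = k          # cache size
            self.cache = set()  # objects currently in cache
            self.freq = {}      # frequency(x) = number of requests to x so far

        def request(self, x):
            self.freq[x] = self.freq.get(x, 0) + 1
            if x in self.cache:
                return True                               # cache hit
            if len(self.cache) == self.k:
                # evict the object with the smallest frequency
                victim = min(self.cache, key=lambda y: self.freq[y])
                self.cache.remove(victim)
            self.cache.add(x)
            return False                                  # cache miss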

Model of virtual memory: the paging problem
fast RAM acts as a cache of size k over a slow disk; k is much smaller than the number of pages on disk
input: request sequence s = p0, p1, p2, p3, ..., pn
output: for each pi that misses, which page to evict from the cache
goal: minimize the number of misses, i.e., the total cost of bringing pages in from disk

LFD is optimal
LFD: longest forward distance; evict the page whose next request is furthest in the future
proposed by Belady; synonyms: OPT, MIN, clairvoyant, Belady's optimal algorithm
proof given in class
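A minimal offline sketch of LFD in Python (illustrative; the quadratic forward scan is for clarity):

    def lfd_misses(requests, k):
        cache, misses = set(), 0
        for i, p in enumerate(requests):
            if p in cache:
                continue
            misses += 1
            if len(cache) == k:
                # evict the cached page whose next request is furthest away
                def next_use(q):
                    for j in range(i + 1, len(requests)):
                        if requests[j] == q:
                            return j
                    return float('inf')  # never requested again: ideal victim
                cache.remove(max(cache, key=next_use))
            cache.add(p)
        return misses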

LFD: proof of optimality
Let ALG be any paging algorithm and s the input sequence, with n = len(s). We construct a sequence of algorithms ALG = ALG0, ALG1, ALG2, ..., ALGn = LFD such that cost(ALG0, s) >= cost(ALG1, s) >= cost(ALG2, s) >= ... >= cost(ALGn, s).
How? By induction: suppose ALGi is determined; construct ALG(i+1) as follows. For the first i-1 requests, ALG(i+1) makes the same eviction decisions as ALGi. On request i, ALG(i+1) evicts the page currently in its cache that will be requested furthest into the future; call this page v. ALGi may or may not evict this same page. If it does, let ALG(i+1) = ALGi for the future too, and we are done. Otherwise ALGi evicts a page u != v. After processing request i, both caches are the same, except that ALGi holds v and ALG(i+1) holds u, where u is requested before v. On future requests, ALG(i+1) mirrors the decisions of ALGi, except that if ALGi evicts v, then ALG(i+1) evicts u. Since u will be requested first, ALGi will incur a miss when u is requested, at which point the two caches may become identical again (ALG(i+1) is ahead by at most one miss). If v is requested in the future, the costs from then on are identical.

Belady's anomaly
increasing the cache size may result in more misses
homework: show that FIFO suffers from this phenomenon, and that LRU doesn't
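The anomaly is easy to reproduce; a hedged simulation sketch on the classic 12-request textbook sequence (function and variable names are my own):

    from collections import deque

    def fifo_misses(requests, cache_size):
        cache, order, misses = set(), deque(), 0
        for page in requests:
            if page in cache:
                continue
            misses += 1
            if len(cache) == cache_size:
                cache.remove(order.popleft())  # evict the oldest arrival
            cache.add(page)
            order.append(page)
        return misses

    s = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
    print(fifo_misses(s, 3))  # 9 misses
    print(fifo_misses(s, 4))  # 10 misses: a larger cache, yet more misses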

Competitive analysis
Which replacement policy is better?
competitive ratio of ALG = max over all input sequences s of misses(ALG, s) / misses(OPT, s)
ALG is said to be k-competitive if its competitive ratio is at most k

LRU is k-competitive
for any input sequence s: misses(LRU, s) <= k * misses(OPT, s)
Proof sketch: divide s into phases, where each phase consists of references to exactly k distinct pages; then LRU misses at most k times per phase, while OPT misses at least once per phase

FIFO is k-competitive
for any input sequence s: misses(FIFO, s) <= k * misses(OPT, s)
Proof: homework

LFU is not competitive
there exists a family of input sequences s1, s2, s3, ... such that lim (i -> infinity) of misses(LFU, si) / misses(OPT, si) = infinity
Proof: let si = p0^(i+1), p1^(i+1), ..., p(k-1)^(i+1), (pk, p(k+1))^i, i.e., each of the first pages is requested i+1 times in a row, and then the pair pk, p(k+1) alternates i times; then misses(LFU, si) = k - 1 + 2i while misses(OPT, si) = k
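Plugging an adversarial sequence of this shape into the sketches above makes the gap visible (hedged; assumes the LFUCache and lfd_misses sketches defined earlier in these notes):

    def lfu_misses(requests, k):
        cache = LFUCache(k)
        return sum(0 if cache.request(x) else 1 for x in requests)

    k, i = 4, 50
    s = [p for p in range(k) for _ in range(i + 1)] + [k, k + 1] * i
    print(lfu_misses(s, k))  # about k + 2i misses: grows linearly with i
    print(lfd_misses(s, k))  # about k misses: OPT keeps pk and p(k+1) resident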

LIFO is not competitive
there exists a family of input sequences s1, s2, s3, ... such that lim (i -> infinity) of misses(LIFO, si) / misses(OPT, si) = infinity
Proof: homework

A timeline of paging analysis
1966: Belady's paper shows that MIN is optimal
1985: competitive analysis, Sleator and Tarjan
1995: 1st analysis of paging with locality of reference, Borodin et al., using access graphs
1996: strong competitiveness, Irani et al.
1999: LRU is better than FIFO
2000: Markov paging, Karlin et al.
2007: relative worst order, Boyar et al.
2009: relative dominance, Dorrigiv et al.
2010: parameterized analysis, Dorrigiv et al.
2012: access graph model under worst order analysis, Boyar et al.
2013: access graph model under relative interval analysis, Boyar et al.
2015: competitiveness in terms of locality parameters, Albers et al.
over the years: alternative performance measures and various frameworks for locality of reference have been proposed

LRU LFU FIFO STRENGTHS AND WEAKNESSES

good if popularity changes quickly unpopular objects stay too long LRU LFU FIFO STRENGTHS AND WEAKNESSES

good if popularity changes quickly unpopular objects stay too long LRU LFU FIFO good if popularity is static suffers from cache pollution over time STRENGTHS AND WEAKNESSES

STRENGTHS AND WEAKNESSES
LRU: good if popularity changes quickly; unpopular objects stay too long
LFU: good if popularity is static; suffers from cache pollution over time
FIFO: good when fragmentation is an issue

FIXING IT
admission control
find a middle ground between LRU and LFU

HYBRID EVICTION ALGORITHMS
2Q, ARC, LRU-K, LRFU: LRU-LFU hybrids

LRU-K: priority = time of the k-th most recent reference; k = 1 gives LRU, k = 2 is the common choice

2Q (aka Segmented LRU): new objects enter a probationary FIFO queue (e.g., 20% of the cache) and are evicted from its tail; an object hit while cached is promoted to a protected LRU queue (e.g., 80% of the cache); variants use more segments (e.g., S4LRU)

ARC: objects enter a recency queue on their first request; on the second hit and more they enter a frequency queue; ghost lists of recently evicted objects are used to dynamically adjust the sizes of the two queues

LRFU: combined recency and frequency, CRF(x) = sum over i of F(t(x, i)), where t(x, i) is the time since the i-th most recent request to x and F is a decaying weight function; tuning F interpolates between LFU and LRU (in practice, closer to LRU)

HYBRID SOLUTIONS TEND TO DO WELL
related LFU variants: W-LFU (windowed LFU), LFU with geometric fading
Ref: Evaluation of Caching Strategies Based on Access Statistics of Past Requests, Hasslinger, Ntougias, MMB & DFT 2014
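A minimal sketch of the 2Q / Segmented LRU idea in Python (the names and the 20/80 split are illustrative assumptions; real 2Q also keeps a ghost queue of recently evicted objects, omitted here):

    from collections import OrderedDict

    class TwoQCache:
        def __init__(self, k, probation_frac=0.2):
            self.fifo_cap = max(1, int(k * probation_frac))  # probationary segment
            self.lru_cap = k - self.fifo_cap                 # protected segment
            self.fifo = OrderedDict()  # new objects enter here (FIFO order)
            self.lru = OrderedDict()   # promoted objects live here (LRU order)

        def request(self, x):
            if x in self.lru:                     # hit in protected segment
                self.lru.move_to_end(x)
                return True
            if x in self.fifo:                    # second hit: promote
                del self.fifo[x]
                if len(self.lru) == self.lru_cap:
                    self.lru.popitem(last=False)  # evict protected LRU victim
                self.lru[x] = None
                return True
            if len(self.fifo) == self.fifo_cap:   # miss: enter probation
                self.fifo.popitem(last=False)     # evict from FIFO tail
            self.fifo[x] = None
            return False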

CACHING POLICIES: LRU, LFU
TOWARDS PRACTICALITY: from exact to approximate algorithms
UNDERSTANDING THE WORKLOAD

LRU: handles bursts in popularity well, but the colder can push out the warmer
LFU: optimal if popularity is static, but suffers from cache pollution
refinements and hybrids: 2nd-hit caching, score-based LRU, SIZE, 2Q, ARC, LRU-K, LRFU

what a caching algorithm boils down to
keep the cached objects in some order (x, e, f, d, t, m, z, a, ...) and evict the first one; equivalently, assign each object a priority (e.g., 5.1, 14, 99.99, 3, 2.8, e, 2000) and evict the object with the smallest priority (here e ≈ 2.718)

caching algorithm template
handle-request(object o):
    if o is in cache:
        update priority(o)
        (optionally: for all p in cache, update priority(p))
    else:
        while there is not enough room in cache for o:
            find and evict object in cache with smallest priority
        bring o in cache
        compute priority(o)
        (optionally: for all p in cache, update priority(p))
for LRU, every step above costs O(1)
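As a concrete instance of the template, a minimal LRU sketch in Python (the class name and interface are illustrative; OrderedDict makes every step O(1)):

    from collections import OrderedDict

    class LRUCache:
        def __init__(self, k):
            self.k = k
            self.cache = OrderedDict()  # front = least recently used

        def request(self, x):
            if x in self.cache:
                self.cache.move_to_end(x)       # update priority(o)
                return True                     # cache hit
            if len(self.cache) == self.k:
                self.cache.popitem(last=False)  # evict smallest priority
            self.cache[x] = None                # bring o in; compute priority(o)
            return False                        # cache miss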

clock: an approximation of LRU
pages a through g sit in a circular buffer, each with a reference bit, and a hand points into the circle
on a hit, set the page's bit to 1: the requests e, d, a, b set those bits
on a miss (the request for h), sweep the hand: clear each 1-bit it passes (here a, then b), and replace the first page found with bit 0 (here c) by the new page h, inserted with its bit set to 0

instantiating the template: LRU vs clock
handle-request(object o):
    if o is in cache:
        update priority(o)  [LRU: move to front, O(1) | clock: set bit = 1]
    else:
        while there is not enough room in cache for o:
            find and evict object in cache with smallest priority  [LRU: remove from tail, O(1) | clock: move hand, clearing 1-bits, until a 0-bit is found]
        bring o in cache
        compute priority(o)  [LRU: place at front, O(1) | clock: place at hand with bit = 0]
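A minimal clock sketch in Python with the same interface as above (illustrative; in real virtual-memory systems the reference bit lives in the page table entry and is set by hardware):

    class ClockCache:
        def __init__(self, k):
            self.slots = [None] * k  # each slot is [page, reference_bit] or None
            self.index = {}          # page -> slot position
            self.hand = 0

        def request(self, p):
            if p in self.index:
                self.slots[self.index[p]][1] = 1      # hit: set the bit
                return True
            while True:                               # miss: sweep the hand
                slot = self.slots[self.hand]
                if slot is None or slot[1] == 0:      # empty or bit 0: replace
                    if slot is not None:
                        del self.index[slot[0]]
                    self.slots[self.hand] = [p, 0]    # place with bit = 0
                    self.index[p] = self.hand
                    self.hand = (self.hand + 1) % len(self.slots)
                    return False
                slot[1] = 0                           # clear bit: second chance
                self.hand = (self.hand + 1) % len(self.slots)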

what about in general?

using a priority queue (heap)
update priority(o): O(log n); find and evict the smallest-priority object: O(log n); compute priority(o) and insert: O(log n), where n = number of objects in the cache
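A sketch of the general template on top of Python's heapq (illustrative; heapq has no decrease-key, so stale heap entries are invalidated lazily at eviction time):

    import heapq
    import itertools

    class HeapCache:
        def __init__(self, k, priority_fn):
            self.k = k
            self.priority_fn = priority_fn  # object -> numeric priority
            self.prio = {}                  # current priority of cached objects
            self.heap = []                  # (priority, object), may hold stale entries

        def request(self, x):
            hit = x in self.prio
            if not hit and len(self.prio) == self.k:
                while True:
                    p, victim = heapq.heappop(self.heap)
                    if self.prio.get(victim) == p:  # skip stale entries
                        del self.prio[victim]
                        break
            self.prio[x] = self.priority_fn(x)      # compute/update priority(o)
            heapq.heappush(self.heap, (self.prio[x], x))
            return hit

    # example: LRU priorities from a global tick counter
    tick = itertools.count()
    lru = HeapCache(3, priority_fn=lambda x: next(tick))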

exact algorithm -> approximate algorithm
LRU -> clock
your favorite algorithm -> a good-enough algorithm, by pre-sorting as much as possible and coarse-graining / bucketing / rounding priorities

UNDERSTANDING THE WORKLOAD
The popularity of objects follows Zipf's law: a few very popular objects, and a heavy tail of unpopular objects. [figure: # objects vs. popularity]
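For experiments, a Zipf-like trace is easy to generate; a hedged sketch (exponent 1; function and variable names are my own):

    import random

    def zipf_trace(num_objects, num_requests):
        objects = list(range(num_objects))
        weights = [1.0 / (rank + 1) for rank in range(num_objects)]  # popularity ~ 1/rank
        return random.choices(objects, weights=weights, k=num_requests)

    trace = zipf_trace(num_objects=1000, num_requests=100_000)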

The object size follows a log-normal distribution. [figures: # objects vs. size, on a linear scale (1K-6K) and a log scale (1 to 10^5)]

FIGURES OF MERIT (origin server <-> edge server <-> client)
reduce load on the origin server: maximize byte hit ratio = sum of sizes of objects that were request hits
reduce delay perceived by the client: maximize hit ratio = number of request hits

different popularity-size correlations will yield different performances. For example:
workload 1: preferring to keep small objects gives a better hit ratio (keeping large objects: worse), while the byte hit ratio is the same either way
workload 2: preferring to keep small objects gives a worse hit ratio and a worse byte hit ratio (keeping large objects: better on both)

Recap
algorithms: LFD, LRU, FIFO, LFU
provable results: LFD is optimal; LRU and FIFO are k-competitive; LFU is not competitive
practice: LRU-LFU hybrids are preferred because workloads tend to exhibit locality of reference
practice: clock approximates LRU with low overhead
practice: the best algorithm will depend on the workload

CACHE REPLACEMENT CONSIDERED LESS IMPORTANT
large caches: decreasing storage cost
reduction of cacheable traffic and of the rate of change (changes to objects over time reduce the value of having a large cache to store them longer)
plenty of good-enough algorithms

Extra

Locality of reference An input sequence of references exhibits locality of reference if a page that is referenced is likely to be referenced again in the near future.

Modeling locality of reference
characteristic vector of an input sequence s: C = (c_0, ..., c_{p-1})
p: number of distinct pages referenced
c_i: number of distance-i requests
distance-i request: a request to a page such that the number of distinct pages requested since the last time this page was requested is i
Ref: Quantifying Competitiveness in Paging with Locality of Reference, Albers, Frascaria, 2015
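A small sketch computing the characteristic vector of a request sequence (illustrative; quadratic time for clarity; first-time requests count toward no c_i):

    def characteristic_vector(s):
        p = len(set(s))  # number of distinct pages referenced
        c = [0] * p
        last_pos = {}
        for i, page in enumerate(s):
            if page in last_pos:
                # distinct pages requested since the last request to this page
                distance = len(set(s[last_pos[page] + 1 : i]))
                c[distance] += 1
            last_pos[page] = i
        return c

    print(characteristic_vector(["a", "b", "a", "c", "b", "a"]))  # [0, 1, 2]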

LRU beats FIFO in the presence of locality of reference
characteristic vector of an input sequence s: C = (c_0, ..., c_{p-1})
What does the characteristic vector of a sequence with high locality of reference look like?
Ref: Quantifying Competitiveness in Paging with Locality of Reference, Albers, Frascaria, 2015