Locality of Reference

Locality of Reference

In view of the previous discussion of secondary storage, it makes sense to design programs so that data is read from and written to disk in relatively large chunks. But there is more.

Temporal locality of reference: in many cases, if a program accesses one part of a file, there is a high probability that the program will access the same part of the file again in the near future. Moral: once you've grabbed a chunk, keep it around.

Spatial locality of reference: in many cases, if a program accesses one part of a file, there is a high probability that the program will access nearby parts of the file in the near future. Moral: grab a larger chunk than you immediately need.

Naïve Record Retrieval from Disk

A program that retrieves records from disk in response to search requests would (naïvely) have interactions like this:

- The executing process receives a search request for a record matching some criterion C.
- It looks up C in an index structure and retrieves the file offset(s) of the matching record(s).
- It sends a disk read request; the disk controller retrieves the matching record(s) from disk and serves up the data.
- The record data is displayed to the user.

Each disk retrieval will delay the execution of the process by hundreds of thousands of clock cycles, or worse.

Not only does this hurt performance when a record is first retrieved; we pay the same time cost if that same record is requested again, since every request repeats the same interaction with the disk. So, if the searches exhibit temporal locality, this adds insult to injury.

Buffer Pools

buffer pool: a series of buffers (memory locations) used by a program to cache disk data.

Basically, the buffer pool is just a collection of records stored in RAM. When a record is requested, the program first checks to see if the record is in the pool. If so, there's no need to go to disk to get the record, and time is saved. When the program does retrieve a record from disk, the newly-read record is copied into the pool, replacing a currently-stored record if necessary.
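The check-the-pool-first logic can be sketched in a few lines of Python. This is a minimal illustration, not the notes' prescribed design; the names (`BufferPool`, `fake_disk`) are mine, and the `fake_disk` dictionary stands in for the real disk.

```python
# A minimal sketch of the buffer-pool idea: a fixed-capacity cache of
# records keyed by file offset.

class BufferPool:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pool = {}                  # offset -> record data, held in RAM
        self.hits = self.misses = 0

    def get(self, offset, read_from_disk):
        # First check the pool: a hit avoids the disk entirely.
        if offset in self.pool:
            self.hits += 1
            return self.pool[offset]
        # Miss: go to disk, then install the record in the pool,
        # evicting some resident record if the pool is full.
        self.misses += 1
        record = read_from_disk(offset)
        if len(self.pool) >= self.capacity:
            # Arbitrary victim choice; choosing wisely is exactly the
            # replacement-strategy problem discussed below.
            victim = next(iter(self.pool))
            del self.pool[victim]
        self.pool[offset] = record
        return record

fake_disk = {1056: "rec-A", 3172: "rec-B", 546: "rec-C"}
pool = BufferPool(capacity=2)
pool.get(1056, fake_disk.get)   # miss: loaded from the "disk"
pool.get(1056, fake_disk.get)   # hit: served from RAM, no disk access
print(pool.hits, pool.misses)   # -> 1 1
```

Note that the victim-selection line is a placeholder; the rest of these notes are about what to put there.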

Retrieving Records using a Buffer Pool

The interaction of the rest of the process with the disk is now mediated by the pool. When we get a hit, we don't go to disk:

- The executing process receives a search request for a record matching some criterion C.
- It looks up C in an index structure and retrieves the file offset(s) of the matching record(s).
- It asks the buffer pool for the record by sending a file offset (in the example, 1056).
- The pool already holds the record at offset 1056 (alongside records at offsets 3172, 5000, and 9803), so it returns the record data without a disk access.
- The record data is displayed to the user.

When we get a miss, the pool goes to disk, updates itself, and serves up the record:

- The process asks the buffer pool for the record at offset 546.
- The record is not in the pool, so the pool sends a disk read request and the disk controller returns the record data.
- The pool installs the record at offset 546 (evicting a resident record if necessary) and passes the record data along.
- The record data is displayed to the user.

Replacement Strategies

The buffer pool must be organized both physically and logically. The physical organization is generally an ordered list of some sort. The logical organization depends upon how the buffer pool deals with the issue of replacement: if a new record must be added to the pool and all the buffers are currently full, one of the current records must be replaced.

If the replaced element has been modified, it (usually) must be written back to disk, or the changes will be lost. Thus, some replacement strategies may include a consideration of which buffer elements have been modified in choosing one to replace.

Some common buffer replacement strategies:

- FIFO (first-in is first-out): organize the buffers as a queue
- LFU (least frequently used): replace the least-accessed buffer
- LRU (least recently used): replace the longest-idle buffer

FIFO Replacement

Logically the buffer pool (here, five buffers) is treated as a queue:

655:  655                    miss
289:  655 289                miss
586:  655 289 586            miss
289:  655 289 586            hit
694:  655 289 586 694        miss
586:  655 289 586 694        hit
655:  655 289 586 694        hit
138:  655 289 586 694 138    miss
289:  655 289 586 694 138    hit
694:  655 289 586 694 138    hit
289:  655 289 586 694 138    hit
694:  655 289 586 694 138    hit
851:  289 586 694 138 851    miss
586:  289 586 694 138 851    hit
330:  586 694 138 851 330    miss
289:  694 138 851 330 289    miss
694:  694 138 851 330 289    hit
331:  138 851 330 289 331    miss
289:  138 851 330 289 331    hit
694:  851 330 289 331 694    miss

Number of accesses: 20
Number of hits: 10
Number of misses: 10
Hit rate: 50.00%

FIFO takes no notice of the access pattern exhibited by the program. Consider what would happen with the sequence: 655 289 655 393 655 127 655 781 ...
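The FIFO trace above can be replayed mechanically. The sketch below (function name `fifo_trace` is mine) reproduces the hit/miss counts for the 20-reference sequence with a pool of five buffers:

```python
from collections import deque

def fifo_trace(refs, capacity):
    """Replay a reference string against a FIFO pool of the given size."""
    queue = deque()               # arrival order; leftmost element is first-in
    resident = set()
    hits = misses = 0
    for r in refs:
        if r in resident:
            hits += 1             # a hit does NOT change the queue order
        else:
            misses += 1
            if len(queue) == capacity:
                resident.discard(queue.popleft())   # evict the first-in element
            queue.append(r)
            resident.add(r)
    return hits, misses

# The 20-reference sequence from the trace above, pool of 5 buffers.
refs = [655, 289, 586, 289, 694, 586, 655, 138, 289, 694,
        289, 694, 851, 586, 330, 289, 694, 331, 289, 694]
print(fifo_trace(refs, 5))   # -> (10, 10): a 50% hit rate
```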

LFU Replacement

For LFU we must maintain an access count for each element of the buffer pool. It is also useful to keep the elements sorted by that count.

655:  (655,1)                                  miss
289:  (655,1) (289,1)                          miss
586:  (655,1) (289,1) (586,1)                  miss
289:  (289,2) (655,1) (586,1)                  hit
694:  (289,2) (655,1) (586,1) (694,1)          miss
586:  (289,2) (586,2) (655,1) (694,1)          hit
655:  (289,2) (586,2) (655,2) (694,1)          hit
138:  (289,2) (586,2) (655,2) (694,1) (138,1)  miss
289:  (289,3) (586,2) (655,2) (694,1) (138,1)  hit
694:  (289,3) (586,2) (655,2) (694,2) (138,1)  hit
289:  (289,4) (586,2) (655,2) (694,2) (138,1)  hit
694:  (289,4) (694,3) (586,2) (655,2) (138,1)  hit
851:  (289,4) (694,3) (586,2) (655,2) (851,1)  miss
586:  (289,4) (694,3) (586,3) (655,2) (851,1)  hit
330:  (289,4) (694,3) (586,3) (655,2) (330,1)  miss
289:  (289,5) (694,3) (586,3) (655,2) (330,1)  hit
694:  (289,5) (694,4) (586,3) (655,2) (330,1)  hit
331:  (289,5) (694,4) (586,3) (655,2) (331,1)  miss
289:  (289,6) (694,4) (586,3) (655,2) (331,1)  hit
694:  (289,6) (694,5) (586,3) (655,2) (331,1)  hit

Number of accesses: 20
Number of hits: 12
Number of misses: 8
Hit rate: 60.00%

Aside from the cost of storing and maintaining the counter values, and of searching for the least value, consider the sequence: 655 (500 times), then 289 (500 times)
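A corresponding LFU simulator is sketched below (function name `lfu_trace` is mine). The notes do not specify how to break ties among elements with equal least counts; this trace happens to have a unique minimum at every eviction, so a simple `min` suffices to reproduce the counts above.

```python
def lfu_trace(refs, capacity):
    """Replay a reference string against an LFU pool of the given size."""
    counts = {}                  # resident element -> access count
    hits = misses = 0
    for r in refs:
        if r in counts:
            hits += 1
            counts[r] += 1
        else:
            misses += 1
            if len(counts) == capacity:
                # Evict an element with the smallest access count.
                victim = min(counts, key=counts.get)
                del counts[victim]
            counts[r] = 1        # a newly loaded element starts at count 1
    return hits, misses

refs = [655, 289, 586, 289, 694, 586, 655, 138, 289, 694,
        289, 694, 851, 586, 330, 289, 694, 331, 289, 694]
print(lfu_trace(refs, 5))   # -> (12, 8): a 60% hit rate
```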

LRU Replacement

With LRU, we may use a simple list structure. On an access, we move the targeted element to the front of the list. That puts the least recently used element at the tail of the list.

655:  655                    miss
289:  289 655                miss
586:  586 289 655            miss
289:  289 586 655            hit
694:  694 289 586 655        miss
586:  586 694 289 655        hit
655:  655 586 694 289        hit
138:  138 655 586 694 289    miss
289:  289 138 655 586 694    hit
694:  694 289 138 655 586    hit
289:  289 694 138 655 586    hit
694:  694 289 138 655 586    hit
851:  851 694 289 138 655    miss
586:  586 851 694 289 138    miss
330:  330 586 851 694 289    miss
289:  289 330 586 851 694    hit
694:  694 289 330 586 851    hit
331:  331 694 289 330 586    miss
289:  289 331 694 330 586    hit
694:  694 289 331 330 586    hit

Number of accesses: 20
Number of hits: 11
Number of misses: 9
Hit rate: 55.00%

Consider what would happen with the sequence: 655 289 655 301 302 303 304 289 ...
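In Python, the move-to-front list maps naturally onto `collections.OrderedDict`, whose `move_to_end` and `popitem(last=False)` give O(1) access-reordering and LRU eviction. A sketch (function name `lru_trace` is mine):

```python
from collections import OrderedDict

def lru_trace(refs, capacity):
    """Replay a reference string against an LRU pool of the given size."""
    pool = OrderedDict()          # most recently used element is last
    hits = misses = 0
    for r in refs:
        if r in pool:
            hits += 1
            pool.move_to_end(r)   # an access moves the element to the MRU end
        else:
            misses += 1
            if len(pool) == capacity:
                pool.popitem(last=False)   # evict the longest-idle element
            pool[r] = None
    return hits, misses

refs = [655, 289, 586, 289, 694, 586, 655, 138, 289, 694,
        289, 694, 851, 586, 330, 289, 694, 331, 289, 694]
print(lru_trace(refs, 5))   # -> (11, 9): a 55% hit rate
```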

Aside: Belady's Anomaly

You would (perhaps) expect that if you increased the number of slots in the pool, then for the same sequence of record references you'd get fewer misses (or at least not get more misses). You may be disappointed, at least if you use FIFO replacement. Consider the reference sequence 1 2 3 4 1 2 5 1 2 3 4 5:

Pool of size 3:
1:  1 X X      miss
2:  1 2 X      miss
3:  1 2 3      miss
4:  2 3 4      miss
1:  3 4 1      miss
2:  4 1 2      miss
5:  1 2 5      miss
1:  1 2 5      hit!
2:  1 2 5      hit!
3:  2 5 3      miss
4:  5 3 4      miss
5:  5 3 4      hit!

Pool of size 4:
1:  1 X X X    miss
2:  1 2 X X    miss
3:  1 2 3 X    miss
4:  1 2 3 4    miss
1:  1 2 3 4    hit!
2:  1 2 3 4    hit!
5:  2 3 4 5    miss
1:  3 4 5 1    miss
2:  4 5 1 2    miss
3:  5 1 2 3    miss
4:  1 2 3 4    miss
5:  2 3 4 5    miss

Nine misses with three slots, but ten misses with four.

L. A. Belady, R. A. Nelson, and G. S. Shedler, "An anomaly in space-time characteristics of certain programs running in a paging machine", CACM, Volume 12, Issue 6, June 1969.
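A short FIFO simulation confirms the anomaly on this sequence (function name `fifo_misses` is mine):

```python
from collections import deque

def fifo_misses(refs, capacity):
    """Count the misses a FIFO pool of the given size incurs on refs."""
    queue, resident, misses = deque(), set(), 0
    for r in refs:
        if r not in resident:
            misses += 1
            if len(queue) == capacity:
                resident.discard(queue.popleft())   # evict the first-in element
            queue.append(r)
            resident.add(r)
    return misses

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_misses(refs, 3))   # -> 9 misses with three slots
print(fifo_misses(refs, 4))   # -> 10 misses with four: more space, more misses
```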

Measuring Performance

The performance of a replacement strategy is commonly measured by its fault rate, i.e., the percentage of requests that require a new element to be loaded into the pool.

Some observations:
- misses will occur unless the pool contains the entire collection of data objects that are needed (the working set)
- which data objects are needed tends to change over time as the program runs, so the working set varies over time
- if the buffer pool is too small, it may be impossible to keep the current working set resident (in the buffer pool)
- if the buffer pool is too large, the program will waste memory

Comparison

None of these replacement strategies, or any other feasible one, is best in all cases. All are used with some frequency. Intuitively, LRU and LFU make more sense than FIFO. The performance you get is determined by the access pattern exhibited by the running program, and that is often impossible to predict.

Belady's optimal replacement strategy: replace the element whose next access lies furthest in the future. This is sometimes stated as "replace the element with the maximal forward distance." It requires knowing the future, and so is impossible to implement, but it does suggest considering predictive strategies.
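Although Belady's policy cannot be implemented online, it can be simulated offline when the whole reference string is known in advance, which makes it a useful yardstick for real policies. A sketch (function name `opt_misses` is mine), run on the reference sequence from the Belady's-anomaly example:

```python
def opt_misses(refs, capacity):
    """Belady's optimal (MIN) policy: on a miss with a full pool, evict the
    resident element whose next use lies furthest ahead (or never occurs)."""
    resident, misses = set(), 0
    for i, r in enumerate(refs):
        if r in resident:
            continue              # a hit changes nothing
        misses += 1
        if len(resident) == capacity:
            def forward_distance(x):
                # Position of x's next use after i; infinite if none.
                for j in range(i + 1, len(refs)):
                    if refs[j] == x:
                        return j
                return float('inf')
            resident.discard(max(resident, key=forward_distance))
        resident.add(r)
    return misses

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(opt_misses(refs, 3))   # -> 7, versus FIFO's 9 on the same sequence
```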

Estimating Forward Distance

Ideal replacement strategy: replace an element whose forward distance is maximal.

QTPs:
- By what logic does FIFO estimate forward distance?
- By what logic does LFU estimate forward distance?
- By what logic does LRU estimate forward distance?

What about Spatial Locality?

A buffer pool can service temporal locality: keep the fetched record around in RAM for a while. How could a buffer pool service spatial locality?
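One possible answer, sketched below, is to make each buffer hold a whole disk block rather than a single record, so that fetching one record also brings its neighbors into RAM. This is an illustration under my own assumptions (`BLOCK_SIZE`, `BlockPool`, `fake_disk_block` are all hypothetical names), not the notes' prescribed answer.

```python
BLOCK_SIZE = 4   # records per block; illustrative value

class BlockPool:
    def __init__(self):
        self.blocks = {}          # block number -> list of records
        self.hits = self.misses = 0

    def get_record(self, record_no, read_block_from_disk):
        block_no = record_no // BLOCK_SIZE
        if block_no not in self.blocks:
            self.misses += 1      # one disk read loads BLOCK_SIZE records
            self.blocks[block_no] = read_block_from_disk(block_no)
        else:
            self.hits += 1
        return self.blocks[block_no][record_no % BLOCK_SIZE]

def fake_disk_block(block_no):
    # Stand-in for the disk: fabricate the records of one block.
    return [f"record-{block_no * BLOCK_SIZE + i}" for i in range(BLOCK_SIZE)]

pool = BlockPool()
pool.get_record(8, fake_disk_block)   # miss: loads records 8..11
pool.get_record(9, fake_disk_block)   # hit: a *different* record, same block
print(pool.hits, pool.misses)         # -> 1 1
```

The second request is a hit even though record 9 was never requested before; that is spatial locality being serviced.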

Buffer Pool Design

There are some general properties a good buffer pool will have:

- the buffer size and number of buffers should be client-configurable
- the buffer pool may deal only in "raw bytes", i.e., not know anything at all about the internals of the data record format used by the client code; OR the buffer pool may deal in interpreted data records, parsed from the file and transformed into objects
- if records are fixed-length, then each buffer should hold an integral number of records; for variable-length records, things are more complex and it is often necessary for buffers to allow some internal fragmentation
- empirically, a program using a buffer pool is considered to be achieving good performance if fewer than 10% of the record references require loading a new record into the buffer pool
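The first two properties can be sketched together: a pool whose buffer (block) size and buffer count are constructor parameters, and which deals only in raw bytes. This is my own skeleton under those stated assumptions (names like `RawBufferPool` are hypothetical), using LRU replacement and reading only within one block for brevity.

```python
import os
from collections import OrderedDict

class RawBufferPool:
    """A client-configurable, raw-bytes buffer pool: block size and buffer
    count are parameters; the pool returns uninterpreted bytes; LRU eviction."""
    def __init__(self, path, block_size=4096, num_buffers=8):
        self.path = path
        self.block_size = block_size
        self.num_buffers = num_buffers
        self.buffers = OrderedDict()     # block number -> bytes, MRU last

    def read(self, offset, length):
        """Return `length` raw bytes starting at `offset` (single block only,
        for brevity; a real pool would stitch adjacent blocks together)."""
        block_no = offset // self.block_size
        if block_no not in self.buffers:
            if len(self.buffers) >= self.num_buffers:
                self.buffers.popitem(last=False)          # evict the LRU block
            with open(self.path, 'rb') as f:
                f.seek(block_no * self.block_size)
                self.buffers[block_no] = f.read(self.block_size)
        else:
            self.buffers.move_to_end(block_no)            # mark as MRU
        start = offset % self.block_size
        return self.buffers[block_no][start:start + length]

# Demo against a small scratch file containing byte values 0..255.
with open('scratch.dat', 'wb') as f:
    f.write(bytes(range(256)))
pool = RawBufferPool('scratch.dat', block_size=64, num_buffers=2)
data = pool.read(70, 4)
print(data)               # -> b'FGHI' (byte values 70..73)
os.remove('scratch.dat')
```

The client sees only bytes; interpreting them as records is left entirely to the caller, matching the "raw bytes" design option above.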