Comparison and Analysis of Various Buffer Cache Management Strategies for Database Management System


Priti M. Tailor (1), Prof. Rustom D. Morena (2)
(1) Assistant Professor, Sutex Bank College of Computer App. & Science, Amroli, Surat, Gujarat, India
(2) Professor, Department of Computer Science, Veer Narmad South Gujarat University, Surat, Gujarat, India
(1) first.author@xyx.com, (2) second.author@xyx.com

Abstract: The focus of this paper is to compare selected database buffer cache management strategies. It presents a comparative study, based on simulation results, of strategies used by various databases, such as LRU, LFU, modified LRU, and the touch count algorithm.

Keywords: Database Buffer Cache Management, Page Replacement Policies, LRU, LFU, Touch Count.

I. INTRODUCTION

Mass data is stored on disks, but data can only be manipulated in the main memory of the computer, and fetching objects from the hard disk is costlier than fetching them from RAM [5]. Therefore, part of the database has to be loaded into main memory for processing and written back to disk afterwards. The database buffer cache is an area of main memory used for caching data and index pages [8]. Its purpose is to reduce disk I/O by keeping frequently used objects memory resident [5]. In their Five Minute Rule, Gray and Putzolu stated, "We are willing to pay more for memory buffers up to a certain point, in order to reduce the cost of disk arms for a system" [1].

Database buffer cache management is key to providing efficient access to data and optimal use of main memory. The problem of buffer management in database management systems is concerned with efficient main-memory allocation and management for answering database queries [11]. Improving database buffer cache management is a major factor in increasing overall performance [5]. The buffer cache reduces read latency and allows distinct write operations to be accumulated; by reducing physical reads and writes, it helps to overcome the speed gap between the processor and storage devices. Good use of the buffer can significantly improve the throughput and response time of any data-intensive system [2].

The critical buffering decision arises when a new buffer slot is needed for a page about to be read in from disk and all current buffers are in use. The question at this juncture is which current page should be dropped from the buffer. This is known as the page replacement policy, and the different buffering algorithms take their names from the type of replacement policy they impose [1]. Cache replacement policies can be categorized broadly into three categories, as specified by [4]:

Recency-Based Policies: These policies evict pages based on recency, i.e. the time of the last reference to the object. Cache replacement algorithms in traditional memory systems deal with uniform-cost, uniform-size objects. LRU is the most widely used cache replacement algorithm.

Frequency-Based Policies: These policies evict pages based on frequency, i.e. the number of times the object has been referenced. The basic frequency-aware replacement algorithm is LFU. Frequency-based strategies exploit a property of request streams whereby the likelihood that an object will appear again depends on how often it has been seen before [12].

Recency/Frequency-Based Policies: These policies evict pages based on both recency and frequency. They may have more implementation overhead.
If this type of strategy is designed properly, the weaknesses of pure recency and pure frequency policies can be avoided. An example of this type of strategy is LRU with the touch count algorithm.

A. LRU: LRU has been applied successfully in many different areas [1]. GemStone and Versant use LRU for replacing objects in the object buffer, and Oracle also used standard LRU, a recency-based algorithm, for database buffer cache replacement; most recency-based algorithms are more or less extensions of the well-known LRU strategy. LRU is simple to implement and fast. Any time a buffer is touched or brought into the cache, it is promoted to the head of the LRU list, and replacement takes place at the tail of the list: when a new object is needed, the object in the buffer that has not been accessed for the longest time, i.e. the object at the tail of the list, is replaced [5]. Insertion and replacement of objects are therefore simple and have very low overhead, and searching can be supported by various hashing techniques. However, the LRU cache can be polluted by arbitrary bursts of accesses to an infrequently accessed data set; for example, a large index scan or a full table scan will fill the cache completely and remove all the popular buffers [5]. After using standard LRU, Oracle shifted to a modified LRU.
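To make the mechanics concrete, the following is a minimal sketch in C (the language later used for the simulators in Section III) of the plain LRU buffer list just described: every touch moves the buffer to the head of a doubly linked list, and the victim is taken from the tail. It is a simplified illustration, not the GemStone, Versant, or Oracle implementation; the names (LruList, lru_reference) are assumptions, and a real buffer manager would also use a hash table for lookups, latching, and dirty-page handling.

```c
/*
 * Minimal LRU buffer list sketch: a touched buffer moves to the head,
 * the victim is always taken from the tail.  Illustrative only; a real
 * DBMS buffer manager adds a hash table for lookups, latching and
 * dirty-page handling.
 */
#include <stdlib.h>

typedef struct Buf {
    int page_id;
    struct Buf *prev, *next;
} Buf;

typedef struct {
    Buf *head, *tail;    /* head = MRU end, tail = LRU end */
    int count, capacity;
} LruList;

static void unlink_buf(LruList *l, Buf *b) {
    if (b->prev) b->prev->next = b->next; else l->head = b->next;
    if (b->next) b->next->prev = b->prev; else l->tail = b->prev;
    b->prev = b->next = NULL;
    l->count--;
}

static void push_head(LruList *l, Buf *b) {
    b->prev = NULL;
    b->next = l->head;
    if (l->head) l->head->prev = b; else l->tail = b;
    l->head = b;
    l->count++;
}

/* Reference a page: promote it on a hit; on a miss, evict the tail if full. */
void lru_reference(LruList *l, int page_id) {
    for (Buf *b = l->head; b != NULL; b = b->next) {
        if (b->page_id == page_id) {   /* hit: move to the head of the list */
            unlink_buf(l, b);
            push_head(l, b);
            return;
        }
    }
    if (l->count == l->capacity) {     /* miss with a full cache: evict the tail */
        Buf *victim = l->tail;
        unlink_buf(l, victim);
        free(victim);
    }
    Buf *nb = malloc(sizeof(Buf));     /* miss: bring the new page in at the head */
    nb->page_id = page_id;
    push_head(l, nb);
}
```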

In the modified LRU, blocks brought into the cache by a single-block read are placed at the head of the LRU list, while blocks brought in by a multi-block read are placed near the tail (the LRU end of the list). The good: a full table scan will not replace all cached buffers. The bad: a large index range scan (which can read many B*-Tree leaf blocks) consists of single-block reads and can therefore still replace all the popular buffers [5]. LRU itself remains a simple algorithm with low overhead. It is likely to perform well for batched references, i.e. when a particular object is referenced many times over a short period and then not at all, and also for random references or for workloads in which some objects are referenced more often than others.

B. LRU with Mid-Point Insertion: This algorithm is similar to the LRU strategy, the difference being mid-point insertion. A mid-pointer is maintained that partitions the list into a hot portion and a cold portion: elements before the mid-point belong to the hot portion, and elements from the mid-point onwards belong to the cold portion. Instead of inserting a new element at the head of the LRU list as in simple LRU, the new element is inserted at the end of the hot portion. The element is promoted towards the hot end if it is referenced frequently, and demoted towards the cold end if it is not referenced for a long period of time. Whenever a new element is referenced that is not in the LRU list and there is no free place in the buffer, an element is removed from the cold end to make space for it [5]. Because of mid-point insertion, the replacement of hot, popular buffers is avoided: a buffer in the hot portion will not be replaced because of a large index range scan [5].

C. LFU: LFU, MFU, etc. are examples of frequency-based algorithms. Frequency-based page replacement algorithms use a reference count: whenever a page is referenced, its reference count is incremented, and the victim is chosen based on the value of this count. LFU replaces the page with the minimum reference count [5]. LFU works well if a small number of objects out of a large set is referenced frequently. However, LFU suffers from counter overflow and from certain pages building up high reference counts and never being replaced, even though they will not be used again for a long time; this forces other blocks, which may actually be used more frequently, to be replaced instead. LFU can also replace pages that have just entered the cache, and therefore still have low reference counts, even though they are going to be referenced in the near future. To protect the cache from this kind of pollution, aging can be used: the reference counter is reset after it reaches a predefined threshold, i.e. a maximum frequency value [5].
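The fragment below sketches this frequency-based behaviour: each cached page carries a reference count, the page with the smallest count is the victim, and an aging step halves all counts once one of them reaches a threshold. The slot layout, the threshold value, and the halving rule are illustrative assumptions rather than the policy of any particular DBMS.

```c
/*
 * LFU sketch with simple aging: each cached page carries a reference
 * count, the page with the smallest count is the victim, and all counts
 * are halved once any of them reaches MAX_COUNT so that formerly hot
 * pages can age out.  MAX_COUNT and the halving rule are assumed values.
 */
#define MAX_COUNT 255          /* aging threshold (assumed value) */

typedef struct {
    int page_id;
    unsigned count;            /* reference count of the cached page */
    int valid;                 /* 0 = empty slot */
} LfuSlot;

typedef struct {
    LfuSlot *slots;
    int capacity;
} LfuCache;

/* Reference a page: returns 1 on a hit, 0 on a miss (page loaded). */
int lfu_reference(LfuCache *c, int page_id) {
    int free_idx = -1, victim = 0;
    for (int i = 0; i < c->capacity; i++) {
        LfuSlot *s = &c->slots[i];
        if (s->valid && s->page_id == page_id) {
            if (++s->count >= MAX_COUNT) {          /* aging step */
                for (int j = 0; j < c->capacity; j++)
                    c->slots[j].count /= 2;
            }
            return 1;
        }
        if (!s->valid)
            free_idx = i;                           /* remember an empty slot */
        else if (!c->slots[victim].valid || s->count < c->slots[victim].count)
            victim = i;                             /* least frequently used so far */
    }
    int idx = (free_idx >= 0) ? free_idx : victim;  /* fill a hole or evict the LFU page */
    c->slots[idx].page_id = page_id;
    c->slots[idx].count = 1;
    c->slots[idx].valid = 1;
    return 0;
}
```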
D. LRU with Touch Count: As indicated in [3], after the modified LRU, LRU with touch count evolved as a good mixture of recency-based and frequency-based algorithms. The major change in this algorithm was mid-point insertion; this change alone stopped the entire cache (actually a single LRU list) from being replaced in almost any situation. The LRU list is divided into two parts, a hot region and a cold region, separated by a mid-pointer, and a new buffer is added at the mid-point. Each buffer has a touch count associated with it to indicate its popularity; the touch count is incremented when the buffer is touched again after a specified time limit (3 seconds by default) [5]. When a buffer replacement is needed, the search starts from the cold region. If the touch count of a buffer is greater than _db_aging_hot_criteria (2 by default), the block is moved to the hot region and its touch count is reset to _db_aging_stay_count (0 by default). In this process, some buffers may move from the hot region to the cold region in order to maintain the hot/cold region ratio; if a buffer is moved from the hot to the cold region, its touch count is set to _db_aging_cool_count (1 by default). Because of all these criteria, it is very difficult for a buffer to remain in the hot region of the LRU list [3]. Most computer systems, in contrast, use a global page replacement policy based on the LRU principle to approximately select a least recently used page for replacement in the entire user memory space [10].
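The sketch below illustrates the scheme just described: a touch is counted only when the previous counted touch is older than the 3-second window, and the victim search walks from the cold (tail) end, promoting any buffer whose touch count exceeds _db_aging_hot_criteria and replacing the first unpopular buffer it meets. It is a simplified reading of the description above, not Oracle's actual implementation; the constants simply mirror the hidden parameters mentioned in the text.

```c
/*
 * Simplified touch-count sketch: new buffers enter at the mid-point of a
 * combined hot+cold list, touches are counted at most once per 3-second
 * window, and the victim search starts at the cold (tail) end.  Not
 * Oracle's implementation; parameter names mirror the text above.
 */
#include <time.h>
#include <stddef.h>

#define HOT_CRITERIA 2   /* _db_aging_hot_criteria (default 2) */
#define STAY_COUNT   0   /* _db_aging_stay_count   (default 0) */
#define COOL_COUNT   1   /* _db_aging_cool_count   (default 1) */
#define TOUCH_WINDOW 3   /* seconds between counted touches    */

typedef struct Buf {
    int page_id;
    int touch_count;
    time_t last_touch;
    struct Buf *prev, *next;   /* position in the combined hot+cold list */
} Buf;

typedef struct {
    Buf *head, *mid, *tail;    /* hot region: head..mid, cold region: after mid */
} TcList;

/* Count a touch only if the last counted touch is older than the window. */
void tc_touch(Buf *b) {
    time_t now = time(NULL);
    if (now - b->last_touch >= TOUCH_WINDOW) {
        b->touch_count++;
        b->last_touch = now;
    }
}

/*
 * Victim search: walk from the cold (tail) end of the list.  A buffer
 * whose touch count exceeds HOT_CRITERIA is promoted (count reset to
 * STAY_COUNT) and the scan continues; the first unpopular buffer found
 * is the victim.  Re-linking the promoted buffer into the hot region,
 * and setting the count of a demoted buffer to COOL_COUNT, are omitted
 * here for brevity.
 */
Buf *tc_find_victim(TcList *l) {
    for (Buf *b = l->tail; b != NULL; b = b->prev) {
        if (b->touch_count > HOT_CRITERIA)
            b->touch_count = STAY_COUNT;   /* popular: promote, keep scanning */
        else
            return b;                      /* unpopular: replace this buffer */
    }
    return NULL;                           /* every buffer qualified as hot */
}
```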

II. LITERATURE SURVEY

Chirag A. Shallahamer introduced the touch-count-based data buffer management algorithm to address the growing size, performance requirements, and complexity of relational database management systems [3]. This algorithm reduced latch contention. His paper details Oracle's touch count algorithm, how to monitor its performance, and how to manage it for optimal performance. The two main parts of the algorithm are mid-point insertion and touch count incrementing [13].

Stefan Podlipnig and Laszlo Boszormenyi gave an exhaustive survey of cache replacement strategies proposed for Web caches in [6]. They concentrated on proposals for proxy caches that manage the cache replacement process at one specific proxy. A simple classification scheme for these replacement strategies was given and used for the description and general critique of the described strategies. Although cache replacement is often considered a solved problem, they showed that there are still numerous areas for interesting research [6].

In [7] the authors discussed the Least Recently/Frequently Used (LRFU) page replacement policy, which subsumes both the LRU and LFU policies. The LRU policy performs block replacement based on the recency of block references, while the LFU policy considers the frequency of block references; these policies inherently assume that the future behaviour of the workload will be dominated by, respectively, the recency or the frequency factors of past behaviour. The LRFU policy associates a value with each block, called the CRF (Combined Recency and Frequency) value, which quantifies the likelihood that the block will be referenced in the near future.

In [8] the authors discuss a spectrum of possible strategies for searching the buffer. Hash techniques on buffer information tables with overflow chaining are recommended as the most efficient implementation alternative for the buffer search function. The authors show the optimization potential of some of the newer algorithms: since they are parameterized, they can be tailored to a specific DBMS and application environment. The basic trade-off is the conceptual simplicity of the old algorithms versus a potential performance improvement with the new ones.

In [9] the authors discussed various replacement strategies to consider when designing a web server, giving a comprehensive study of recency-, frequency-, and recency/frequency-based strategies. They provided several explanations of the results, detailing performance issues of the strategies both individually and in comparison with other strategies, and demonstrated that commonly used methods are generally outperformed by their derivative strategies: by combining web object characteristics, the cache replacement strategies choose better victims in their decision process.

III. EXPERIMENTAL ANALYSIS AND RESULTS

This experimental analysis was done to compare various page replacement algorithms so that a better algorithm can be chosen when designing database buffer cache management. To compare the page replacement policies, simulators were developed in the C language for LRU, LRU with mid-point insertion, LRU with mid-point insertion and touch count, and LFU. To test all four algorithms in the same environment, a page reference string generator was developed to generate page reference strings with specified characteristics. The page reference strings are divided into four groups based on the share of repeated pages: 20%, 30%, 35%, and 40% repeated pages. "20% repeated pages" means that out of 100 references, 20 pages are referenced again, i.e. referenced a second time. Page reference strings were generated using a random number generator to get closer to a real environment. For each group, three different page reference strings were generated for 500, 1000, 1500, 2000, 2500, 3000, 3500, 4000, 4500, 5000, 5500, 6000, 6500, 7000, 7500, 8000, 8500, 9000, 9500 and pages, each time using a different seed for the random number generator.
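The experiments depend on such generated reference strings; the short program below shows one way such a generator could look. Since the paper does not give the exact generation rules of the simulator, the scheme used here (a repeat probability equal to the repeated-page percentage, with repeats drawn uniformly from earlier references) is an assumption made purely for illustration.

```c
/*
 * Sketch of a page reference string generator: with probability
 * repeat_pct a reference repeats a page seen earlier in the string,
 * otherwise a brand-new page id is issued.  An assumed reconstruction,
 * not the authors' actual simulator code.
 */
#include <stdio.h>
#include <stdlib.h>

/* Fill refs[0..n-1]; repeat_pct is the percentage of repeated references. */
void generate_refs(int *refs, int n, int repeat_pct, unsigned seed) {
    srand(seed);
    int next_new_page = 0;
    for (int i = 0; i < n; i++) {
        if (i > 0 && (rand() % 100) < repeat_pct)
            refs[i] = refs[rand() % i];    /* repeat an earlier reference */
        else
            refs[i] = next_new_page++;     /* first-time reference */
    }
}

int main(void) {
    enum { N = 1000 };
    int refs[N];

    generate_refs(refs, N, 20, 42);        /* 20% repeated pages, seed 42 */
    for (int i = 0; i < 10; i++)
        printf("%d ", refs[i]);            /* peek at the first few references */
    printf("\n");
    return 0;
}
```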

The results of the conducted experiments are displayed in table and chart form; each result table reports the average number of page faults over the three generated reference strings for cache sizes of 25%, 30%, 35%, and 40%.

[Table and chart: Page Faults with 20% Repeated Pages]

[Table and chart: Page Faults with 30% Repeated Pages]

[Table and chart: Page Faults with 35% Repeated Pages]

[Table and chart: Page Faults with 40% Repeated Pages]

IV. CONCLUSION

LRU with mid-point insertion and touch count is a very good combination of recency-based and frequency-based policies. In the simulations it performs better than LFU, LRU, and LRU with mid-point insertion alone, while remaining simple to implement and adding little overhead. Among the compared policies, LRU with mid-point insertion and touch count is therefore the optimal option for implementing the page replacement policy of a database buffer cache.

V. REFERENCES

[1] Elizabeth J. O'Neil, Patrick E. O'Neil, Gerhard Weikum, "The LRU-K Page Replacement Algorithm for Database Disk Buffering", Proc. of the 1993 ACM SIGMOD International Conference on Management of Data, Washington D.C., USA, August 1993.
[2] Theodore Johnson, Dennis Shasha, "2Q: A Low Overhead High Performance Buffer Management Replacement Algorithm", Proc. of the 20th VLDB Conference, Santiago, Chile, 1994.
[3] Chirag A. Shallahamer, "All About Oracle's Touch Count Data Block Buffer Cache Algorithm", Version 4a, January 5, OraPub.
[4] Shudong Jin, Azer Bestavros, "GreedyDual* Web Caching Algorithm: Exploiting the Two Sources of Temporal Locality in Web Request Streams", Computer Communications, Vol. 24, Issue 2, February 2001.
[5] Priti Tailor, Dr. R. D. Morena, "A Survey of Database Buffer Cache Management Approaches", International Journal of Advanced Research in Computer Science, Vol. 8, No. 3, March-April 2017.
[6] Stefan Podlipnig, Laszlo Boszormenyi, "A Survey of Web Cache Replacement Strategies", ACM Computing Surveys, Vol. 35, No. 4, December 2003.
[7] Donghee Lee, Jongmoo Choi, Jong Hun Kim, Sam H. Noh, Sang Lyul Min, Yookun Cho, Chong Sang Kim, "On the Existence of a Spectrum of Policies that Subsumes the Least Recently Used (LRU) and Least Frequently Used (LFU) Policies", ACM SIGMETRICS Performance Evaluation Review, Vol. 27, Issue 1, June 1999, New York, NY, USA.
[8] W. Effelsberg, T. Haerder, "Principles of Database Buffer Management", ACM Transactions on Database Systems, Vol. 9, No. 4, December 1984.
[9] S. Ramano, H. ElAarag, "A Quantitative Study of Recency and Frequency Based Web Cache Replacement Strategies", CNS 2008, pp. 70-78.
[10] Song Jiang, Xiaodong Zhang, "Token-ordered LRU: An Effective Page Replacement Policy and Its Implementation in Linux Systems", Performance Evaluation, Vol. 60, Issues 1-4, May 2005.
[11] C. Faloutsos, R. Ng, T. Sellis, "Flexible and Adaptable Buffer Management Techniques for Database Management Systems", IEEE Transactions on Computers, Vol. 44, No. 4, 1995.
[12] B. Davison, "A Web Caching Primer", IEEE Internet Computing, Vol. 5, No. 4.
[13] "Optimal Buffer Management Strategy for Object Oriented Databases", Vol. 4, No. 1, Jul.
