Feng Chen and Xiaodong Zhang Dept. of Computer Science and Engineering The Ohio State University
1 Caching for Bursts (C-Burst): Let Hard Disks Sleep Well and Work Energetically Feng Chen and Xiaodong Zhang Dept. of Computer Science and Engineering The Ohio State University
2 Power Management in Hard Disks
Power management is a requirement in computer system design: it affects maintenance cost, reliability and durability, environmental impact, and battery life. The hard disk drive is a major energy consumer for I/O-intensive jobs; for example, hard disk drives account for 86% of the total energy consumption in EMC Symmetrix 3000 storage systems. As multi-core CPUs become more energy efficient, disks lag behind.
3 Standard Power Management
Dynamic Power Management (DPM): when the disk is idle, spin it down to save energy; when a request arrives, spin it up to service the request. Frequently spinning a disk up and down incurs a substantial penalty in both latency and energy. Disk energy consumption is therefore highly dependent on the pattern of disk accesses (periodically sleep and work).
4 Ideal Access Patterns for Power Saving
Ideal disk power-saving condition: requests to the disk form a periodic and bursty pattern, so the hard disk can sleep well (long idle intervals) and work energetically (accesses clustered in bursts). Increasing the burstiness of disk accesses is the key to saving disk energy.
5 Buffer Caches Affect Disk Access Patterns
Buffer caches in DRAM are part of hard disk service: disk data are cached in main memory (the buffer cache), and hits in the buffer cache are fast and avoid disk accesses. Through caching and prefetching, the buffer cache filters and changes the stream of disk accesses that application requests generate before it reaches the hard disk.
6 Existing Solution for Bursts in Disks
Forming bursty disk accesses with prefetching: predict the data that are likely to be accessed in the future and preload the to-be-used data into memory, directly condensing disk accesses into a sequence of I/O bursts. Both energy efficiency and performance can be improved. Limitations: prefetching and caching share the same buffer space, and an energy-unaware replacement policy can easily destroy the burst patterns created by prefetching; aggressive prefetching also shrinks the available caching space and demands highly effective caching. An energy-aware caching policy can effectively complement prefetching, but no such work has been done. Reference: Papathanasiou and Scott, USENIX '04.
7 Caching Can Easily Affect Access Patterns
An example: prefetching organizes the original disk accesses into bursts, but if energy-unaware caching evicts blocks (say a, b, c) that will be re-accessed in a non-bursty way, those later accesses break the long idle intervals and the disk still cannot sleep well. Relying solely on prefetching is sub-optimal; an energy-aware caching policy is needed to create bursty disk accesses.
8 Caching Policy is Designed for Locality
Standard caching policies identify the data that are unlikely to be accessed in the future and evict the not-to-be-used data from memory (e.g. LRU). They are performance-oriented and energy-unaware: most are locality-based algorithms, e.g. LRU (Clock), LIRS (Clock-Pro), designed to reduce the number of disk accesses, with no consideration of creating bursty disk access patterns. C-Burst (Caching for Bursts): our objectives for effective buffer caching are to create bursty disk accesses for disk energy saving while retaining performance (high hit ratios in the buffer cache).
9 Outline
Motivation. Scheme design: history-based C-Burst (HC-Burst), prediction-based C-Burst (PC-Burst), memory regions, performance loss control. Performance evaluation: programming, multimedia, multi-role server. Conclusion.
10 Restructuring Buffer Caches
The buffer cache is segmented into two regions. Priority region (PR): hot blocks are managed with an LRU-based scheme; blocks with strong locality are protected in the PR, so overwhelming memory misses are avoided and performance is retained. Energy-aware region (EAR): cached blocks are managed with the C-Burst schemes; non-burstily accessed blocks are kept here (our focus in this talk), and a re-accessed block (strong locality) is promoted into the PR. The region sizes are adjusted dynamically so that both performance and energy saving are considered.
11 History-based C-Burst (HC-Burst)
Distinguish different streams of disk accesses: multiple tasks run simultaneously in practice, and history records can help us distinguish them. Different tasks feature very different access patterns: bursty (e.g. grep, cvs) versus non-bursty (e.g. make, mplayer). The accesses reaching the hard disk are a mixture of both bursty and non-bursty accesses, and in aggregate the disk access pattern is determined by the most non-bursty one.
12 Basic Idea of History-based C-Burst (HC-Burst)
Example: grep issues bursty accesses (blocks A-L) separated by long disk idle intervals, while make issues non-bursty accesses (blocks a-g) that leave only short idle periods; the aggregate stream is therefore non-bursty. The idea is to cache the blocks being accessed in a non-bursty pattern (make's blocks), reshaping the disk accesses into a bursty pattern.
13 History-based C-Burst (HC-Burst)
Identifying an application's access pattern: the I/O Context (IOC). An application's access history must be tracked across many runs, since some tasks' lifetimes are very short. Each task is associated with an IOC to track its data access history; IOCs are maintained in a hash table for quick access, and each IOC is identified by an I/O context ID (IOC ID), a hash value of the executable's path or the kernel thread's name.
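The IOC lookup above can be sketched in a few lines; the hashing scheme (MD5 truncated to 32 bits) is an illustrative assumption, not the kernel's actual hash:

```python
import hashlib

def ioc_id(task_name: str) -> int:
    """Derive a stable I/O-context ID from an executable path or a
    kernel-thread name (hypothetical: MD5 truncated to 32 bits)."""
    digest = hashlib.md5(task_name.encode()).digest()
    return int.from_bytes(digest[:4], "little")

# The same task name always maps to the same IOC, across runs,
# so history accumulated in an earlier run is found again:
assert ioc_id("/usr/bin/grep") == ioc_id("/usr/bin/grep")
assert ioc_id("/usr/bin/grep") != ioc_id("/usr/bin/make")
```

Because the ID depends only on the name, even a task that lives for a fraction of a second is matched back to the same IOC entry in the hash table.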
14 History-based C-Burst (HC-Burst)
Epochs: an application's access pattern may change over time, so execution is broken into epochs of T seconds each. Epochs that are too small or too large are both undesirable. Our choice is T = (disk time-out threshold) / 2, so that no disk spin-down happens during an epoch that contains disk accesses, and the distribution of disk accesses within one epoch can be ignored.
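The epoch rule can be illustrated as follows; the function name and the example timeout of 16 seconds are hypothetical:

```python
def epoch_of(timestamp: float, disk_timeout: float) -> int:
    """Map an access time to an epoch index. The epoch length T is half
    the disk spin-down timeout, so an epoch that contains accesses can
    never also contain a spin-down."""
    T = disk_timeout / 2
    return int(timestamp // T)

# With a 16-second spin-down timeout, T = 8 s:
assert epoch_of(7.9, 16) == 0   # same epoch, distribution inside ignored
assert epoch_of(8.0, 16) == 1   # next epoch
```

Two accesses that land in the same epoch are treated identically, which is exactly the "distribution within an epoch can be ignored" simplification above.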
15 History-based C-Burst (HC-Burst)
Block groups: blocks accessed during one epoch by the same IOC are grouped into a block group, identified by an IOC ID and an epoch time. The size of a block group indicates the burstiness of one application's data access pattern: the larger a block group is, the more bursty the disk accesses are.
16 History-based C-Burst (HC-Burst)
HC-Burst replacement policy: two types of blocks should be evicted: data blocks that are unlikely to be re-accessed (blocks with weak locality, e.g. LRU blocks), and data blocks that can be re-accessed with little energy (blocks accessed in a bursty pattern). The victim block group is the largest block group. Since frequently accessed blocks are promoted into the PR, a large block group mostly holds infrequently accessed blocks; and since blocks accessed in a bursty pattern stay together in a large group, a large block group holds blocks that can be re-fetched in bursts.
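A minimal sketch of this eviction rule, assuming block groups are kept as a map from (IOC ID, epoch) to the set of cached block numbers (the structure is hypothetical):

```python
def pick_victim(block_groups):
    """HC-Burst victim choice, naive form: evict the largest block
    group, i.e. the blocks that were fetched most burstily and can be
    re-fetched with the fewest disk wake-ups."""
    return max(block_groups, key=lambda g: len(block_groups[g]))

# grep's epoch-3 group is largest, so its blocks go first; make's
# small (non-bursty) group stays cached, preserving long idle periods.
groups = {("grep", 3): {1, 2, 3, 4, 5},
          ("make", 3): {10},
          ("vim", 4): {20, 21}}
assert pick_victim(groups) == ("grep", 3)
```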
17 History-based C-Burst (HC-Burst)
Multi-level queues of block groups: there are 32 levels of queues, and a block group of N blocks stays in the queue of the level corresponding to its size, so higher levels hold burstier groups. Block groups on one queue are linked in the order of their epoch times, and a block group moves up or down when its number of blocks changes (as blocks are promoted to the PR or demoted to the EAR). The victim block group is always the LRU block group on the topmost queue that holds valid block groups, i.e. the least recently used among the most bursty.
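One way to realize the 32-level structure; mapping a group of N blocks to level floor(log2 N) is an assumption (it is consistent with 32 levels covering 32-bit group sizes) and is not confirmed by the slides:

```python
from collections import OrderedDict

class MultiLevelQueues:
    """Sketch of HC-Burst's queue structure: higher levels hold larger
    (burstier) block groups; within a level, groups keep epoch order."""
    LEVELS = 32

    def __init__(self):
        self.queues = [OrderedDict() for _ in range(self.LEVELS)]

    @staticmethod
    def level_of(n_blocks: int) -> int:
        # Assumed placement rule: level = floor(log2(N)), capped at 31.
        return min(n_blocks.bit_length() - 1, MultiLevelQueues.LEVELS - 1)

    def insert(self, group_id, n_blocks: int):
        self.queues[self.level_of(n_blocks)][group_id] = n_blocks

    def victim(self):
        # LRU group on the topmost non-empty (most bursty) level.
        for q in reversed(self.queues):
            if q:
                return next(iter(q))
        return None

mlq = MultiLevelQueues()
mlq.insert(("make", 8), 1)     # tiny group: level 0
mlq.insert(("grep", 8), 100)   # level 6, older epoch
mlq.insert(("grep", 9), 90)    # level 6, newer epoch
assert mlq.victim() == ("grep", 8)   # LRU group on the top level
```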
18 Prediction-based C-Burst (PC-Burst)
Main idea: certain disk access events are known and can be predicted. Given deterministic disk accesses and predicted block re-access times, PC-Burst selectively evicts blocks that will be re-accessed within a short idle interval close to a deterministic disk access, and holds blocks that would otherwise be accessed in a long idle interval: holding such a block avoids breaking the long interval.
19 Prediction-based C-Burst (PC-Burst)
Prediction of deterministic disk accesses: many well-predictable periodic disk accesses exist in systems, e.g. timer-controlled OS events (such as pdflush) and multimedia workloads with steady consumption rates. In practice, constant disk accesses may fluctuate occasionally: system dynamics can perturb disk I/O, and real I/O-pattern changes can also happen over time. The challenge is to accommodate occasional system dynamics while responding quickly to real access-pattern shifts.
20 Prediction-based C-Burst (PC-Burst)
Prediction of deterministic disk accesses: track each task's access history, and give each task credits in the range [-32, 32]. Feedback-based prediction compares the observed interval with the predicted interval: a correct prediction increases the task's credits, a wrong prediction reduces them, and a task with credits below 0 is deemed unpredictable. Repeated mis-predictions are charged against credits exponentially, so occasional dynamics charge a task's credits only slightly, while a real pattern change quickly drains them.
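The credit scheme might look like the following; only the [-32, 32] range and the exponential charging come from the slides, while the exact schedule (charge doubling per consecutive miss, capped) is an assumption:

```python
def update_credits(credits: int, correct: bool, miss_streak: int):
    """Feedback update for one prediction. Returns (credits, streak).
    A correct prediction earns one credit and resets the streak; each
    consecutive miss doubles the charge, so a genuine pattern shift
    drains credits fast while an isolated glitch costs little."""
    if correct:
        return min(credits + 1, 32), 0
    miss_streak += 1
    charge = 2 ** min(miss_streak, 5)        # 2, 4, 8, 16, 32, 32, ...
    return max(credits - charge, -32), miss_streak

c, s = 32, 0
c, s = update_credits(c, False, s)   # one glitch: small charge
assert c == 30
for _ in range(4):                   # sustained mis-prediction
    c, s = update_credits(c, False, s)
assert c < 0                         # task now deemed unpredictable
```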
21 Prediction-based C-Burst (PC-Burst)
Prediction of block re-access time, using an efficient data structure: a block table, indexed by logical block number (LBN), maintains 4 epoch times for every block, including non-resident blocks. The prediction is conservative: only blocks with constant access intervals are believed predictable.
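A conservative predictor over the per-block epoch history could be sketched as follows; the function name and the tolerance parameter are assumptions:

```python
def predict_next(epochs, tolerance=1):
    """Given the last few epoch times a block was accessed (the block
    table keeps 4), predict its next access epoch only if the access
    intervals are (nearly) constant; otherwise refuse to predict."""
    gaps = [b - a for a, b in zip(epochs, epochs[1:])]
    if max(gaps) - min(gaps) <= tolerance:
        return epochs[-1] + round(sum(gaps) / len(gaps))
    return None   # irregular history: treat the block as unpredictable

assert predict_next([3, 7, 11, 15]) == 19   # steady 4-epoch period
assert predict_next([3, 4, 11, 30]) is None  # irregular: no prediction
```

Refusing to predict irregular blocks is what makes the scheme conservative: a wrong re-access prediction could evict a block that then breaks a long idle interval.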
22 Prediction-based C-Burst (PC-Burst)
Multi-level queues of block groups in PC-Burst: again 32 levels, but each level has two queues. A prediction block group (PBG) holds blocks predicted to be accessed in the same future epoch; a history block group (HBG) holds blocks that were accessed in the same past epoch. Reference points (RPs) are recent deterministic disk accesses. The victim block group is the PBG on the top level that lies in the shortest idle interval, closest to an RP, and is to be accessed in the furthest future; if no PBG is found, the HBG queue on the same level is searched.
23 Performance Loss Control
Why is there performance loss? The energy-oriented caching policy increases memory misses. The basic rule for controlling it is to adjust the size of the energy-aware region: if the performance loss exceeds the tolerable rate, shrink the region; if it stays below the tolerable rate, enlarge it. Estimating the performance loss: a ghost buffer (GB) managed by the LRU replacement policy tracks recently evicted blocks; the increase in memory misses M is the number of misses not found in the EAR but found in the ghost buffer, and the average memory-miss penalty P is the observed average I/O latency. The estimated performance loss is L = M x P, and the EAR size is tuned automatically: enlarge it while L is below the tolerable loss, shrink it when L exceeds the tolerable loss.
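The control loop reduces to a few lines; the step size and the call signature below are hypothetical:

```python
def tune_ear(ear_size, ghost_hits, avg_miss_penalty, tolerable_loss,
             step=64):
    """Region-sizing sketch: estimated loss L = M x P, where M is the
    number of misses that would have been hits under plain LRU (found
    in the ghost buffer) and P is the observed average miss penalty.
    Shrink the energy-aware region when L exceeds the tolerable loss,
    enlarge it otherwise."""
    loss = ghost_hits * avg_miss_penalty
    if loss > tolerable_loss:
        return max(ear_size - step, 0)   # give pages back to the PR
    return ear_size + step               # safe to be more aggressive

# Too many ghost-buffer hits (L = 50 * 8 = 400 ms > 100 ms): shrink.
assert tune_ear(1024, ghost_hits=50, avg_miss_penalty=8.0,
                tolerable_loss=100.0) == 960
# Loss within budget (L = 40 ms): enlarge the energy-aware region.
assert tune_ear(1024, ghost_hits=5, avg_miss_penalty=8.0,
                tolerable_loss=100.0) == 1088
```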
24 Performance Evaluation
Implementation: in the Linux kernel, about 1,500 lines of code in buffer cache management and the generic block layer. Experimental setup: Intel Pentium 4 3.0 GHz, 1024 MB memory, Western Digital 160 GB 7200 RPM hard disk drive, RedHat Linux WS4, ext3 file system.
25 Performance Evaluation
Methodology: workloads run on the experiment machine, where disk activities are collected in the kernel; disk events are sent via netconsole over a gigabit LAN to a monitoring machine, and disk energy consumption is calculated offline from the collected log of disk events using disk power models.
26 Performance Evaluation
Emulated disk models: a Hitachi DK23DA laptop disk and an IBM UltraStar 36Z15 SCSI disk.

                 Hitachi DK23DA     IBM UltraStar 36Z15
Capacity         30 GB              18.4 GB
Cache            2 MB               4 MB
Bandwidth        35 MB/sec          53 MB/sec
Active power     2 W                13.5 W
Idle power       1.6 W              10.2 W
Standby power    0.15 W             2.5 W
Spin-up          1.6 sec / 5 J      10.9 sec / 135 J
Spin-down        2.3 sec / 2.94 J   1.5 sec / 13 J
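These parameters determine how long an idle interval must be before spinning down saves energy. The following break-even computation is a standard DPM derivation applied to the table's numbers, not something stated on the slide:

```python
def breakeven_seconds(idle_w, standby_w, spinup_s, spinup_j,
                      spindown_s, spindown_j):
    """Minimum idle interval for which a spin-down pays off: staying
    idle costs idle_w * t; spinning down costs the transition energy
    plus standby power for the remainder of the interval. Solve
    idle_w * t = (spinup_j + spindown_j) + standby_w * (t - t_trans)."""
    transition_j = spinup_j + spindown_j
    transition_s = spinup_s + spindown_s
    return (transition_j - standby_w * transition_s) / (idle_w - standby_w)

# Hitachi DK23DA laptop disk, using the numbers from the table above:
t = breakeven_seconds(idle_w=1.6, standby_w=0.15,
                      spinup_s=1.6, spinup_j=5.0,
                      spindown_s=2.3, spindown_j=2.94)
assert 5 < t < 6   # idle intervals of a few seconds already pay off
```

This is why the evaluation reports the fraction of idle intervals longer than a threshold: only intervals beyond the break-even point (plus the spin-down timeout) actually save energy.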
27 Performance Evaluation
Eight applications: 3 with bursty data accesses and 5 with non-bursty data accesses, used in three case studies (programming, multimedia processing, multi-role servers).

Name        Description
Make        Linux kernel builder
Vim         Text editor
Mpg123      MP3 player
Transcode   Video converter
TPC-H       Database query
Grep        Textual search tool
Scp         Remote copy tool
CVS         Version control tool
29 Performance Evaluation
Case study I: programming. Applications: grep, make, and vim; grep is a bursty workload, make and vim are non-bursty. The C-Burst schemes protect the data set of make from being evicted by grep, so disk idle intervals are effectively extended: the fraction of intervals longer than 16 seconds grows from nearly 0% to over 50%, yielding over 30% energy saving.
30 Performance Evaluation
Case study II: multimedia processing. Applications: transcode, mpg123, and scp; mpg123's disk accesses serve as deterministic accesses, scp issues bursty disk accesses, and transcode non-bursty ones. PC-Burst achieves better performance than HC-Burst because it can accommodate deeper prefetching by using the caching space efficiently: around 30% energy saving, with over 70% of idle intervals longer than 16 seconds.
31 Performance Evaluation
Case study III: multi-role server. Applications: TPC-H query #17 (non-bursty, very random I/O) and CVS (bursty disk accesses). The data set of TPC-H is protected in memory, so its performance is significantly improved by the reduced I/O latency; there is no improvement in disk idle intervals, yet over 35% of the energy is saved.
32 Our Contributions
We design a set of comprehensive energy-aware caching policies, called C-Burst, which leverage the filtering effect of the buffer cache to manipulate disk accesses. Our scheme does not rely on complicated disk power models and requires no disk specification data, giving it high compatibility with different hardware. It does not assume any specific disk hardware, such as multi-speed disks, so it benefits existing hardware. It provides flexible performance guarantees to avoid unacceptable performance degradation. It is fully implemented in the Linux kernel, and experiments under realistic scenarios show up to 35% energy saving with minimal performance loss.
33 Conclusion
Energy efficiency is a critical issue in computer system design, and increasing disk access burstiness is the key to disk energy conservation. Leveraging the filtering effect of the buffer cache can effectively shape disk accesses into a desired pattern: the HC-Burst scheme distinguishes the different access patterns of tasks and creates a bursty stream of disk accesses, and the PC-Burst scheme further predicts blocks' re-access times and manipulates the timing of future disk accesses. Our implementation of the C-Burst schemes in the Linux kernel and our experiments show up to 35% energy saving with minimal performance loss.
35 References
[USENIX04] A. E. Papathanasiou and M. L. Scott. Energy efficient prefetching and caching. In Proc. of USENIX '04.
[EMC99] EMC Symmetrix 3000 and 5000 enterprise storage systems product description guide.
36 Memory Regions
The buffer cache is segmented into two regions. Priority region (PR): hot blocks are managed with an LRU-based scheme; blocks with strong locality are protected, so overwhelming memory misses are avoided. Energy-aware region (EAR): cold blocks are managed with the C-Burst schemes; new blocks are inserted into the EAR, and victim blocks are always reclaimed from it. A re-accessed cold block is promoted into the PR, and accordingly the coldest block in the PR is demoted to the EAR. The region sizes are tuned online so that both performance and energy saving are considered.
37 Motivation
Limitations: an energy-unaware caching policy can significantly disturb the periodic bursty patterns created by prefetching, since improperly evicting a block can easily break a long disk idle interval. Aggressive prefetching shrinks the available caching space and demands highly effective caching: effective prefetching needs a large volume of memory, which raises memory contention with caching. An energy-aware caching policy can effectively complement prefetching: when prefetching works unsatisfactorily, caching can help by carefully selecting blocks for eviction.
38 Motivation
Prefetching: predict the data that are likely to be accessed in the future and preload the to-be-used data into memory, directly condensing disk accesses into a sequence of I/O bursts; both energy efficiency and performance can be improved. Caching: identify the data that are unlikely to be accessed in the future and evict the not-to-be-used data from memory. Traditional caching policies are performance-oriented, designed to reduce the number of disk accesses, with no consideration of creating bursty disk access patterns.
39 Prediction-based C-Burst (PC-Burst)
Prediction of deterministic disk accesses: track each task's access history and give each task credits in [-32, 32]. Feedback-based prediction compares the observed interval with the predicted interval: if a prediction proves wrong, reduce the task's credits; if it proves right, increase them; a task with credits below 0 is unpredictable. Repeated mis-predictions charge credits exponentially, so occasional system dynamics charge a task's credits only slightly, while a real pattern change quickly drains them.
More informationStaged Memory Scheduling
Staged Memory Scheduling Rachata Ausavarungnirun, Kevin Chang, Lavanya Subramanian, Gabriel H. Loh*, Onur Mutlu Carnegie Mellon University, *AMD Research June 12 th 2012 Executive Summary Observation:
More informationA Row Buffer Locality-Aware Caching Policy for Hybrid Memories. HanBin Yoon Justin Meza Rachata Ausavarungnirun Rachael Harding Onur Mutlu
A Row Buffer Locality-Aware Caching Policy for Hybrid Memories HanBin Yoon Justin Meza Rachata Ausavarungnirun Rachael Harding Onur Mutlu Overview Emerging memories such as PCM offer higher density than
More informationPowernightmares: The Challenge of Efficiently Using Sleep States on Multi-Core Systems
Powernightmares: The Challenge of Efficiently Using Sleep States on Multi-Core Systems Thomas Ilsche, Marcus Hähnel, Robert Schöne, Mario Bielert, and Daniel Hackenberg Technische Universität Dresden Observation
More informationEI338: Computer Systems and Engineering (Computer Architecture & Operating Systems)
EI338: Computer Systems and Engineering (Computer Architecture & Operating Systems) Chentao Wu 吴晨涛 Associate Professor Dept. of Computer Science and Engineering Shanghai Jiao Tong University SEIEE Building
More informationPower Measurement Using Performance Counters
Power Measurement Using Performance Counters October 2016 1 Introduction CPU s are based on complementary metal oxide semiconductor technology (CMOS). CMOS technology theoretically only dissipates power
More informationChapter 8: Memory- Management Strategies. Operating System Concepts 9 th Edition
Chapter 8: Memory- Management Strategies Operating System Concepts 9 th Edition Silberschatz, Galvin and Gagne 2013 Chapter 8: Memory Management Strategies Background Swapping Contiguous Memory Allocation
More informationMULTIPROCESSORS AND THREAD-LEVEL. B649 Parallel Architectures and Programming
MULTIPROCESSORS AND THREAD-LEVEL PARALLELISM B649 Parallel Architectures and Programming Motivation behind Multiprocessors Limitations of ILP (as already discussed) Growing interest in servers and server-performance
More informationMULTIPROCESSORS AND THREAD-LEVEL PARALLELISM. B649 Parallel Architectures and Programming
MULTIPROCESSORS AND THREAD-LEVEL PARALLELISM B649 Parallel Architectures and Programming Motivation behind Multiprocessors Limitations of ILP (as already discussed) Growing interest in servers and server-performance
More informationA Prefetching Scheme Exploiting both Data Layout and Access History on Disk
A Prefetching Scheme Exploiting both Data Layout and Access History on Disk SONG JIANG, Wayne State University XIAONING DING, New Jersey Institute of Technology YUEHAI XU, Wayne State University KEI DAVIS,
More informationA Memory Management Scheme for Hybrid Memory Architecture in Mission Critical Computers
A Memory Management Scheme for Hybrid Memory Architecture in Mission Critical Computers Soohyun Yang and Yeonseung Ryu Department of Computer Engineering, Myongji University Yongin, Gyeonggi-do, Korea
More informationBig and Fast. Anti-Caching in OLTP Systems. Justin DeBrabant
Big and Fast Anti-Caching in OLTP Systems Justin DeBrabant Online Transaction Processing transaction-oriented small footprint write-intensive 2 A bit of history 3 OLTP Through the Years relational model
More informationI/O Systems (4): Power Management. CSE 2431: Introduction to Operating Systems
I/O Systems (4): Power Management CSE 2431: Introduction to Operating Systems 1 Outline Overview Hardware Issues OS Issues Application Issues 2 Why Power Management? Desktop PCs Battery-powered Computers
More informationI.-C. Lin, Assistant Professor. Textbook: Operating System Concepts 8ed CHAPTER 8: MEMORY
I.-C. Lin, Assistant Professor. Textbook: Operating System Concepts 8ed CHAPTER 8: MEMORY MANAGEMENT Chapter 8: Memory Management Background Swapping Contiguous Memory Allocation Paging Structure of the
More informationEnlightening the I/O Path: A Holistic Approach for Application Performance
Enlightening the I/O Path: A Holistic Approach for Application Performance Sangwook Kim 13, Hwanju Kim 2, Joonwon Lee 3, and Jinkyu Jeong 3 Apposha 1 Dell EMC 2 Sungkyunkwan University 3 Data-Intensive
More informationVirtual to physical address translation
Virtual to physical address translation Virtual memory with paging Page table per process Page table entry includes present bit frame number modify bit flags for protection and sharing. Page tables can
More informationProgram Counter Based Pattern Classification in Pattern Based Buffer Caching
Purdue University Purdue e-pubs ECE Technical Reports Electrical and Computer Engineering 1-12-2004 Program Counter Based Pattern Classification in Pattern Based Buffer Caching Chris Gniady Y. Charlie
More informationMemory Hierarchy Computing Systems & Performance MSc Informatics Eng. Memory Hierarchy (most slides are borrowed)
Computing Systems & Performance Memory Hierarchy MSc Informatics Eng. 2011/12 A.J.Proença Memory Hierarchy (most slides are borrowed) AJProença, Computer Systems & Performance, MEI, UMinho, 2011/12 1 2
More informationSwapping. Jin-Soo Kim Computer Systems Laboratory Sungkyunkwan University
Swapping Jin-Soo Kim (jinsookim@skku.edu) Computer Systems Laboratory Sungkyunkwan University http://csl.skku.edu Swapping Support processes when not enough physical memory User program should be independent
More informationPage Replacement for Write References in NAND Flash Based Virtual Memory Systems
Regular Paper Journal of Computing Science and Engineering, Vol. 8, No. 3, September 2014, pp. 1-16 Page Replacement for Write References in NAND Flash Based Virtual Memory Systems Hyejeong Lee and Hyokyung
More informationUsing Synology SSD Technology to Enhance System Performance Synology Inc.
Using Synology SSD Technology to Enhance System Performance Synology Inc. Synology_WP_ 20121112 Table of Contents Chapter 1: Enterprise Challenges and SSD Cache as Solution Enterprise Challenges... 3 SSD
More informationRef: Chap 12. Secondary Storage and I/O Systems. Applied Operating System Concepts 12.1
Ref: Chap 12 Secondary Storage and I/O Systems Applied Operating System Concepts 12.1 Part 1 - Secondary Storage Secondary storage typically: is anything that is outside of primary memory does not permit
More informationUnderstanding The Effects of Wrong-path Memory References on Processor Performance
Understanding The Effects of Wrong-path Memory References on Processor Performance Onur Mutlu Hyesoon Kim David N. Armstrong Yale N. Patt The University of Texas at Austin 2 Motivation Processors spend
More informationMemory Hierarchy Computing Systems & Performance MSc Informatics Eng. Memory Hierarchy (most slides are borrowed)
Computing Systems & Performance Memory Hierarchy MSc Informatics Eng. 2012/13 A.J.Proença Memory Hierarchy (most slides are borrowed) AJProença, Computer Systems & Performance, MEI, UMinho, 2012/13 1 2
More informationSwapping. Jinkyu Jeong Computer Systems Laboratory Sungkyunkwan University
Swapping Jinkyu Jeong (jinkyu@skku.edu) Computer Systems Laboratory Sungkyunkwan University http://csl.skku.edu EEE0: Introduction to Operating Systems, Fall 07, Jinkyu Jeong (jinkyu@skku.edu) Swapping
More information6. Results. This section describes the performance that was achieved using the RAMA file system.
6. Results This section describes the performance that was achieved using the RAMA file system. The resulting numbers represent actual file data bytes transferred to/from server disks per second, excluding
More informationLAST: Locality-Aware Sector Translation for NAND Flash Memory-Based Storage Systems
: Locality-Aware Sector Translation for NAND Flash Memory-Based Storage Systems Sungjin Lee, Dongkun Shin, Young-Jin Kim and Jihong Kim School of Information and Communication Engineering, Sungkyunkwan
More informationEnergy Management Issue in Ad Hoc Networks
Wireless Ad Hoc and Sensor Networks - Energy Management Outline Energy Management Issue in ad hoc networks WS 2010/2011 Main Reasons for Energy Management in ad hoc networks Classification of Energy Management
More informationOperating Systems. Operating Systems Sina Meraji U of T
Operating Systems Operating Systems Sina Meraji U of T Recap Last time we looked at memory management techniques Fixed partitioning Dynamic partitioning Paging Example Address Translation Suppose addresses
More informationChapter 6: CPU Scheduling. Operating System Concepts 9 th Edition
Chapter 6: CPU Scheduling Silberschatz, Galvin and Gagne 2013 Chapter 6: CPU Scheduling Basic Concepts Scheduling Criteria Scheduling Algorithms Thread Scheduling Multiple-Processor Scheduling Real-Time
More informationData Processing on Modern Hardware
Data Processing on Modern Hardware Jens Teubner, TU Dortmund, DBIS Group jens.teubner@cs.tu-dortmund.de Summer 2014 c Jens Teubner Data Processing on Modern Hardware Summer 2014 1 Part V Execution on Multiple
More informationChapter 8: Memory Management Strategies
Chapter 8: Memory- Management Strategies, Silberschatz, Galvin and Gagne 2009 Chapter 8: Memory Management Strategies Background Swapping Contiguous Memory Allocation Paging Structure of the Page Table
More informationLSbM-tree: Re-enabling Buffer Caching in Data Management for Mixed Reads and Writes
27 IEEE 37th International Conference on Distributed Computing Systems LSbM-tree: Re-enabling Buffer Caching in Data Management for Mixed Reads and Writes Dejun Teng, Lei Guo, Rubao Lee, Feng Chen, Siyuan
More informationSecond-Tier Cache Management Using Write Hints
Second-Tier Cache Management Using Write Hints Xuhui Li University of Waterloo Aamer Sachedina IBM Toronto Lab Ashraf Aboulnaga University of Waterloo Shaobo Gao University of Waterloo Kenneth Salem University
More informationCOL862 Programming Assignment-1
Submitted By: Rajesh Kedia (214CSZ8383) COL862 Programming Assignment-1 Objective: Understand the power and energy behavior of various benchmarks on different types of x86 based systems. We explore a laptop,
More informationCPU Scheduling. The scheduling problem: When do we make decision? - Have K jobs ready to run - Have N 1 CPUs - Which jobs to assign to which CPU(s)
1/32 CPU Scheduling The scheduling problem: - Have K jobs ready to run - Have N 1 CPUs - Which jobs to assign to which CPU(s) When do we make decision? 2/32 CPU Scheduling Scheduling decisions may take
More informationAn Oracle White Paper September Oracle Utilities Meter Data Management Demonstrates Extreme Performance on Oracle Exadata/Exalogic
An Oracle White Paper September 2011 Oracle Utilities Meter Data Management 2.0.1 Demonstrates Extreme Performance on Oracle Exadata/Exalogic Introduction New utilities technologies are bringing with them
More informationPage Replacement Algorithms
Page Replacement Algorithms MIN, OPT (optimal) RANDOM evict random page FIFO (first-in, first-out) give every page equal residency LRU (least-recently used) MRU (most-recently used) 1 9.1 Silberschatz,
More informationCS Project Report
CS7960 - Project Report Kshitij Sudan kshitij@cs.utah.edu 1 Introduction With the growth in services provided over the Internet, the amount of data processing required has grown tremendously. To satisfy
More informationCharacterizing Data-Intensive Workloads on Modern Disk Arrays
Characterizing Data-Intensive Workloads on Modern Disk Arrays Guillermo Alvarez, Kimberly Keeton, Erik Riedel, and Mustafa Uysal Labs Storage Systems Program {galvarez,kkeeton,riedel,uysal}@hpl.hp.com
More informationEnergy Management Issue in Ad Hoc Networks
Wireless Ad Hoc and Sensor Networks (Energy Management) Outline Energy Management Issue in ad hoc networks WS 2009/2010 Main Reasons for Energy Management in ad hoc networks Classification of Energy Management
More informationPerformance Modeling and Analysis of Flash based Storage Devices
Performance Modeling and Analysis of Flash based Storage Devices H. Howie Huang, Shan Li George Washington University Alex Szalay, Andreas Terzis Johns Hopkins University MSST 11 May 26, 2011 NAND Flash
More informationSHANDONG UNIVERSITY 1
Chapter 8 Main Memory SHANDONG UNIVERSITY 1 Contents Background Swapping Contiguous Memory Allocation Paging Structure of the Page Table Segmentation Example: The Intel Pentium SHANDONG UNIVERSITY 2 Objectives
More informationOptimizing Flash-based Key-value Cache Systems
Optimizing Flash-based Key-value Cache Systems Zhaoyan Shen, Feng Chen, Yichen Jia, Zili Shao Department of Computing, Hong Kong Polytechnic University Computer Science & Engineering, Louisiana State University
More information10/1/ Introduction 2. Existing Methods 3. Future Research Issues 4. Existing works 5. My Research plan. What is Data Center
Weilin Peng Sept. 28 th 2009 What is Data Center Concentrated clusters of compute and data storage resources that are connected via high speed networks and routers. H V A C Server Network Server H V A
More information