Chapter Seven: Cache Memory and Virtual Memory


Memories: Review
- SRAM: value is stored on a pair of inverting gates; very fast, but takes up more space than DRAM (4 to 6 transistors per bit).
- DRAM: value is stored as a charge on a capacitor (must be refreshed); very small, but slower than SRAM (by a factor of 5 to 10).

Exploiting Memory Hierarchy
Users want large and fast memories!
- 1997: SRAM access times were 2-25 ns at a cost of $100 to $250 per MByte; DRAM access times were 60-120 ns at $5 to $10 per MByte; disk access times were 10 to 20 million ns at $0.10 to $0.20 per MByte.
- 2005: SRAM access times are 1.25 ns at a cost of $1000 per GByte; DRAM access times are 2.5 ns at $100 per GByte; disk access times are 200,000 ns at $1 per GByte.
Try and give it to them anyway: build a memory hierarchy.

[Figure: levels in the memory hierarchy, from the CPU outward through cache, RAM (~1 GB), disk cache (~0.5 GB), and disk (~100 GB); both memory size and access time increase with distance from the CPU.]

Locality
A principle that makes having a memory hierarchy a good idea. If an item is referenced:
- temporal locality: it will tend to be referenced again soon
- spatial locality: nearby items will tend to be referenced soon
Why does code have locality?
Our initial focus: a two-level model (upper, lower)
- block: the minimum unit of data transferred between levels
- hit: the data requested is in the upper level
- miss: the data requested is not in the upper level

Cache
Two issues:
- How do we know if a data item is in the cache?
- If it is, how do we find it?
Our first example: block size is one word of data, "direct mapped". For each item of data at the lower level, there is exactly one location in the cache where it might be; i.e., lots of items at the lower level share locations in the upper level.

Direct Mapped Cache
Mapping: the cache index is the block address modulo the number of blocks in the cache (see the sketch below).
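As a concrete illustration of the modulo mapping, here is a minimal sketch in C (the cache size and block addresses are assumptions for illustration, not values from the slides):

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_BLOCKS 8u   /* assumed cache size; a power of two in practice */

/* Direct-mapped placement: cache index = block address mod number of blocks. */
static uint32_t cache_index(uint32_t block_addr) {
    return block_addr % NUM_BLOCKS;   /* equals block_addr & (NUM_BLOCKS - 1) */
}

int main(void) {
    /* Block addresses 1, 9, and 17 all map to index 1: they compete for one cache location. */
    for (uint32_t b = 1; b <= 17; b += 8)
        printf("block %2u -> cache index %u\n", (unsigned)b, (unsigned)cache_index(b));
    return 0;
}
```

With a power-of-two number of blocks, the modulo reduces to keeping the low-order bits of the block address, which is why the cache index is simply a bit field of the address.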

Direct Mapped Cache
For MIPS:
[Figure: a 32-bit address, showing bit positions, split into a tag, an index, and a byte offset; the index selects a cache entry (valid bit, tag, data word), the stored tag is compared with the address tag, and Hit is asserted when the entry is valid and the tags match.]
What kind of locality are we taking advantage of?

Direct Mapped Cache
Taking advantage of spatial locality:
[Figure: a 32-bit address, showing bit positions, split into a tag, an index, and a block offset; each entry (V, Tag, Data) now holds a multiword block, and the block offset selects the requested word within the block on a hit.]
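A bit-field version of that address breakdown. The geometry below (1024 one-word blocks, giving a 2-bit byte offset, 10-bit index, and 20-bit tag) is an assumption chosen for illustration:

```c
#include <stdint.h>
#include <stdio.h>

/* Assumed geometry: 32-bit byte addresses, 1024 one-word blocks ->
 * 2-bit byte offset, 10-bit index, 20-bit tag. */
#define OFFSET_BITS 2u
#define INDEX_BITS  10u

static uint32_t byte_offset(uint32_t a) { return a & ((1u << OFFSET_BITS) - 1); }
static uint32_t cache_index(uint32_t a) { return (a >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1); }
static uint32_t cache_tag(uint32_t a)   { return a >> (OFFSET_BITS + INDEX_BITS); }

int main(void) {
    uint32_t addr = 0x1234ABCDu;   /* arbitrary example address */
    printf("tag=0x%05X index=%u offset=%u\n",
           (unsigned)cache_tag(addr), (unsigned)cache_index(addr), (unsigned)byte_offset(addr));
    return 0;
}
```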

Hits vs. Misses
- Read hits: this is what we want!
- Read misses: stall the CPU, fetch the block from memory, deliver it to the cache, restart the access.
- Write hits: either replace the data in both cache and memory (write-through), or write the data only into the cache and write it back to memory later (write-back). A code sketch of these policies follows the miss-rate table below.
- Write misses: read the entire block into the cache, then write the word.

Hardware Issues
Make reading multiple words easier by using banks of memory:
[Figure: (a) one-word-wide memory organization, (b) wide memory organization, (c) interleaved memory organization.]
It can get a lot more complicated...

Performance
Increasing the block size tends to decrease the miss rate:
[Figure: miss rate (%) versus block size (bytes) for cache sizes of 1, 8, 16, 64, and 128 KByte.]
Use split caches because there is more spatial locality in code:

Program | Block size in words | Instruction miss rate | Data miss rate | Effective combined miss rate
gcc     | 1                   | 6.1%                  | 2.1%           | 5.4%
gcc     | 4                   | 2.0%                  | 1.7%           | 1.9%
spice   | 1                   | 1.2%                  | 1.3%           | 1.2%
spice   | 4                   | 0.3%                  | 0.6%           | 0.4%
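As promised above, a minimal sketch of a direct-mapped, one-word-block cache with write-through (the geometry and the toy memory model are assumptions, not part of the slides):

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_BLOCKS 1024u                 /* assumed: 1024 one-word blocks */

struct line { bool valid; uint32_t tag; uint32_t data; };
static struct line cache[NUM_BLOCKS];

/* Stand-in for slow DRAM (4 MB of words). */
static uint32_t dram[1u << 20];
static uint32_t memory_read(uint32_t addr)              { return dram[addr >> 2]; }
static void     memory_write(uint32_t addr, uint32_t v) { dram[addr >> 2] = v; }

/* Read one word; on a miss, stall, fetch from memory, fill the cache, restart. */
uint32_t cache_read(uint32_t addr) {
    uint32_t block = addr >> 2;                  /* word (block) address */
    uint32_t idx   = block % NUM_BLOCKS;
    uint32_t tag   = block / NUM_BLOCKS;
    if (cache[idx].valid && cache[idx].tag == tag)
        return cache[idx].data;                  /* read hit */
    cache[idx] = (struct line){ true, tag, memory_read(addr) };  /* read miss */
    return cache[idx].data;
}

/* Write-through: update the cache on a hit and always update memory. */
void cache_write(uint32_t addr, uint32_t value) {
    uint32_t block = addr >> 2, idx = block % NUM_BLOCKS, tag = block / NUM_BLOCKS;
    if (cache[idx].valid && cache[idx].tag == tag)
        cache[idx].data = value;                 /* write hit */
    memory_write(addr, value);                   /* write-through to memory */
}
```

A write-back cache would instead set a dirty bit on a write hit and defer the memory update until the line is evicted.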

Performance
Simplified model (see the worked sketch below):
    execution time = (execution cycles + stall cycles) × cycle time
    stall cycles = # of instructions × miss ratio × miss penalty
Two ways of improving performance:
- decreasing the miss ratio
- decreasing the miss penalty
What happens if we increase block size?

Decreasing miss ratio with associativity
- 1-way set associative (direct mapped)
- 2-way set associative
- 4-way set associative
- 8-way set associative (fully associative)
Compared to direct mapped, give a series of references that results in a lower miss ratio, assuming we use the least-recently-used (LRU) replacement strategy.

An implementation
[Figure: an implementation of a set-associative cache; the tags in the selected set are compared in parallel, and a multiplexor selects the data from the matching way.]
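The simplified model with numbers plugged in (every parameter value below is an assumption for illustration, not from the slides):

```c
#include <stdio.h>

int main(void) {
    /* Simplified model from the slide; all values here are assumptions. */
    double instructions = 1e9;     /* program size */
    double cpi_base     = 1.0;     /* CPI with no memory stalls */
    double miss_ratio   = 0.05;    /* misses per instruction */
    double miss_penalty = 100.0;   /* cycles per miss */
    double cycle_time   = 2e-9;    /* seconds (500 MHz clock) */

    double exec_cycles  = instructions * cpi_base;
    double stall_cycles = instructions * miss_ratio * miss_penalty;
    double exec_time    = (exec_cycles + stall_cycles) * cycle_time;

    /* Stalls dominate: 5e9 stall cycles versus 1e9 execution cycles. */
    printf("stall cycles = %.2e, execution time = %.2f s\n", stall_cycles, exec_time);
    return 0;
}
```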

Performance
[Figure: miss rate (%) versus associativity (1-way, 2-way, 4-way, 8-way), with one curve per cache size from 1 to 128 KByte; miss rates range up to about 15%.]

Decreasing miss penalty with multilevel caches
Add a second-level cache:
- often the primary cache is on the same chip as the processor
- use SRAMs to add another cache above primary memory (DRAM)
- the miss penalty goes down if the data is in the 2nd-level cache
Example: CPI of 1.0 on a 500 MHz machine with a 5% miss rate and 200 ns DRAM access. Adding a 2nd-level cache with a 20 ns access time decreases the miss rate to main memory to 2% (worked through in the sketch below).
Using multilevel caches:
- try and optimize the hit time on the 1st-level cache
- try and optimize the miss rate on the 2nd-level cache

Virtual Memory
Main memory can act as a cache for the secondary storage (disk).
Advantages:
- illusion of having more physical memory
- program relocation
- protection
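Working the example through, interpreting the 2% as the fraction of instructions that still miss all the way to DRAM once the second-level cache is present (the usual reading of this example):

```c
#include <stdio.h>

int main(void) {
    double cycle_ns    = 2.0;    /* 500 MHz clock */
    double dram_ns     = 200.0;  /* DRAM access  -> 100-cycle miss penalty */
    double l2_ns       = 20.0;   /* L2 access    -> 10-cycle miss penalty  */
    double l1_miss     = 0.05;   /* primary-cache miss rate */
    double global_miss = 0.02;   /* misses that still reach DRAM with L2 */

    double cpi_no_l2   = 1.0 + l1_miss * (dram_ns / cycle_ns);     /* 1 + 0.05*100 = 6.0 */
    double cpi_with_l2 = 1.0 + l1_miss * (l2_ns / cycle_ns)        /* 1 + 0.05*10        */
                             + global_miss * (dram_ns / cycle_ns); /*   + 0.02*100 = 3.5 */

    printf("CPI without L2: %.1f\nCPI with L2:    %.1f\nspeedup: %.2fx\n",
           cpi_no_l2, cpi_with_l2, cpi_no_l2 / cpi_with_l2);
    return 0;
}
```

The second-level cache cuts the effective CPI from 6.0 to 3.5, a speedup of about 1.7x.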

Pages: virtual memory blocks
Page faults: the data is not in memory, so retrieve it from disk.
- huge miss penalty, thus pages should be fairly large (e.g., 4 KB)
- reducing page faults is important (LRU is worth the price)
- the faults can be handled in software instead of hardware
- using write-through is too expensive, so we use write-back

Page Tables
[Figures: the page table register points to the page table; the virtual page number indexes the table; a valid entry supplies the physical page number, while an invalid entry means the page resides on disk.]
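A minimal one-level translation sketch (the table size, page size, and the identity-mapping fault handler are assumptions so the sketch is self-contained):

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_BITS 12u          /* 4 KB pages, as in the slide's example */
#define NUM_PAGES 1024u        /* assumed small virtual address space (4 MB) */

struct pte { bool valid; uint32_t phys_page; };
static struct pte page_table[NUM_PAGES];

/* Stub fault handler: a real OS would fetch the page from disk (huge penalty);
 * here we simply install an identity mapping. */
static void page_fault(uint32_t vpage) {
    page_table[vpage] = (struct pte){ true, vpage };
}

/* Translate a virtual address to a physical address. */
uint32_t translate(uint32_t vaddr) {
    uint32_t vpage  = vaddr >> PAGE_BITS;              /* virtual page number */
    uint32_t offset = vaddr & ((1u << PAGE_BITS) - 1); /* page offset is unchanged */
    if (!page_table[vpage].valid)
        page_fault(vpage);                             /* page fault: go to disk */
    return (page_table[vpage].phys_page << PAGE_BITS) | offset;
}
```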

Making Address Translation Fast
A cache for address translations: the translation-lookaside buffer (TLB). A lookup sketch follows the tables below.

TLBs and caches
[Figure: a memory access first consults the TLB; a TLB miss triggers a page-table access, and the resulting physical address is then used for the cache access.]

Modern Systems
Very complicated memory systems:

Characteristic   | Intel Pentium Pro                         | PowerPC 604
Virtual address  | 32 bits                                   | 52 bits
Physical address | 32 bits                                   | 32 bits
Page size        | 4 KB, 4 MB                                | 4 KB, selectable, and 256 MB
TLB organization | A TLB for instructions and a TLB for data | A TLB for instructions and a TLB for data
                 | Both four-way set associative             | Both two-way set associative
                 | Pseudo-LRU replacement                    | LRU replacement
                 | Instruction TLB: 32 entries               | Instruction TLB: 128 entries
                 | Data TLB: 64 entries                      | Data TLB: 128 entries
                 | TLB misses handled in hardware            | TLB misses handled in hardware

Characteristic      | Intel Pentium Pro                 | PowerPC 604
Cache organization  | Split instruction and data caches | Split instruction and data caches
Cache size          | 8 KB each for instructions/data   | 16 KB each for instructions/data
Cache associativity | Four-way set associative          | Four-way set associative
Replacement         | Approximated LRU replacement      | LRU replacement
Block size          | 32 bytes                          | 32 bytes
Write policy        | Write-back                        | Write-back or write-through
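As promised, a minimal TLB lookup sketch (fully associative, with a deliberately naive replacement policy; the sizes and the page-table stub are assumptions):

```c
#include <stdbool.h>
#include <stdint.h>

#define TLB_ENTRIES 64
#define PAGE_BITS   12u

struct tlb_entry { bool valid; uint32_t vpage; uint32_t phys_page; };
static struct tlb_entry tlb[TLB_ENTRIES];

/* Stub: a real page-table walk is slow; identity-map for the sketch. */
static uint32_t page_table_lookup(uint32_t vpage) { return vpage; }

/* Translate via the TLB; on a miss, walk the page table and refill one entry. */
uint32_t tlb_translate(uint32_t vaddr) {
    uint32_t vpage  = vaddr >> PAGE_BITS;
    uint32_t offset = vaddr & ((1u << PAGE_BITS) - 1);
    for (int i = 0; i < TLB_ENTRIES; i++)             /* hardware compares in parallel */
        if (tlb[i].valid && tlb[i].vpage == vpage)
            return (tlb[i].phys_page << PAGE_BITS) | offset;   /* TLB hit */
    uint32_t ppage = page_table_lookup(vpage);        /* TLB miss: consult page table */
    tlb[vpage % TLB_ENTRIES] =                        /* naive replacement choice */
        (struct tlb_entry){ true, vpage, ppage };
    return (ppage << PAGE_BITS) | offset;
}
```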

Some Issues
Processor speeds continue to increase very fast, much faster than either DRAM or disk access times.
Design challenge: dealing with this growing disparity.
Trends:
- synchronous SRAMs (provide a burst of data)
- redesign of DRAM chips to provide higher bandwidth or processing
- restructuring code to increase locality
- use of prefetching (make the cache visible to the ISA)