CS 152 Computer Architecture and Engineering. Lecture 7 - Memory Hierarchy-II

CS 152 Computer Architecture and Engineering
Lecture 7 - Memory Hierarchy-II

Krste Asanovic
Electrical Engineering and Computer Sciences
University of California at Berkeley
http://www.eecs.berkeley.edu/~krste
http://inst.eecs.berkeley.edu/~cs152

February 9, 2010, CS152, Spring 2010

Last time in Lecture 6

- Dynamic RAM (DRAM) is the main form of main memory storage in use today
  - Holds values on small capacitors, which need refreshing (hence "dynamic")
  - Slow multi-step access: precharge, read row, read column
- Static RAM (SRAM) is faster but more expensive
  - Used to build on-chip memory for caches
- A cache holds a small set of values in fast memory (SRAM) close to the processor
  - Needs a search scheme to find values in the cache, and a replacement policy to make space for newly accessed locations
- Caches exploit two forms of predictability in memory reference streams
  - Temporal locality: the same location is likely to be accessed again soon
  - Spatial locality: neighboring locations are likely to be accessed soon

Placement Policy

[Figure: a memory of 32 blocks (numbered 0-31) and a cache of 8 blocks, shown organized three ways: fully associative, 2-way set-associative (4 sets), and direct-mapped]

Memory block 12 can be placed:
- Fully associative: anywhere in the cache
- (2-way) set-associative: anywhere in set 0 (12 mod 4)
- Direct-mapped: only into cache block 4 (12 mod 8)
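As a small illustration (not from the slides), the three mapping rules can be computed directly. The cache geometry below matches the figure: 8 cache blocks, organized as 4 sets of 2 for the set-associative case.

```c
#include <stdio.h>

int main(void) {
    int block = 12;          /* memory block number from the slide    */
    int cache_blocks = 8;    /* cache capacity in blocks              */
    int sets = 4;            /* 2-way set-associative: 8 / 2 = 4 sets */

    /* Direct-mapped: exactly one legal cache block. */
    printf("direct-mapped   -> cache block %d\n", block % cache_blocks);
    /* Set-associative: any way within one set. */
    printf("2-way set-assoc -> set %d (either way)\n", block % sets);
    /* Fully associative: any cache block at all. */
    printf("fully assoc     -> any of %d blocks\n", cache_blocks);
    return 0;
}
```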

Direct-Mapped Cache

[Figure: the address is split into Tag (t bits), Index (k bits), and Block Offset (b bits); the index selects one of 2^k cache lines, each holding a valid bit, a tag, and a data block; the stored tag is compared against the address tag to produce HIT, and the offset selects the data word or byte]
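To make the figure concrete, here is a minimal sketch of a direct-mapped lookup in C; the field widths (b = 2, k = 10) and the line layout are illustrative assumptions, not values from the lecture.

```c
#include <stdbool.h>
#include <stdint.h>

#define B 2              /* block offset bits: 4-byte blocks (assumed) */
#define K 10             /* index bits: 2^10 = 1024 lines (assumed)    */

typedef struct {
    bool     valid;
    uint32_t tag;
    uint8_t  data[1 << B];
} Line;

static Line cache[1 << K];

/* Direct-mapped lookup: the index picks exactly one line, then the
 * stored tag must match the address tag for a hit. */
bool lookup(uint32_t addr, uint8_t *byte_out) {
    uint32_t offset = addr & ((1u << B) - 1);
    uint32_t index  = (addr >> B) & ((1u << K) - 1);
    uint32_t tag    = addr >> (B + K);

    Line *line = &cache[index];
    if (line->valid && line->tag == tag) {   /* HIT  */
        *byte_out = line->data[offset];
        return true;
    }
    return false;                            /* MISS */
}
```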

2-Way Set-Associative Cache

[Figure: the address is again split into Tag (t bits), Index (k bits), and Block Offset (b bits); the index selects one set containing two ways, each with a valid bit, tag, and data block; both stored tags are compared in parallel, HIT is the OR of the two comparisons, and a multiplexer driven by the comparators selects the data word or byte from the matching way]

Fully Associative Cache

[Figure: the address has no index field, only a Tag (t bits) and a Block Offset (b bits); every line's stored tag is compared against the address tag in parallel, and the matching line drives HIT and the data word or byte]

Replacement Policy

In an associative cache, which block from a set should be evicted when the set becomes full?

- Random
- Least Recently Used (LRU)
  - LRU cache state must be updated on every access
  - A true implementation is only feasible for small sets (2-way)
  - A pseudo-LRU binary tree is often used for 4- to 8-way caches
- First In, First Out (FIFO), a.k.a. round-robin
  - Used in highly associative caches
- Not Most Recently Used (NMRU)
  - FIFO with an exception for the most recently used block or blocks

Replacement is a second-order effect. Why? Replacement only happens on misses.
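As an illustration of the tree pseudo-LRU the slide mentions, here is a sketch for a single 4-way set; the three-bit encoding is one common convention, assumed rather than taken from the lecture.

```c
#include <stdint.h>

/* Tree pseudo-LRU for one 4-way set. Three bits form a binary tree:
 * bit 0 picks the half holding the victim, bit 1 picks within ways
 * {0,1}, and bit 2 picks within ways {2,3}. */
typedef struct { uint8_t bits; } PLRU4;   /* bits 0..2 used */

/* On an access to `way`, flip the tree bits on its path to point
 * AWAY from that way, marking it most recently used. */
void plru_touch(PLRU4 *s, int way) {
    if (way < 2) {
        s->bits |= 1;                                 /* root -> right half next */
        s->bits = (way == 0) ? (s->bits | 2) : (s->bits & ~2);
    } else {
        s->bits &= ~1;                                /* root -> left half next  */
        s->bits = (way == 2) ? (s->bits | 4) : (s->bits & ~4);
    }
}

/* Victim selection: simply follow the tree bits. */
int plru_victim(const PLRU4 *s) {
    if (s->bits & 1) return (s->bits & 4) ? 3 : 2;    /* right half */
    else             return (s->bits & 2) ? 1 : 0;    /* left half  */
}
```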

Block Size and Spatial Locality

A block is the unit of transfer between the cache and memory.

[Figure: a 4-word block (Tag, Word0-Word3) with b = 2; the CPU address is split into a block address (32-b bits) and an offset (b bits); 2^b = block size, a.k.a. line size]

A larger block size has distinct hardware advantages:
- Less tag overhead
- Exploits fast burst transfers from DRAM
- Exploits fast burst transfers over wide buses

What are the disadvantages of increasing block size? Fewer blocks means more conflicts, and it can waste bandwidth.
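The tag-overhead advantage can be quantified with a quick calculation. The cache parameters below (32 KB direct-mapped, 32-bit addresses) are assumed for illustration; they are not from the lecture.

```c
#include <stdio.h>

/* Tag storage relative to data storage as block size grows: fewer
 * lines means fewer tags for the same data capacity. */
int main(void) {
    const int addr_bits  = 32;
    const int cache_size = 32 * 1024;
    for (int block_size = 16; block_size <= 128; block_size *= 2) {
        int lines  = cache_size / block_size;
        int offset = __builtin_ctz(block_size);   /* log2, GCC/Clang */
        int index  = __builtin_ctz(lines);
        int tag    = addr_bits - index - offset;
        printf("%4dB blocks: %2d-bit tags, tag overhead %.2f%%\n",
               block_size, tag, 100.0 * tag / (8.0 * block_size));
    }
    return 0;
}
```

With these numbers the per-line tag stays 17 bits, but the overhead per byte of data drops from about 13% at 16B blocks to under 2% at 128B blocks.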

CPU-Cache Interaction (5-stage pipeline)

[Figure: a 5-stage pipeline with the primary instruction cache accessed in the fetch stage (PC, +0x4 adder, a hit? signal gating PCen and injecting a nop on a miss) and the primary data cache accessed in the memory stage (addr, wdata, rdata, hit?); on a data cache miss the entire CPU stalls while the memory controller refills the cache with data from lower levels of the memory hierarchy]

Improving Cache Performance

Average memory access time = Hit time + Miss rate x Miss penalty

To improve performance:
- Reduce the hit time
- Reduce the miss rate
- Reduce the miss penalty

What is the simplest design strategy? Build the biggest cache that doesn't increase the hit time past 1-2 cycles (approximately 8-32KB in modern technology).

[Design issues are more complex with out-of-order superscalar processors.]
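Plugging assumed numbers into the formula shows how the three terms trade off; the cycle counts below are illustrative, not from the lecture.

```c
#include <stdio.h>

/* Average memory access time (AMAT) from the slide's formula,
 * with assumed example parameters. */
int main(void) {
    double hit_time     = 1.0;    /* cycles                           */
    double miss_rate    = 0.05;   /* 5% of accesses miss              */
    double miss_penalty = 20.0;   /* cycles to fetch from next level  */

    double amat = hit_time + miss_rate * miss_penalty;
    printf("AMAT = %.1f + %.2f * %.1f = %.1f cycles\n",
           hit_time, miss_rate, miss_penalty, amat);   /* 2.0 cycles */
    return 0;
}
```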

Serial-versus-Parallel Cache and Memory Access

α is the HIT RATIO: the fraction of references found in the cache.
1 - α is the MISS RATIO: the remaining references.

[Figure: a serial organization, where the processor accesses the cache first and goes to main memory only on a miss, and a parallel organization, where the cache and main memory are accessed simultaneously]

Average access time for serial search: t_cache + (1 - α) t_mem
Average access time for parallel search: α t_cache + (1 - α) t_mem

- Savings are usually small: t_mem >> t_cache, and the hit ratio α is high
- High bandwidth is required for the memory path
- The complexity of handling parallel paths can slow t_cache
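A quick numeric comparison (timings assumed for illustration) shows why the savings are small: the two formulas differ only by (1 - α) t_cache.

```c
#include <stdio.h>

int main(void) {
    double alpha   = 0.95;   /* hit ratio (assumed)  */
    double t_cache = 1.0;    /* cycles (assumed)     */
    double t_mem   = 50.0;   /* cycles (assumed)     */

    double serial   = t_cache + (1 - alpha) * t_mem;
    double parallel = alpha * t_cache + (1 - alpha) * t_mem;
    printf("serial   = %.2f cycles\n", serial);    /* 3.50 */
    printf("parallel = %.2f cycles\n", parallel);  /* 3.45 */
    return 0;
}
```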

Causes for Cache Misses

- Compulsory: the first reference to a block
  - a.k.a. cold-start misses: misses that would occur even with an infinite cache
- Capacity: the cache is too small to hold all the data needed by the program
  - Misses that would occur even under a perfect replacement policy
- Conflict: misses that occur because of collisions due to the block-placement strategy
  - Misses that would not occur with full associativity

Effect of Cache Parameters on Performance

Larger cache size:
+ reduces capacity and conflict misses
- hit time will increase

Higher associativity:
+ reduces conflict misses
- may increase hit time

Larger block size:
+ reduces compulsory and capacity (reload) misses
- increases conflict misses and miss penalty

Write Policy Choices

Cache hit:
- Write-through: write both cache and memory
  - Generally higher traffic, but a simpler pipeline and cache design
- Write-back: write the cache only; memory is written only when the entry is evicted
  - A dirty bit per block further reduces write-back traffic
  - Must handle 0, 1, or 2 accesses to memory for each load/store

Cache miss:
- No-write-allocate: only write to main memory
- Write-allocate (a.k.a. fetch-on-write): fetch the block into the cache

Common combinations:
- Write-through with no-write-allocate
- Write-back with write-allocate
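The two common combinations can be sketched in a few lines of C; the one-line "cache", the 16-bit memory, and the helper names are assumptions made for illustration.

```c
#include <stdbool.h>

typedef struct { bool valid, dirty; unsigned tag, data; } Line;

static Line line;                      /* one-line stand-in for a cache */
static unsigned memory[1 << 16];       /* stand-in for main memory      */
#define MASK 0xFFFFu                   /* keep assumed addrs in range   */

static bool hit(unsigned addr) { return line.valid && line.tag == addr; }

static void fill(unsigned addr) {      /* write-allocate path           */
    if (line.valid && line.dirty)      /* write back dirty victim first */
        memory[line.tag & MASK] = line.data;
    line = (Line){ .valid = true, .tag = addr,
                   .data = memory[addr & MASK] };
}

void store_write_through(unsigned addr, unsigned data) {
    if (hit(addr)) line.data = data;   /* update cache copy on a hit    */
    memory[addr & MASK] = data;        /* memory is always written      */
}                                      /* on a miss: no-write-allocate  */

void store_write_back(unsigned addr, unsigned data) {
    if (!hit(addr)) fill(addr);        /* write-allocate on a miss      */
    line.data  = data;
    line.dirty = true;                 /* memory written only on evict  */
}
```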

CS152 Administrivia

- PS1/Lab1 due at the start of class on Thursday
- Krste's office hours are now 2-3pm, but Monday is a UCB holiday, so no office hours
- Quiz 1 on Tuesday

Write Performance

[Figure: the direct-mapped cache datapath again, with Tag (t bits), Index (k bits), and Block Offset (b bits); the data array's write enable (WE) depends on the result of the tag comparison, so a write must wait for the tag check before committing its data]

Reducing Write Hit Time

Problem: writes take two cycles in the memory stage: one cycle for the tag check plus one cycle for the data write if it hits.

Solutions:
- Design a data RAM that can perform a read and a write in one cycle, restoring the old value after a tag miss
- Fully associative (CAM tag) caches: the word line is only enabled on a hit
- Pipelined writes: hold the write data for a store in a single buffer ahead of the cache, and write the cache data during the next store's tag check

Pipelining Cache Writes

[Figure: address and store data arrive from the CPU; a delayed-write address and delayed-write data buffer sit between the tag and data arrays, with comparators and a load/store mux selecting the load data returned to the CPU]

Data from a store hit is written into the data portion of the cache during the tag access of the subsequent store.
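Here is a minimal sketch of that delayed-write scheme, with the tag-check and data-write operations passed in as stand-ins for the cache arrays; the structure is an assumption based on the figure, not code from the lecture.

```c
#include <stdbool.h>

/* One-entry delayed-write buffer: a store's data is parked here and
 * committed to the data array while the NEXT store does its tag check. */
typedef struct { bool valid; unsigned addr, data; } DelayedWrite;

static DelayedWrite dw;

void pipelined_store(unsigned addr, unsigned data,
                     bool (*tag_check)(unsigned),
                     void (*data_write)(unsigned, unsigned)) {
    /* While this store checks tags, retire the previous store's
     * buffered data into the data array. */
    if (dw.valid) data_write(dw.addr, dw.data);

    /* Park the current store; it commits during the next store. */
    dw = (DelayedWrite){ .valid = tag_check(addr),
                         .addr = addr, .data = data };
}
```

A load in this scheme must also compare its address against the delayed-write buffer (the extra comparator feeding the load mux in the figure) and forward the buffered data on a match.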

Write Buffer to Reduce Read Miss Penalty

[Figure: a write buffer sits between the L1 data cache and the unified L2 cache; it holds evicted dirty lines for a write-back cache, or all writes for a write-through cache]

The processor is not stalled on writes, and read misses can go ahead of writes to main memory.

Problem: the write buffer may hold the updated value of a location needed by a read miss.
- Simple scheme: on a read miss, wait for the write buffer to go empty
- Faster scheme: check the write buffer addresses against the read miss address; if there is no match, allow the read miss to go ahead of the writes, else return the value in the write buffer
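The faster scheme amounts to an associative search of the write buffer on each read miss. A sketch, with the 4-entry buffer depth assumed:

```c
#include <stdbool.h>

#define WB_ENTRIES 4                     /* write-buffer depth (assumed) */

typedef struct { bool valid; unsigned addr, data; } WBEntry;

static WBEntry wb[WB_ENTRIES];

/* Returns true and forwards the buffered value if the read-miss
 * address matches a pending write; otherwise the miss may bypass
 * the buffered writes and go straight to the next level. */
bool write_buffer_forward(unsigned miss_addr, unsigned *data_out) {
    for (int i = 0; i < WB_ENTRIES; i++) {
        if (wb[i].valid && wb[i].addr == miss_addr) {
            *data_out = wb[i].data;      /* return value in write buffer */
            return true;
        }
    }
    return false;                        /* no match: read goes ahead    */
}
```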

Block-level Optimizations

- Tags are too large, i.e., too much overhead
  - Simple solution: larger blocks, but the miss penalty could be large
- Sub-block placement (a.k.a. sector cache)
  - A valid bit is added to units smaller than the full block, called sub-blocks
  - Only read a sub-block on a miss
  - If a tag matches, is the word in the cache?

[Figure: three tags (100, 300, 204), each with four sub-block valid bits: 1 1 1 1, 1 1 0 0, and 0 1 0 1]
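A sketch of the sub-block (sector) organization in the figure; the four-sub-block layout matches the figure, while the types and field names are mine.

```c
#include <stdbool.h>
#include <stdint.h>

#define SUBBLOCKS 4   /* sub-blocks per block, as in the figure */

/* One tag covers the whole block, but each sub-block has its own
 * valid bit, so a tag match alone does not mean the requested word
 * is present. */
typedef struct {
    uint32_t tag;
    bool     sub_valid[SUBBLOCKS];
    uint32_t sub_data[SUBBLOCKS];
} SectorLine;

/* Hit only if BOTH the tag matches and the sub-block is valid; on a
 * tag match with an invalid sub-block, only that sub-block needs to
 * be fetched (the partial-miss case the slide's question points at). */
bool sector_hit(const SectorLine *line, uint32_t tag, int sub) {
    return line->tag == tag && line->sub_valid[sub];
}
```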

Multilevel Caches

Problem: a memory cannot be both large and fast.
Solution: increasing sizes of cache at each level.

CPU <-> L1$ <-> L2$ <-> DRAM

- Local miss rate = misses in this cache / accesses to this cache
- Global miss rate = misses in this cache / CPU memory accesses
- Misses per instruction = misses in this cache / number of instructions
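A worked example (rates assumed for illustration) of how the two definitions relate in a two-level hierarchy:

```c
#include <stdio.h>

int main(void) {
    double l1_miss_local = 0.05;  /* 5% of CPU accesses miss in L1    */
    double l2_miss_local = 0.40;  /* 40% of L2 *accesses* miss in L2  */

    /* L2 is only accessed on L1 misses, so its global miss rate is
     * the product of the two local rates. For L1 itself, the local
     * and global rates coincide. */
    double l2_miss_global = l1_miss_local * l2_miss_local;
    printf("L2 local miss rate  = %.0f%%\n", 100 * l2_miss_local);
    printf("L2 global miss rate = %.0f%%\n", 100 * l2_miss_global); /* 2% */
    return 0;
}
```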

Presence of L2 Influences L1 Design

Use a smaller L1 if there is also an L2:
- Trade an increased L1 miss rate for reduced L1 hit time and reduced L1 miss penalty
- Reduces average access energy

Use a simpler write-through L1 with an on-chip L2:
- The write-back L2 cache absorbs write traffic without going off-chip
- At most one L1 miss request per L1 access (no dirty victim write-back), which simplifies pipeline control
- Simplifies coherence issues
- Simplifies error recovery in L1 (can use just parity bits in L1 and reload from L2 when a parity error is detected on an L1 read)

Inclusion Policy

- Inclusive multilevel cache: the inner cache holds only copies of data that is also present in the outer cache
  - An external coherence snoop need only check the outer cache
- Exclusive multilevel caches: the inner cache may hold data not present in the outer cache
  - Lines are swapped between the inner and outer caches on a miss
  - Used in the AMD Athlon, with a 64KB primary and a 256KB secondary cache

Why choose one type or the other?

Itanium-2 On-Chip Caches (Intel/HP, 2002)

- Level 1: 16KB, 4-way set-associative, 64B lines, quad-port (2 loads + 2 stores), single-cycle latency
- Level 2: 256KB, 4-way set-associative, 128B lines, quad-port (4 loads or 4 stores), five-cycle latency
- Level 3: 3MB, 12-way set-associative, 128B lines, single 32B port, twelve-cycle latency

POWER7 On-Chip Caches [IBM 2009]

- 32KB L1 I$ per core and 32KB L1 D$ per core, 3-cycle latency
- 256KB unified L2$ per core, 8-cycle latency
- 32MB unified shared L3$ in embedded DRAM, 25-cycle latency to the local slice

Acknowledgements

These slides contain material developed and copyright by:
- Arvind (MIT)
- Krste Asanovic (MIT/UCB)
- Joel Emer (Intel/MIT)
- James Hoe (CMU)
- John Kubiatowicz (UCB)
- David Patterson (UCB)

MIT material derived from course 6.823.
UCB material derived from course CS252.