Lecture 7 - Memory Hierarchy-II


CS 152 Computer Architecture and Engineering Lecture 7 - Memory Hierarchy-II John Wawrzynek Electrical Engineering and Computer Sciences University of California at Berkeley http://www.eecs.berkeley.edu/~johnw http://inst.eecs.berkeley.edu/~cs152

Last time in Lecture 6: Dynamic RAM (DRAM) is the main form of main-memory storage in use today. It holds values on small capacitors, which need refreshing (hence "dynamic"), and it has a slow multi-step access: precharge, read row, read column. Static RAM (SRAM) is faster but more expensive, and is used to build on-chip memory for caches. A cache holds a small set of values in fast memory (SRAM) close to the processor; it needs a search scheme to find values in the cache and a replacement policy to make space for newly accessed locations. Caches exploit two forms of predictability in memory reference streams: temporal locality (the same location is likely to be accessed again soon) and spatial locality (neighboring locations are likely to be accessed soon).

Line Size and Spatial Locality: A line is the unit of transfer between the cache and memory. Figure: a 4-word line (b = 2); the CPU address is split into a line address (32 - b bits) and an offset (b bits), where 2^b is the line size in bytes. A larger line size has distinct hardware advantages: less tag overhead, and it exploits fast burst transfers from DRAM and over wide busses. What are the disadvantages of increasing line size? Fewer lines means more conflicts, and it can waste bandwidth.

Direct-Mapped Cache. Figure: the address splits into tag (t bits), index (k bits), and offset (b bits); the index selects one of 2^k lines, the stored tag is compared with the address tag, and a match (with the valid bit set) signals a HIT and selects the data word or byte.
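To make the address split concrete, here is a minimal sketch (not the lecture's code) of a direct-mapped lookup in C; the field widths, sizes, and structure names are assumptions chosen for illustration.

    #include <stdint.h>
    #include <stdbool.h>

    #define OFFSET_BITS 5                    /* b: 32-byte lines (assumed) */
    #define INDEX_BITS  8                    /* k: 2^8 = 256 lines (assumed) */
    #define NUM_LINES   (1u << INDEX_BITS)
    #define LINE_BYTES  (1u << OFFSET_BITS)

    typedef struct {
        bool     valid;
        uint32_t tag;
        uint8_t  data[LINE_BYTES];
    } cache_line_t;

    static cache_line_t cache[NUM_LINES];

    /* Returns true on a hit and copies the requested byte into *byte. */
    bool dm_lookup(uint32_t addr, uint8_t *byte)
    {
        uint32_t offset = addr & (LINE_BYTES - 1);
        uint32_t index  = (addr >> OFFSET_BITS) & (NUM_LINES - 1);
        uint32_t tag    = addr >> (OFFSET_BITS + INDEX_BITS);

        cache_line_t *line = &cache[index];
        if (line->valid && line->tag == tag) {   /* tag match => HIT */
            *byte = line->data[offset];
            return true;
        }
        return false;                            /* miss: refill handled elsewhere */
    }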

2-Way Set-Associative Cache. Figure: the index selects a set of two lines; both stored tags are compared in parallel with the address tag, and a hit steers the matching way's data word or byte to the output.

Fully Associative Cache. Figure: there is no index field, only tag and offset; the address tag is compared in parallel against the tag of every line, and a match on any comparator signals a HIT and selects that line's data word or byte.

Replacement Policy: In an associative cache, which line from a set should be evicted when the set becomes full? Options include: Random; Least-Recently Used (LRU), where the LRU cache state must be updated on every access, a true implementation is only feasible for small sets (2-way), and a pseudo-LRU binary tree is often used for 4-8 ways (see the sketch below); First-In, First-Out (FIFO), a.k.a. Round-Robin, used in highly associative caches; and Not-Most-Recently Used (NMRU), which is FIFO with an exception for the most-recently used line or lines. This is a second-order effect. Why? Replacement only happens on misses.
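As a concrete illustration of the pseudo-LRU binary tree for a 4-way set, here is a hedged C sketch; the three-bit encoding is one common convention, not necessarily the one used by any particular machine.

    #include <stdint.h>

    /* Tree-PLRU state for one 4-way set: 3 bits.
     * bit 0: 1 => ways {2,3} hold the older pair, 0 => ways {0,1} do
     * bit 1: within {0,1}, 1 => way 1 is older
     * bit 2: within {2,3}, 1 => way 3 is older */
    typedef uint8_t plru4_t;

    /* On an access to `way` (0..3), point every tree node away from the
     * way just used, so the tree now leads to a pseudo-LRU victim. */
    void plru4_touch(plru4_t *state, int way)
    {
        if (way < 2) {
            *state |= 1u;                  /* the {2,3} side is now older */
            if (way == 0) *state |=  2u;   /* way 1 older than way 0 */
            else          *state &= ~2u;
        } else {
            *state &= ~1u;                 /* the {0,1} side is now older */
            if (way == 2) *state |=  4u;   /* way 3 older than way 2 */
            else          *state &= ~4u;
        }
    }

    /* Follow the tree bits to pick a victim way on a miss. */
    int plru4_victim(plru4_t state)
    {
        if (state & 1u)
            return (state & 4u) ? 3 : 2;
        return (state & 2u) ? 1 : 0;
    }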

CPU-Cache Interaction (5-stage pipeline). Figure: the primary instruction cache sits in the fetch stage (a miss inserts a bubble and holds the PC via PCen), and the primary data cache is accessed in the memory stage; the entire CPU stalls on a data-cache miss until refill data arrives from lower levels of the memory hierarchy.

Improving Cache Performance. Average memory access time (AMAT) = Hit time + Miss rate x Miss penalty. To improve performance: reduce the hit time, reduce the miss rate, or reduce the miss penalty. What is the best cache design for a 5-stage pipeline? The biggest cache that doesn't increase hit time past 1 cycle (approximately 8-32 KB in modern technology). [Design issues are more complex with deeper pipelines and/or out-of-order superscalar processors.]
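As a quick worked example with assumed, purely illustrative numbers (1-cycle hit time, 5% miss rate, 20-cycle miss penalty):

    AMAT = 1 + 0.05 * 20 = 2 cycles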

Causes of Cache Misses: The 3 C's. Compulsory: the first reference to a line (a.k.a. cold-start misses); these misses would occur even with an infinite cache. Capacity: the cache is too small to hold all the data needed by the program; these misses would occur even under a perfect replacement policy. Conflict: misses that occur because of collisions due to the line-placement strategy; these would not occur with ideal full associativity.

Effect of Cache Parameters on Performance. Larger cache size: reduces capacity and conflict misses, but hit time will increase. Higher associativity: reduces conflict misses, but may increase hit time. Larger line size: reduces compulsory misses, but increases conflict misses and the miss penalty.

Write Policy Choices. On a cache hit: write-through writes both the cache and memory (generally higher traffic, but a simpler pipeline and cache design); write-back writes the cache only, and memory is written only when the entry is evicted (a dirty bit per line further reduces write-back traffic). The cache must handle 0, 1, or 2 accesses to memory for each load/store. On a cache miss: no-write-allocate writes only to main memory; write-allocate (a.k.a. fetch-on-write) fetches the line into the cache. Common combinations: write-through with no-write-allocate, and write-back with write-allocate (sketched below).
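A rough, hedged C sketch of the write-back + write-allocate store path; the line structure and the `writeback_line`/`refill_line` helpers are hypothetical stand-ins for the real memory interface.

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        bool     valid;
        bool     dirty;              /* dirty bit per line reduces write-back traffic */
        uint32_t tag;
        uint8_t  data[32];
    } wb_line_t;

    /* Hypothetical stand-ins for the real memory-system interface. */
    static void writeback_line(const wb_line_t *line) { (void)line; /* write dirty line to memory */ }
    static void refill_line(wb_line_t *line, uint32_t addr, uint32_t tag)
    {
        (void)addr;                  /* would fetch the line from memory */
        line->valid = true;
        line->tag   = tag;
        line->dirty = false;
    }

    /* Store one byte under write-back + write-allocate.
     * Memory accesses per store: 0 (hit), 1 (miss, clean victim), or 2 (miss, dirty victim). */
    void store_byte(wb_line_t *line, uint32_t addr, uint32_t tag, uint32_t offset, uint8_t value)
    {
        if (!(line->valid && line->tag == tag)) {   /* write miss */
            if (line->valid && line->dirty)
                writeback_line(line);               /* 1st memory access */
            refill_line(line, addr, tag);           /* 2nd memory access (allocate) */
        }
        line->data[offset] = value;                 /* write into the cache only */
        line->dirty = true;                         /* memory is updated on eviction */
    }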

Write Performance. Figure: the same tag/index/offset organization as the direct-mapped read path, but with a write enable (WE) on the data array; the tag check must complete before the data array can safely be written.

Reducing Write Hit Time. Problem: writes take two cycles in the memory stage, one cycle for the tag check plus one cycle for the data write if it hits. Solutions: design a data RAM that can perform a read and a write in one cycle, or pipeline the writes: hold the write data for a store in a single buffer ahead of the cache, and write the cache data during the next store's tag check.

Pipelining Cache Writes. Figure: the address and store data from the CPU are latched into delayed-write address and data registers; data from a store hit is written into the data portion of the cache during the tag access of the subsequent store, and loads select between the cache data and the delayed write data when the addresses match.

Write Buffer to Reduce Read Miss Penalty. Figure: a write buffer sits between the data cache and the unified L2 cache; it holds evicted dirty lines for a write-back cache, or all writes for a write-through cache. The processor is not stalled on writes, and read misses can go ahead of writes to main memory. Problem: the write buffer may hold the updated value of a location needed by a read miss. Simple solution: on a read miss, wait for the write buffer to drain. Faster solution: check the write-buffer addresses against the read-miss address; if there is no match, allow the read miss to go ahead of the writes, otherwise return the value in the write buffer (see the sketch below).
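A hedged C sketch of that faster solution: on a read miss, search the write-buffer entries by address and forward the buffered value on a match. The buffer depth and names are illustrative assumptions.

    #include <stdint.h>
    #include <stdbool.h>

    #define WB_ENTRIES 8                     /* assumed write-buffer depth */

    typedef struct {
        bool     valid;
        uint32_t addr;                       /* buffered write address */
        uint32_t data;
    } wb_entry_t;

    static wb_entry_t write_buffer[WB_ENTRIES];

    /* On a read miss: if the write buffer holds the needed address, return its
     * value; otherwise the miss may safely go ahead of the buffered writes. */
    bool read_miss_check(uint32_t addr, uint32_t *value)
    {
        for (int i = 0; i < WB_ENTRIES; i++) {
            if (write_buffer[i].valid && write_buffer[i].addr == addr) {
                *value = write_buffer[i].data;   /* forward from the write buffer */
                return true;
            }
        }
        return false;                            /* no match: bypass the writes */
    }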

CS152 Administrivia: PS 2 and Lab 2 are out; get started!

Multilevel Caches. Problem: a memory cannot be both large and fast. Solution: increase the size of the cache at each level (CPU -> L1$ -> L2$ -> DRAM). Local miss rate = misses in this cache / accesses to this cache. Global miss rate = misses in this cache / CPU memory accesses. Misses per instruction = misses in this cache / number of instructions. A small worked example follows.
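To see how local and global miss rates combine, here is a small hedged C example with made-up numbers (5% L1 local miss rate, 20% L2 local miss rate) and assumed latencies; none of these figures come from the lecture.

    #include <stdio.h>

    int main(void)
    {
        /* Illustrative, assumed parameters. */
        double l1_hit   = 1.0;     /* cycles */
        double l2_hit   = 10.0;    /* cycles, paid by L1 misses */
        double mem      = 100.0;   /* cycles, paid by L2 misses */
        double l1_local = 0.05;    /* L1 misses / L1 accesses */
        double l2_local = 0.20;    /* L2 misses / L2 accesses  */

        /* Global L2 miss rate = fraction of all CPU accesses that miss both levels. */
        double l2_global = l1_local * l2_local;

        double amat = l1_hit + l1_local * (l2_hit + l2_local * mem);
        printf("L2 global miss rate = %.3f\n", l2_global);   /* 0.010 */
        printf("AMAT = %.2f cycles\n", amat);                /* 1 + 0.05*(10 + 0.2*100) = 2.50 */
        return 0;
    }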

Presence of L2 influences L1 design. Use a smaller L1 if there is also an L2: trade an increased L1 miss rate for a reduced L1 hit time; the backup L2 reduces the L1 miss penalty, and this reduces average access energy. Use a simpler write-through L1 with an on-chip L2: the write-back L2 cache absorbs the write traffic, so it doesn't go off-chip; at most one L1 memory request per L1 access simplifies pipeline control; it simplifies coherence issues; and it simplifies error recovery in L1 (use error-detection parity bits in L1 and reload from L2 when a parity error is detected on an L1 read).

Itanium-2 On-Chip Caches (Intel/HP, 2002). Level 1: 16 KB, 4-way set-associative, 64 B line, quad-port (2 load + 2 store), single-cycle latency. Level 2: 256 KB, 4-way set-associative, 128 B line, quad-port (4 load or 4 store), five-cycle latency. Level 3: 3 MB, 12-way set-associative, 128 B line, single 32 B port, twelve-cycle latency.

Power 7 On-Chip Caches [IBM 2009]. 32 KB L1 I$/core and 32 KB L1 D$/core, 3-cycle latency. 256 KB unified L2$/core, 8-cycle latency. 32 MB unified shared L3$ in embedded DRAM (eDRAM), 25-cycle latency to the local slice.

IBM z196 Mainframe Caches (2010). 96 cores (4 cores/chip, 24 chips/system), out-of-order, 3-way superscalar @ 5.2 GHz. L1: 64 KB I-$/core + 128 KB D-$/core. L2: 1.5 MB private/core (144 MB total). L3: 24 MB shared/chip in eDRAM (576 MB total). L4: 768 MB shared/system in eDRAM.

Prefetching: speculate on future instruction and data accesses and fetch them into the cache(s). Instruction accesses are easier to predict than data accesses. Varieties of prefetching: hardware prefetching, software prefetching, and mixed schemes. What types of misses does prefetching affect?

Issues in Prefetching. Usefulness: the prefetches should produce hits. Timeliness: not too late and not too early. Cache and bandwidth pollution. Figure: prefetched data flows from the unified L2 cache into the L1 instruction and L1 data caches alongside demand traffic.

Hardware Instruction Prefetching. Instruction prefetch in the Alpha AXP 21064: fetch two lines on a miss, the requested line (i) and the next consecutive line (i+1); the requested line is placed in the cache, and the next line in an instruction stream buffer. If an access misses in the cache but hits in the stream buffer, move the stream-buffer line into the cache and prefetch the next line (i+2). Figure: the stream buffer sits between the unified L2 cache and the L1 instruction cache, holding the prefetched instruction line while the requested line goes to the cache.

Hardware Data Prefetching. Prefetch-on-miss: prefetch block b+1 upon a miss on block b. One-Block Lookahead (OBL) scheme (HP PA7200): initiate a prefetch for block b+1 when block b is accessed. Why is this different from doubling the block size? The scheme can be extended to N-block lookahead. Strided prefetch: if a sequence of accesses to lines b, b+n, b+2n is observed, then prefetch b+3n, and so on. Example: the IBM Power 5 [2003] supports eight independent streams of strided prefetch per processor, prefetching 12 lines ahead of the current access. A sketch of stride detection follows.
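A minimal sketch, in the spirit of (but not copied from) such hardware: track the last address and stride for a stream, and once the same stride is seen twice in a row, issue a prefetch one stride ahead. The structure and the `issue_prefetch` hook are assumptions.

    #include <stdint.h>

    typedef struct {
        uint64_t last_addr;     /* last demand address seen in this stream */
        int64_t  stride;        /* last observed stride */
    } stream_t;

    /* Hypothetical hook that would hand the prefetch address to the memory system. */
    static void issue_prefetch(uint64_t addr) { (void)addr; }

    /* Called on every demand access belonging to this stream: after accesses to
     * b, b+n, b+2n the stride n has repeated, so prefetch b+3n. */
    void stride_prefetch(stream_t *s, uint64_t addr)
    {
        int64_t d = (int64_t)(addr - s->last_addr);
        if (d != 0 && d == s->stride)
            issue_prefetch(addr + (uint64_t)s->stride);
        s->stride    = d;
        s->last_addr = addr;
    }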

Software Prefetching:

    for (i = 0; i < N; i++) {
        prefetch(&a[i + 1]);
        prefetch(&b[i + 1]);
        SUM = SUM + a[i] * b[i];
    }

The cache must be non-blocking (lockup-free): the processor can proceed while the prefetched data is being fetched, and the caches continue to supply instructions and data while waiting for the prefetched data to return.
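On compilers that support it (e.g., GCC and Clang), the slide's `prefetch()` calls could be written with the `__builtin_prefetch` intrinsic; this is a hedged rewrite of the slide's loop, not code from the lecture.

    /* Dot product with software prefetching one iteration ahead. */
    double dot_with_prefetch(const double *a, const double *b, int n)
    {
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            if (i + 1 < n) {
                /* (address, rw = 0 for read, temporal-locality hint) */
                __builtin_prefetch(&a[i + 1], 0, 3);
                __builtin_prefetch(&b[i + 1], 0, 3);
            }
            sum += a[i] * b[i];
        }
        return sum;
    }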

Software Prefetching Issues. Timing is the biggest issue, not predictability: if you prefetch very close to when the data is required, you might be too late; prefetch too early, and you cause pollution. Estimate how long it will take for the data to come into L1, so that P can be set appropriately. Why is this hard to do?

    for (i = 0; i < N; i++) {
        prefetch(&a[i + P]);
        prefetch(&b[i + P]);
        SUM = SUM + a[i] * b[i];
    }

Must also consider the cost of the prefetch instructions themselves.

Compiler Optimizations for Improved Caching. Restructuring code affects the data access sequence: group data accesses together to improve spatial locality, and re-order data accesses to improve temporal locality. Prevent data from entering the cache: useful for variables that will only be accessed once before being replaced; this needs a mechanism for software to tell hardware not to cache data ("no-allocate" instruction hints or page-table bits).

Loop Interchange:

    for (j = 0; j < N; j++) {
        for (i = 0; i < M; i++) {
            x[i][j] = 2 * x[i][j];
        }
    }

becomes

    for (i = 0; i < M; i++) {
        for (j = 0; j < N; j++) {
            x[i][j] = 2 * x[i][j];
        }
    }

What type of locality does this improve?

Loop Fusion:

    for (i = 0; i < N; i++)
        a[i] = b[i] * c[i];
    for (i = 0; i < N; i++)
        d[i] = a[i] * c[i];

becomes

    for (i = 0; i < N; i++) {
        a[i] = b[i] * c[i];
        d[i] = a[i] * c[i];
    }

What type of locality does this improve?

Matrix Multiply, Naïve Code (X = Y * Z):

    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++) {
            r = 0;
            for (k = 0; k < N; k++)
                r = r + y[i][k] * z[k][j];
            x[i][j] = r;
        }

Figure: access patterns over x, y, and z, distinguishing untouched elements, old accesses, and new accesses.

Matrix Multiply with Cache Tiling:

    for (jj = 0; jj < N; jj = jj + b)
        for (kk = 0; kk < N; kk = kk + b)
            for (i = 0; i < N; i++)
                for (j = jj; j < min(jj + b, N); j++) {
                    r = 0;
                    for (k = kk; k < min(kk + b, N); k++)
                        r = r + y[i][k] * z[k][j];
                    x[i][j] = x[i][j] + r;
                }

Figure: access patterns over x, y, and z for the tiled loops. What type of locality does this improve?

Acknowledgements. These slides contain material developed and copyrighted by: Arvind (MIT), Krste Asanovic (MIT/UCB), Joel Emer (Intel/MIT), James Hoe (CMU), John Kubiatowicz (UCB), and David Patterson (UCB). MIT material derived from course 6.823; UCB material derived from course CS252.