Cache Memories
Topics: generic cache memory organization, direct-mapped caches, set associative caches, impact of caches on performance.
Next time: dynamic memory allocation and memory bugs.
Fabián E. Bustamante, 2007

Cache memories
Cache memories are small, fast SRAM-based memories managed automatically in hardware. They hold frequently accessed blocks of main memory. The CPU looks first for data in L1, then in L2, and so on, and finally in main memory.
[Figure: typical bus structure. The CPU chip contains the register file, ALU, and L1 cache; a cache bus connects to the L2 cache; the system bus runs through the bus interface and I/O bridge to the memory bus and main memory.]

Inserting an L1 cache
The transfer unit between the CPU register file and the cache is a 4-byte block. The transfer unit between the cache and main memory is a 4-word block (16 bytes).
The tiny, very fast CPU register file has room for four 4-byte words. The small, fast L1 cache has room for two 4-word blocks (line 0 and line 1). The big, slow main memory has room for many 4-word blocks (e.g., block 10: a b c d ..., block 21: p q r s ..., block 30: w x y z ...).

General organization of a cache memory
Each memory address is m bits. Each cache line has 1 valid bit and t tag bits, and holds a block of B = 2^b bytes of data. The cache is an array of S = 2^s sets; each set contains E lines (one or more), numbered set 0 through set S-1.
Cache size: C = S x E x B data bytes.

Addressing caches
Address A (m bits) is divided into three fields, from bit m-1 down to bit 0: t tag bits, s set-index bits, and b block-offset bits, i.e., <tag> <set index> <block offset>.
The word at address A is in the cache if the tag bits in one of the valid lines in set <set index> match <tag>. The word contents begin at offset <block offset> bytes from the beginning of the block.
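To make the address fields concrete, here is a minimal C sketch added to this transcription (not part of the original slides) that splits an address into tag, set index, and block offset. The geometry constants match the simulation a few slides below (S = 4 sets, B = 2 bytes/block, m = 4 address bits); all names here are assumptions made for illustration.

#include <stdio.h>
#include <stdint.h>

/* Assumed geometry: S = 2^s sets, B = 2^b bytes/block, m-bit addresses. */
enum { B_BITS = 1, S_BITS = 2, M_BITS = 4 };   /* b, s, m        */
enum { B = 1 << B_BITS, S = 1 << S_BITS };     /* B = 2, S = 4   */

/* Split address a into <tag> <set index> <block offset>. */
static void split_address(uint32_t a, uint32_t *tag, uint32_t *set, uint32_t *off)
{
    *off = a & (B - 1);                 /* low b bits                */
    *set = (a >> B_BITS) & (S - 1);     /* next s bits               */
    *tag = a >> (B_BITS + S_BITS);      /* remaining t = m-s-b bits  */
}

int main(void)
{
    uint32_t tag, set, off;
    split_address(13, &tag, &set, &off);                  /* 13 = 1101 in binary */
    printf("tag=%u set=%u offset=%u\n", tag, set, off);   /* tag=1 set=2 offset=1 */
    return 0;
}

Running it on address 13 (1101 in binary) gives tag 1, set 2, offset 1, which matches the table in the direct-mapped simulation slide below.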

Direct-mapped cache
The simplest kind of cache, characterized by exactly one line per set (E = 1): set 0, set 1, ..., set S-1 each hold a single cache block.

Accessing direct-mapped caches
Set selection: use the s set-index bits of the address to determine the set of interest; that set's single line is the selected one.

Accessing direct-mapped caches
Line matching and word selection. Line matching: find a valid line in the selected set with a matching tag. Word selection: then extract the word.
(1) The valid bit must be set. (2) The tag bits in the cache line must match the tag bits in the address. (3) If (1) and (2) hold, it is a cache hit, and the block offset selects the starting byte of the word.
[Figure: the selected set (i) holds one valid line; the address tag is compared against the line's tag, and the block offset selects the starting byte of the requested word w0 within the block of bytes 0-7.]

Direct-mapped cache simulation
Parameters: t=1, s=2, b=1; M=16 byte addresses (4 address bits), B=2 bytes/block, S=4 sets, E=1 entry/set. Address format: x | xx | x (tag | index | offset). You can tell apart blocks that share a set by the tag.

Address  Tag  Index  Offset  Block #
   0      0    00      0       0
   1      0    00      1       0
   2      0    01      0       1
   3      0    01      1       1
   4      0    10      0       2
   5      0    10      1       2
   6      0    11      0       3
   7      0    11      1       3
   8      1    00      0       4
   9      1    00      1       4
  10      1    01      0       5
  11      1    01      1       5
  12      1    10      0       6
  13      1    10      1       6
  14      1    11      0       7
  15      1    11      1       7

Tag + index uniquely identifies each block. Multiple blocks map to the same cache set (blocks 0 & 4 to set 0, blocks 1 & 5 to set 1, and so on).

Direct-mapped cache simulation
t=1, s=2, b=1. Address format: x | xx | x (tag | index | offset).
Access 0 [0000 in binary]: miss. Set 0 now holds: v=1, tag=0, m[0] m[1].
Access 1 [0001]: hit.
Access 13 [1101]: miss. Set 2 now holds: v=1, tag=1, m[12] m[13].
Access 8 [1000]: miss. Set 0 now holds: v=1, tag=1, m[8] m[9] (set 2 still holds m[12] m[13]).
Access 0 [0000]: miss, a conflict miss. Set 0 reloads: v=1, tag=0, m[0] m[1].
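The access sequence above can be reproduced with a short, self-contained simulator. This is an added sketch (not from the original slides) using the same parameters (S = 4, E = 1, B = 2) and the same references 0, 1, 13, 8, 0.

#include <stdio.h>

#define SETS  4    /* S = 2^s, s = 2 */
#define BLOCK 2    /* B = 2^b, b = 1 */

static struct { int valid; int tag; } cache[SETS];   /* E = 1 line per set */

/* Return 1 on a hit, 0 on a miss; on a miss, install the referenced block. */
static int access_byte(int addr)
{
    int set = (addr / BLOCK) % SETS;
    int tag = addr / (BLOCK * SETS);

    if (cache[set].valid && cache[set].tag == tag)
        return 1;
    cache[set].valid = 1;     /* fetch the block, evicting whatever was there */
    cache[set].tag   = tag;
    return 0;
}

int main(void)
{
    int trace[] = { 0, 1, 13, 8, 0 };
    for (int i = 0; i < 5; i++)
        printf("addr %2d: %s\n", trace[i], access_byte(trace[i]) ? "hit" : "miss");
    /* Expected output: miss, hit, miss, miss, miss (the final miss is the conflict miss). */
    return 0;
}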

Why use middle bits as index?
Consider a 4-line cache (sets 00, 01, 10, 11) and 4-bit addresses 0000-1111.
High-order bit indexing: adjacent memory lines would map to the same cache entry; poor use of spatial locality.
Middle-order bit indexing: consecutive memory lines map to different cache lines, so the cache can hold a C-byte region of the address space at one time.
[Figure: addresses 0000-1111 colored by the cache set they map to under high-order vs. middle-order bit indexing.]

Set associative caches
In direct-mapped caches, since every set has exactly one line, conflict misses are common. A set associative cache has more than one line per set (1 < E < C/B) and is called E-way set associative. Example: E = 2 lines per set, each of sets 0 through S-1 holding two cache blocks.

Accessing set associative caches
Set selection: identical to a direct-mapped cache; the set-index bits of the address select one set, which now contains E cache blocks.

Accessing set associative caches
Line matching and word selection: the cache must compare the tag in each line of the selected set.
(1) The valid bit must be set. (2) The tag bits in one of the cache lines must match the tag bits in the address. (3) If (1) and (2) hold, it is a cache hit, and the block offset selects the starting byte.
[Figure: the selected set (i) holds two lines; the address tag matches one line's tag, and the block offset selects the starting byte of the requested word within that line's block.]
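As an added illustration of the line-matching step (not from the original slides), the loop below checks each of the E lines in the selected set for a valid line with a matching tag; the struct layout and the block-size constant are assumptions made for this sketch.

#include <stddef.h>

#define B 32   /* assumed block size in bytes */

struct cache_line {
    int           valid;       /* valid bit  */
    unsigned      tag;         /* tag bits   */
    unsigned char data[B];     /* block data */
};

/* Return the matching line in a set of E lines, or NULL on a miss. */
static struct cache_line *match_line(struct cache_line set[], int E, unsigned tag)
{
    for (int i = 0; i < E; i++)
        if (set[i].valid && set[i].tag == tag)   /* (1) valid bit set, (2) tags match */
            return &set[i];
    return NULL;                                 /* no line matched: cache miss */
}

In a direct-mapped cache E is 1, so the loop body runs once; in a fully associative cache (next slide) E is C/B and there is only one set.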

Fully associative caches
A single set contains all the cache lines (E = C/B). Set selection is trivial: there is only one set. Line matching and word selection work the same as in a set associative cache. Pricey to build, so typically used only for small caches such as TLBs.

The issues with writes
So far, all examples have used reads, which are simple: look for a copy of the desired word; if it's a hit, return it; else fetch the block from the next level, cache it, and return the word.
Writes are a bit more complicated. If there's a hit, what do we do after updating the cache copy? Write it to the next level immediately? That is write-through: simple but expensive. Defer the update? That is write-back: write the block only when it is evicted; faster, but more complex (it needs a dirty bit).

The issues with writes
If there's a write miss, do we bring the block into the cache or write through?
Write-allocate: bring the block into the cache and update it; leverages spatial locality, but costs a block transfer per write miss.
No-write-allocate: write through, bypassing the cache. Write-through caches are typically no-write-allocate.
As logic density increases, write-back's complexity is less of an issue and its performance is a plus.
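The policy combinations above can be contrasted with a small self-contained sketch, added here rather than taken from the original slides; the one-line "cache" over a byte array and the flag arguments are purely illustrative.

#include <string.h>

enum { BLK = 4 };                                  /* assumed block size in bytes  */
static unsigned char mem[64];                      /* stands in for the next level */
static struct { int valid, dirty; unsigned tag;
                unsigned char data[BLK]; } line;   /* a single cache line          */

static void write_byte(unsigned a, unsigned char v,
                       int write_through, int write_allocate)
{
    unsigned tag = a / BLK, off = a % BLK;

    if (line.valid && line.tag == tag) {            /* write hit                   */
        line.data[off] = v;
        if (write_through) mem[a] = v;              /* update next level now       */
        else               line.dirty = 1;          /* write-back: defer update    */
    } else if (write_allocate) {                    /* miss, write-allocate        */
        if (line.valid && line.dirty)
            memcpy(&mem[line.tag * BLK], line.data, BLK);  /* flush dirty victim   */
        memcpy(line.data, &mem[tag * BLK], BLK);           /* bring the block in   */
        line.valid = 1; line.tag = tag;
        line.data[off] = v;
        if (write_through) mem[a] = v; else line.dirty = 1;
    } else {                                        /* miss, no-write-allocate     */
        mem[a] = v;                                 /* bypass the cache entirely   */
    }
}

Typical pairings are write_through=1 with write_allocate=0, and write_through=0 (write-back) with write_allocate=1, as noted above.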

Real cache hierarchies: Pentium III Xeon processor chip
L1 data cache: 16 KB, 4-way associative, write-through, 32 B lines, 1-cycle access.
L1 instruction cache: 16 KB, 4-way associative, 32 B lines, 1-cycle access.
L2 unified cache: 128 KB - 2 MB, 4-way associative, write-back, write-allocate, 32 B lines. Below that, main memory.
Caches can hold anything (unified) or be specialized for data vs. instructions (d-cache and i-cache). Why specialize? The processor can read instructions and data at the same time; i-caches are typically read-only, simpler, and see different access patterns; and data and instruction accesses can't create conflicts with each other.

Real cache hierarchies: Core i7 processor chip
Each core has its own L1 and L2 caches; the L3 is shared by all cores, and each level down is larger, slower, and cheaper per byte.
L1 data and L1 instruction caches: 32 KB, 8-way associative, 4-cycle access.
L2 unified cache: 256 KB, 8-way associative, 11-cycle access.
L3 unified cache (shared): 8 MB, 16-way associative, 30-40 cycle access. Below that, main memory.

Cache performance metrics
Miss rate: fraction of memory references not found in the cache. Typical numbers: 3-10% for L1; can be quite small (e.g., < 1%) for L2, depending on size, etc.
Hit time: time to deliver a line in the cache to the processor, including the time to determine whether the line is in the cache. Typical numbers: 1-2 clock cycles for L1, 5-20 clock cycles for L2.
Miss penalty: additional time required because of a miss. Typically 50-200 cycles for main memory (and increasing).

Cache performance metrics
There is a big difference between a hit and a miss: roughly 100x if you only have L1 and main memory. So is a 99% hit rate twice as good as a 97% hit rate? Consider a cache hit time of 1 cycle and a miss penalty of 100 cycles.
Average access time at a 97% hit rate: 0.97 * 1 cycle + 0.03 * (1 + 100 cycles) = 1 cycle + 0.03 * 100 cycles = 4 cycles.
Average access time at a 99% hit rate: 0.99 * 1 cycle + 0.01 * (1 + 100 cycles) = 1 cycle + 0.01 * 100 cycles = 2 cycles.
So the 99% cache is twice as fast on average, not merely 2% better.
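The calculation above is just the usual average-memory-access-time formula, hit_time + miss_rate * miss_penalty. A tiny helper, added here for illustration rather than taken from the slides, reproduces the two numbers.

#include <stdio.h>

/* Average access time in cycles: hit time plus expected (additional) miss cost. */
static double amat(double hit_time, double miss_rate, double miss_penalty)
{
    return hit_time + miss_rate * miss_penalty;
}

int main(void)
{
    printf("97%% hits: %.0f cycles\n", amat(1.0, 0.03, 100.0));   /* 4 cycles */
    printf("99%% hits: %.0f cycles\n", amat(1.0, 0.01, 100.0));   /* 2 cycles */
    return 0;
}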

Writing cache-friendly code
Programs with better locality will tend to have lower miss rates and run faster. The basic approach to cache-friendly code: make the common case go fast (core loops in core functions) and minimize the number of cache misses in each inner loop; all other things being equal, a better miss rate means a faster run.
Rules of thumb: repeated references to variables are good (temporal locality); stride-1 reference patterns are good (spatial locality).
Example (cold cache, 4-byte words, 4-word cache blocks):

int sumarrayrows(int a[M][N])
{
    int i, j, sum = 0;

    for (i = 0; i < M; i++)
        for (j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}

Miss rate = 1/4 = 25%

int sumarraycols(int a[M][N])
{
    int i, j, sum = 0;

    for (j = 0; j < N; j++)
        for (i = 0; i < M; i++)
            sum += a[i][j];
    return sum;
}

Miss rate = 100%

The memory mountain
Read throughput (read bandwidth): number of bytes read from memory per second (MB/s). The memory mountain is measured read throughput as a function of spatial and temporal locality; it is a compact way to characterize memory system performance.

/* The test function */
void test(int elems, int stride)
{
    int i, result = 0;
    volatile int sink;

    for (i = 0; i < elems; i += stride)
        result += data[i];
    sink = result;   /* So compiler doesn't optimize away the loop */
}

/* Run test(elems, stride) and return read throughput (MB/s) */
double run(int size, int stride, double Mhz)
{
    double cycles;
    int elems = size / sizeof(int);

    test(elems, stride);                      /* warm up the cache         */
    cycles = fcyc2(test, elems, stride, 0);   /* call test(elems, stride)  */
    return (size / stride) / (cycles / Mhz);  /* convert cycles to MB/s    */
}

The memory mountain for Intel Core i7
[Figure: read throughput (MB/s) as a function of working-set size (4 KB to 64 MB) and stride (s1 to s32, in units of 8 bytes), measured on a 2.67 GHz Core i7 with 32 KB L1 d-cache, 256 KB L2 cache, and 8 MB L3 cache. Annotations: ridges of temporal locality (L1, L2, L3, Mem); slopes of spatial locality; a flat line caused by hardware prefetching in the Core i7 (undocumented algorithm); even with poor temporal locality, spatial locality helps; at the smallest sizes there is an artifact of overhead not being amortized.]

Rearranging loops to improve locality
Matrix multiply: multiply two N x N matrices, O(N^3) total operations. Accesses: N reads per source element; N values summed per destination element, but the sum may be held in a register.

/* ijk */
for (i = 0; i < n; i++) {
    for (j = 0; j < n; j++) {
        sum = 0.0;                      /* variable sum held in register */
        for (k = 0; k < n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

Miss rate analysis for matrix multiply
Assume: line size = 32 B (big enough for four 64-bit words); the matrix dimension N is very large, so a single matrix row does not fit in L1; the compiler stores local variables in registers.
Analysis method: look at the access pattern of the inner loop over A, B, and C.

Matrix multiplication (ijk)

/* ijk */
for (i = 0; i < n; i++) {
    for (j = 0; j < n; j++) {
        sum = 0.0;
        for (k = 0; k < n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

Inner loop: A accessed row-wise at (i,*), B column-wise at (*,j), C fixed at (i,j).
Per iteration: 2 loads, 0 stores; misses: A 0.25, B 1.0, C 0.0; total 1.25.
Each cache block holds 4 elements (doublewords), but the loop scans B with a stride of n.

Matrix multiplication (jik)

/* jik */
for (j = 0; j < n; j++) {
    for (i = 0; i < n; i++) {
        sum = 0.0;
        for (k = 0; k < n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

Inner loop: A accessed row-wise at (i,*), B column-wise at (*,j), C fixed at (i,j).
Per iteration: 2 loads, 0 stores; misses: A 0.25, B 1.0, C 0.0; total 1.25.

Matrix multiplication (jki)

/* jki */
for (j = 0; j < n; j++) {
    for (k = 0; k < n; k++) {
        r = b[k][j];
        for (i = 0; i < n; i++)
            c[i][j] += a[i][k] * r;
    }
}

Inner loop: A accessed column-wise at (*,k), B fixed at (k,j), C column-wise at (*,j).
Per iteration: 2 loads, 1 store; misses: A 1.0, B 0.0, C 1.0; total 2.0.
The loop scans A and C with a stride of n, so each incurs a miss on every iteration; that plus one more memory operation (the store).

Matrix multiplication (kji)

/* kji */
for (k = 0; k < n; k++) {
    for (j = 0; j < n; j++) {
        r = b[k][j];
        for (i = 0; i < n; i++)
            c[i][j] += a[i][k] * r;
    }
}

Inner loop: A accessed column-wise at (*,k), B fixed at (k,j), C column-wise at (*,j).
Per iteration: 2 loads, 1 store; misses: A 1.0, B 0.0, C 1.0; total 2.0.

Matrix multiplication (kij)

/* kij */
for (k = 0; k < n; k++) {
    for (i = 0; i < n; i++) {
        r = a[i][k];
        for (j = 0; j < n; j++)
            c[i][j] += r * b[k][j];
    }
}

Inner loop: A fixed at (i,k), B accessed row-wise at (k,*), C row-wise at (i,*).
Per iteration: 2 loads, 1 store; misses: A 0.0, B 0.25, C 0.25; total 0.5.
An interesting trade-off: one more memory operation in exchange for far fewer misses.

Matrix multiplication (ikj)

/* ikj */
for (i = 0; i < n; i++) {
    for (k = 0; k < n; k++) {
        r = a[i][k];
        for (j = 0; j < n; j++)
            c[i][j] += r * b[k][j];
    }
}

Inner loop: A fixed at (i,k), B accessed row-wise at (k,*), C row-wise at (i,*).
Per iteration: 2 loads, 1 store; misses: A 0.0, B 0.25, C 0.25; total 0.5.

Summary of matrix multiplication

Class            Loads  Stores  A misses  B misses  C misses  Total misses
ijk & jik (AB)     2      0       0.25      1.0       0.0        1.25
jki & kji (AC)     2      1       1.0       0.0       1.0        2.0
kij & ikj (BC)     2      1       0.0       0.25      0.25       0.5

Core i7 matrix multiply performance
[Figure: cycles per inner-loop iteration vs. array size n (from 50 to 750), one curve per loop ordering (jki, kji, ijk, jik, kij, ikj). The jki & kji (AC) versions are slowest, ijk & jik (AB) sit in the middle, and kij & ikj (BC) are fastest.]
Miss rates are helpful but not perfect predictors; code scheduling matters, too.

Concluding observations
The programmer can optimize for cache performance: how data structures are organized, how data are accessed, and the nested loop structure. You can try to help with blocking (a sketch follows below), but that's generally better left to libraries and compilers.
All systems favor cache-friendly code. Getting the absolute optimum performance is very platform-specific (cache sizes, line sizes, associativities, etc.), but you can get most of the advantage with generic code: keep the working set reasonably small (temporal locality) and use small strides (spatial locality).
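For reference, here is a minimal blocked (tiled) version of the ijk multiply, added to this transcription as a sketch rather than taken from the lecture; BSIZE is an assumed tuning parameter, chosen so that a few tiles fit in the cache at once.

#define BSIZE 32   /* assumed tile edge; tune so the working tiles fit in cache */

/* c += a * b for n x n matrices, processed tile by tile to reuse cached data. */
void bmm(int n, const double a[n][n], const double b[n][n], double c[n][n])
{
    for (int jj = 0; jj < n; jj += BSIZE)
        for (int kk = 0; kk < n; kk += BSIZE)
            for (int i = 0; i < n; i++)
                for (int j = jj; j < jj + BSIZE && j < n; j++) {
                    double sum = c[i][j];
                    for (int k = kk; k < kk + BSIZE && k < n; k++)
                        sum += a[i][k] * b[k][j];
                    c[i][j] = sum;
                }
}

Each BSIZE x BSIZE tile of B and the corresponding strip of C stay resident while they are reused, so the inner loop takes its misses roughly once per tile rather than once per iteration.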