Denison University. Cache Memories. CS-281: Introduction to Computer Systems. Instructor: Thomas C. Bressoud

Random-Access Memory (RAM)
Key features: RAM is traditionally packaged as a chip. The basic storage unit is a cell (one bit per cell). Multiple RAM chips form a memory.
Static RAM (SRAM): each cell stores a bit with a four- or six-transistor circuit. Retains its value indefinitely, as long as it is kept powered. Relatively insensitive to electrical noise (EMI), radiation, etc. Faster and more expensive than DRAM.
Dynamic RAM (DRAM): each cell stores a bit with a capacitor; one transistor is used for access. The value must be refreshed every 10-100 ms. More sensitive to disturbances (EMI, radiation, etc.) than SRAM. Slower and cheaper than SRAM.

SRAM vs DRAM Summary

            Trans.    Access   Needs     Needs
            per bit   time     refresh?  EDC?    Cost   Applications
    SRAM    4 or 6    1x       No        Maybe   100x   Cache memories
    DRAM    1         10x      Yes       Yes     1x     Main memories, frame buffers

Memory Modules
A 64 MB memory module consists of eight 8M x 8 DRAMs (DRAM 0 through DRAM 7). To read the 64-bit doubleword at main memory address A, the memory controller sends the address (row = i, col = j) to all eight chips; each chip supplies one supercell (i, j), contributing 8 of the 64 bits (DRAM 0 supplies bits 0-7, DRAM 1 bits 8-15, ..., DRAM 7 bits 56-63).

Traditional Bus Structure Connecting CPU and Memory
A bus is a collection of parallel wires that carry address, data, and control signals. Buses are typically shared by multiple devices. The CPU chip (register file and ALU) connects through its bus interface and the system bus to an I/O bridge, which connects through the memory bus to main memory. Loads, stores, and instruction fetches all travel this path.

I/O Bus (DMA)
The I/O bridge also connects to an I/O bus shared by a USB controller (mouse, keyboard), a graphics adapter (monitor), a disk controller (disk), and expansion slots for other devices such as network adapters.

The CPU-Memory Gap
[Figure: access time in ns (log scale) versus year, 1980-2010, showing the widening gap between disk seek time, flash SSD access time, DRAM access time, SRAM access time, CPU cycle time, and effective CPU cycle time.]

Locality
Principle of Locality: programs tend to use data and instructions with addresses near or equal to those they have used recently.
Temporal locality: recently referenced items are likely to be referenced again in the near future.
Spatial locality: items with nearby addresses tend to be referenced close together in time.

Locality Example

    sum = 0;
    for (i = 0; i < n; i++)
        sum += a[i];
    return sum;

Data references: array elements are referenced in succession (stride-1 reference pattern): spatial locality. The variable sum is referenced on each iteration: temporal locality.
Instruction references: instructions are referenced in sequence: spatial locality. The loop body is cycled through repeatedly: temporal locality.

Caches
Cache: a smaller, faster storage device that acts as a staging area for a subset of the data in a larger, slower device.
Fundamental idea of a memory hierarchy: for each k, the faster, smaller device at level k serves as a cache for the larger, slower device at level k+1.
Why do memory hierarchies work? Because of locality, programs tend to access the data at level k more often than they access the data at level k+1. Thus, the storage at level k+1 can be slower, and therefore larger and cheaper per bit.
Big idea: the memory hierarchy creates a large pool of storage that costs about as little as the cheap storage near the bottom, but that serves data to programs at the rate of the fast storage near the top.

Cache Memories
Cache memories are small, fast SRAM-based memories managed automatically in hardware. They hold frequently accessed blocks of main memory. The CPU looks first for data in the caches (e.g., L1, L2, and L3), then in main memory.
Typical system structure: the cache memories sit on the CPU chip alongside the register file and ALU; the bus interface connects through the system bus, I/O bridge, and memory bus to main memory.

General Cache Concepts
The cache is a smaller, faster, more expensive memory that holds copies of a subset of the blocks. Memory is a larger, slower, cheaper storage viewed as partitioned into blocks (numbered 0-15 in the figure). Data is copied between memory and cache in block-sized transfer units.

General Cache Concepts: Hit
Request: 14. The data in block 14 is needed. Block 14 is in the cache: hit!

General Cache Concepts: Miss
Request: 12. The data in block 12 is needed, but block 12 is not in the cache: miss! Block 12 is fetched from memory and stored in the cache. The placement policy determines where block 12 goes; the replacement policy determines which block gets evicted (the victim).

General Caching Concepts: Types of Cache Misses
Cold (compulsory) miss: cold misses occur because the cache is empty.
Conflict miss: most caches limit blocks at level k+1 to a small subset (sometimes a singleton) of the block positions at level k; e.g., block i at level k+1 must be placed in block (i mod 4) at level k. Conflict misses occur when the level-k cache is large enough, but multiple data objects all map to the same level-k block. E.g., referencing blocks 0, 8, 0, 8, 0, 8, ... would miss every time, as the sketch below shows.
Capacity miss: occurs when the set of active cache blocks (the working set) is larger than the cache.
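
The 0, 8, 0, 8, ... pattern is easy to check in a few lines of C. This is a minimal sketch of the (i mod 4) placement rule above, written for this text rather than taken from the lecture:

    #include <stdio.h>

    int main(void) {
        int trace[] = {0, 8, 0, 8, 0, 8};      /* alternating block references */
        int resident[4] = {-1, -1, -1, -1};    /* block currently held by each set */
        for (int i = 0; i < 6; i++) {
            int block = trace[i];
            int set = block % 4;               /* direct-mapped placement: i mod 4 */
            printf("block %d -> set %d: %s\n", block, set,
                   resident[set] == block ? "hit" : "miss");
            resident[set] = block;             /* new block evicts the old one */
        }
        return 0;
    }

Blocks 0 and 8 both map to set 0, so each reference evicts the other and every access misses, even though three of the four sets stay empty.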

General Cache Organization (S, E, B)
A cache is an array of S = 2^s sets. Each set contains E = 2^e lines. Each line holds a valid bit, a tag, and a cache block of B = 2^b data bytes (the data).
Cache size: C = S x E x B data bytes.
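
As a small illustration (our own sketch; the struct and names are not from the lecture), the geometry can be captured in C and the size formula checked:

    #include <stdio.h>

    /* Cache geometry: S = 2^s sets, E lines per set, B = 2^b bytes per block. */
    struct cache_geometry {
        unsigned s;   /* set-index bits */
        unsigned e;   /* lines (ways) per set */
        unsigned b;   /* block-offset bits */
    };

    /* Total data capacity: C = S x E x B bytes. */
    unsigned long cache_size(struct cache_geometry g) {
        return (1UL << g.s) * g.e * (1UL << g.b);
    }

    int main(void) {
        /* 64 sets, 8-way, 64-byte blocks: the Core i7 L1 d-cache geometry */
        struct cache_geometry l1 = { .s = 6, .e = 8, .b = 6 };
        printf("C = %lu bytes\n", cache_size(l1));   /* prints C = 32768 (32 KB) */
        return 0;
    }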

Cache Read
To read a word, the cache: (1) locates the set, (2) checks whether any line in the set has a matching tag, (3) declares a hit if a line matches and is valid, and (4) locates the data starting at the block offset.
Address of word: t tag bits, then s set-index bits, then b block-offset bits. The set index selects one of the S = 2^s sets; the tag is compared against the tags of the E = 2^e lines in that set; the block offset says where the data begins within the B = 2^b-byte block.
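
The t/s/b split can be expressed directly in C. A sketch (our code, assuming 64-bit addresses), using the example address from the slides that follow:

    #include <stdint.h>
    #include <stdio.h>

    /* Split an address into (tag, set index, block offset) for a cache
       with s set-index bits and b block-offset bits. */
    void split_address(uint64_t addr, unsigned s, unsigned b) {
        uint64_t offset = addr & ((1ULL << b) - 1);        /* low b bits */
        uint64_t set    = (addr >> b) & ((1ULL << s) - 1); /* next s bits */
        uint64_t tag    = addr >> (s + b);                 /* remaining t bits */
        printf("addr 0x%llx -> tag 0x%llx, set %llu, offset %llu\n",
               (unsigned long long)addr, (unsigned long long)tag,
               (unsigned long long)set, (unsigned long long)offset);
    }

    int main(void) {
        split_address(0x0C, 2, 3);  /* binary 0 01 100 -> tag 0, set 1, offset 4 */
        return 0;
    }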

Example: Direct Mapped Cache (E = 1)
Direct mapped: one line per set. Assume a cache block size of 8 bytes.
Address of int: [t tag bits | set index 01 | block offset 100]. Step 1: use the set-index bits to find the set among the S = 2^s sets.

Example: Direct Mapped Cache (E = 1)
Direct mapped: one line per set. Assume a cache block size of 8 bytes.
Step 2: in the selected set, check valid? + tag match (assume yes = hit). The block offset (100) locates the requested bytes within the block.

Example: Direct Mapped Cache (E = 1)
Direct mapped: one line per set. Assume a cache block size of 8 bytes.
Step 3: on a hit, the int (4 bytes) is read starting at block offset 4. If there is no match, the old line is evicted and replaced.

Direct-Mapped Cache Simulation
M = 16 byte addresses (t=1, s=2, b=1), B = 2 bytes/block, S = 4 sets, E = 1 block/set.
Address trace (reads, one byte per read, addresses in binary):
    0 [0000], miss
    1 [0001], hit
    7 [0111], miss
    8 [1000], miss
    0 [0000], miss
Final state:
    Set 0: v=1, Tag=0, Block=M[0-1]   (M[8-9] was evicted by the last read of 0)
    Set 1: empty
    Set 2: empty
    Set 3: v=1, Tag=0, Block=M[6-7]
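
The whole simulation fits in a few lines of C. This sketch (ours, not from the lecture) reproduces the trace above for the same geometry:

    #include <stdio.h>

    int main(void) {
        int valid[4] = {0}, tag[4] = {0};      /* S = 4 sets, E = 1 line each */
        int trace[] = {0, 1, 7, 8, 0};
        for (int i = 0; i < 5; i++) {
            int addr = trace[i];
            int set = (addr >> 1) & 0x3;       /* bits [2:1]: set index */
            int t   = addr >> 3;               /* bit  [3]:   tag */
            if (valid[set] && tag[set] == t)
                printf("%d: hit\n", addr);
            else {
                printf("%d: miss\n", addr);    /* fetch block, install line */
                valid[set] = 1;
                tag[set] = t;
            }
        }
        return 0;
    }

It prints miss, hit, miss, miss, miss: reading 8 evicts M[0-1] from set 0, so the final read of 0 misses again, exactly the conflict pattern discussed earlier.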

A Higher Level Example
Assume a cold (empty) cache whose 32-byte blocks each hold 4 doubles; a[0][0] maps to the start of a block. Ignore the variables sum, i, j.

    double sum_array_rows(double a[16][16]) {
        int i, j;
        double sum = 0;
        for (i = 0; i < 16; i++)
            for (j = 0; j < 16; j++)
                sum += a[i][j];
        return sum;
    }

    double sum_array_cols(double a[16][16]) {
        int i, j;
        double sum = 0;
        for (j = 0; j < 16; j++)
            for (i = 0; i < 16; i++)
                sum += a[i][j];
        return sum;
    }

(miss patterns worked on the blackboard)

E-way Set Associative Cache (Here: E = 2)
E = 2: two lines per set. Assume a cache block size of 8 bytes.
Address of short int: [t tag bits | set index 01 | block offset 100]. Step 1: use the set-index bits to find the set; both of its lines are candidates.

E-way Set Associative Cache (Here: E = 2)
E = 2: two lines per set. Assume a cache block size of 8 bytes.
Step 2: compare both lines in the set: valid? + tag match (yes = hit). The block offset locates the data.

E-way Set Associative Cache (Here: E = 2)
E = 2: two lines per set. Assume a cache block size of 8 bytes.
Step 3: on a hit, the short int (2 bytes) is read at the block offset. If no line matches, one line in the set is selected for eviction and replacement. Replacement policies: random, least recently used (LRU), ...

2-Way Set Associative Cache Simulation
M = 16 byte addresses (t=2, s=1, b=1), B = 2 bytes/block, S = 2 sets, E = 2 blocks/set.
Address trace (reads, one byte per read, addresses in binary):
    0 [0000], miss
    1 [0001], hit
    7 [0111], miss
    8 [1000], miss
    0 [0000], hit
Final state:
    Set 0: {v=1, Tag=00, Block=M[0-1]}, {v=1, Tag=10, Block=M[8-9]}
    Set 1: {v=1, Tag=01, Block=M[6-7]}, {v=0, empty}
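
Extending the direct-mapped sketch to E = 2 with LRU replacement reproduces this trace as well (again our own sketch, not lecture code):

    #include <stdio.h>

    int main(void) {
        int valid[2][2] = {{0}}, tag[2][2] = {{0}};
        int lru[2] = {0};                       /* which way to evict next, per set */
        int trace[] = {0, 1, 7, 8, 0};
        for (int i = 0; i < 5; i++) {
            int addr = trace[i];
            int set = (addr >> 1) & 0x1;        /* bit  [1]:   set index */
            int t   = addr >> 2;                /* bits [3:2]: tag */
            int hit = -1;
            for (int w = 0; w < 2; w++)         /* compare both lines in the set */
                if (valid[set][w] && tag[set][w] == t)
                    hit = w;
            if (hit >= 0) {
                printf("%d: hit\n", addr);
                lru[set] = 1 - hit;             /* the other way becomes the victim */
            } else {
                printf("%d: miss\n", addr);
                int w = lru[set];               /* evict the LRU way */
                valid[set][w] = 1;
                tag[set][w] = t;
                lru[set] = 1 - w;
            }
        }
        return 0;
    }

Now the final read of 0 hits: with two lines per set, blocks M[0-1] and M[8-9] coexist in set 0, eliminating the conflict misses of the direct-mapped version.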

A Higher Level Example
The same two functions, now analyzed for the set-associative cache. Assume a cold (empty) cache whose 32-byte blocks each hold 4 doubles; a[0][0] maps to the start of a block. Ignore the variables sum, i, j.

    double sum_array_rows(double a[16][16]) {
        int i, j;
        double sum = 0;
        for (i = 0; i < 16; i++)
            for (j = 0; j < 16; j++)
                sum += a[i][j];
        return sum;
    }

    double sum_array_cols(double a[16][16]) {
        int i, j;
        double sum = 0;
        for (j = 0; j < 16; j++)
            for (i = 0; i < 16; i++)
                sum += a[i][j];
        return sum;
    }

(miss patterns worked on the blackboard)

What about writes?
Multiple copies of the data exist: L1, L2, main memory, disk.
What to do on a write-hit?
- Write-through: write immediately to memory.
- Write-back: defer the write to memory until the line is replaced. Needs a dirty bit (is the line different from memory or not?).
What to do on a write-miss?
- Write-allocate: load the block into the cache and update the line in the cache. Good if more writes to the location follow.
- No-write-allocate: write immediately to memory.
Typical combinations: write-through + no-write-allocate, or write-back + write-allocate. A runnable sketch of the write-back + write-allocate case follows.
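
To make the write-back + write-allocate combination concrete, here is a minimal runnable model (ours; the one-line cache, the tiny memory, and all names are invented for illustration):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* One cache line over a tiny 32-byte "memory"; block size 8 bytes. */
    #define BLOCK 8
    uint8_t memory[32];
    struct { int valid, dirty; uint32_t tag; uint8_t data[BLOCK]; } line;

    void write_byte(uint32_t addr, uint8_t value) {
        uint32_t tag = addr / BLOCK;
        if (!line.valid || line.tag != tag) {                    /* write miss */
            if (line.valid && line.dirty)                        /* dirty victim: */
                memcpy(&memory[line.tag * BLOCK], line.data, BLOCK); /* write back */
            memcpy(line.data, &memory[tag * BLOCK], BLOCK);      /* write-allocate */
            line.valid = 1; line.tag = tag; line.dirty = 0;
        }
        line.data[addr % BLOCK] = value;  /* update only the cached copy */
        line.dirty = 1;                   /* memory is updated at eviction time */
    }

    int main(void) {
        write_byte(3, 0xAA);    /* miss: allocate block 0 */
        write_byte(4, 0xBB);    /* hit: same block, no memory traffic */
        write_byte(12, 0xCC);   /* miss: dirty block 0 is written back first */
        printf("memory[3] = 0x%X\n", memory[3]);  /* 0xAA, flushed on eviction */
        return 0;
    }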

Intel Core i7 Cache Hierarchy
Processor package with four cores (Core 0 through Core 3), each with its own registers.
L1 i-cache and d-cache: 32 KB, 8-way, access: 4 cycles.
L2 unified cache: 256 KB, 8-way, access: 11 cycles.
L3 unified cache: 8 MB, 16-way, access: 30-40 cycles (shared by all cores).
Block size: 64 bytes for all caches. Below the caches sits main memory.

Cache Performance Metrics
Miss rate: fraction of memory references not found in the cache (misses / accesses) = 1 - hit rate. Typical numbers: 3-10% for L1; can be quite small (e.g., < 1%) for L2, depending on size, etc.
Hit time: time to deliver a line in the cache to the processor, including the time to determine whether the line is in the cache. Typical numbers: 1-2 clock cycles for L1; 5-20 clock cycles for L2.
Miss penalty: additional time required because of a miss; typically 50-200 cycles for main memory (and the trend is increasing!).
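
These three metrics combine into the standard average-memory-access-time formula, which the next slide applies:

    average access time = hit time + miss rate * miss penalty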

Let's think about those numbers
There is a huge difference between a hit and a miss: it could be 100x with just L1 and main memory.
Would you believe 99% hits is twice as good as 97%? Consider a cache hit time of 1 cycle and a miss penalty of 100 cycles. Average access time:
    97% hits: 1 cycle + 0.03 * 100 cycles = 4 cycles
    99% hits: 1 cycle + 0.01 * 100 cycles = 2 cycles
This is why miss rate is used instead of hit rate.

Writing Cache Friendly Code
Make the common case go fast: focus on the inner loops of the core functions.
Minimize the misses in the inner loops: repeated references to variables are good (temporal locality); stride-1 reference patterns are good (spatial locality).
Key idea: our qualitative notion of locality is quantified through our understanding of cache memories.

Today
Cache organization and operation.
Performance impact of caches: the memory mountain; rearranging loops to improve spatial locality; using blocking to improve temporal locality.

The Memory Mountain
Read throughput (read bandwidth): the number of bytes read from memory per second (MB/s).
Memory mountain: measured read throughput as a function of spatial and temporal locality; a compact way to characterize memory system performance.

Memory Mountain Test Function

    /* The test function */
    void test(int elems, int stride) {
        int i, result = 0;
        volatile int sink;
        for (i = 0; i < elems; i += stride)
            result += data[i];
        sink = result; /* So compiler doesn't optimize away the loop */
    }

    /* Run test(elems, stride) and return read throughput (MB/s) */
    double run(int size, int stride, double Mhz) {
        double cycles;
        int elems = size / sizeof(int);
        test(elems, stride);                      /* warm up the cache */
        cycles = fcyc2(test, elems, stride, 0);   /* call test(elems, stride) */
        return (size / stride) / (cycles / Mhz);  /* convert cycles to MB/s */
    }

The Memory Mountain
[Figure: measured read throughput (MB/s, 0-7000) as a function of stride (x8 bytes, s1-s32) and working set size (2 KB-64 MB) for an Intel Core i7 with 32 KB L1 i-cache, 32 KB L1 d-cache, 256 KB unified L2 cache, and 8 MB unified L3 cache, all on-chip. Slopes of spatial locality run along the stride axis; ridges of temporal locality (L1, L2, L3, Mem) run along the working-set axis.]

Today
Cache organization and operation.
Performance impact of caches: the memory mountain; rearranging loops to improve spatial locality; using blocking to improve temporal locality.

Miss Rate Analysis for Matrix Multiply
Assume: line size = 32 B (big enough for four 64-bit words); the matrix dimension N is very large, so approximate 1/N as 0.0; the cache is not even big enough to hold multiple rows.
Analysis method: look at the access pattern of the inner loop. For C = A * B, the inner loop walks along row i of A (index k), down column j of B (index k), and touches element (i, j) of C.

Matrix Multiplication Example
Description: multiply N x N matrices; O(N^3) total operations; N reads per source element; N values summed per destination, but the running sum may be held in a register.

    /* ijk */
    for (i = 0; i < n; i++) {
        for (j = 0; j < n; j++) {
            sum = 0.0;               /* variable sum held in register */
            for (k = 0; k < n; k++)
                sum += a[i][k] * b[k][j];
            c[i][j] = sum;
        }
    }

Layout of C Arrays in Memory (review)
C arrays are allocated in row-major order: each row occupies contiguous memory locations.
Stepping through the columns in one row, for (i = 0; i < N; i++) sum += a[0][i];, accesses successive elements. If the block size B > 4 bytes, this exploits spatial locality: compulsory miss rate = 4 bytes / B.
Stepping through the rows in one column, for (i = 0; i < n; i++) sum += a[i][0];, accesses distant elements: no spatial locality! Compulsory miss rate = 1 (i.e., 100%). The short check below illustrates the layout rule.
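
This small check (our code, not from the lecture) confirms the row-major rule &a[i][j] = base + (i * NCOLS + j) * sizeof(element):

    #include <stdio.h>

    int main(void) {
        double a[4][4];
        char *base = (char *)a;
        /* a[1][2] is element number 1*4 + 2 = 6 of the flattened array */
        printf("computed: %p\n", (void *)(base + (1 * 4 + 2) * sizeof(double)));
        printf("actual:   %p\n", (void *)&a[1][2]);
        /* row neighbors are 8 bytes apart; column neighbors are 4*8 = 32 bytes
           apart, which is why a column walk touches a new block on each access */
        return 0;
    }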

Matrix Multiplication (ijk)

    /* ijk */
    for (i = 0; i < n; i++) {
        for (j = 0; j < n; j++) {
            sum = 0.0;
            for (k = 0; k < n; k++)
                sum += a[i][k] * b[k][j];
            c[i][j] = sum;
        }
    }

Inner loop: A accessed row-wise at (i,*); B column-wise at (*,j); C fixed at (i,j).
Misses per inner loop iteration: A = 0.25, B = 1.0, C = 0.0.

Matrix Multiplication (jik)

    /* jik */
    for (j = 0; j < n; j++) {
        for (i = 0; i < n; i++) {
            sum = 0.0;
            for (k = 0; k < n; k++)
                sum += a[i][k] * b[k][j];
            c[i][j] = sum;
        }
    }

Inner loop: A accessed row-wise at (i,*); B column-wise at (*,j); C fixed at (i,j).
Misses per inner loop iteration: A = 0.25, B = 1.0, C = 0.0.

Matrix Multiplication (kij)

    /* kij */
    for (k = 0; k < n; k++) {
        for (i = 0; i < n; i++) {
            r = a[i][k];
            for (j = 0; j < n; j++)
                c[i][j] += r * b[k][j];
        }
    }

Inner loop: A fixed at (i,k); B accessed row-wise at (k,*); C row-wise at (i,*).
Misses per inner loop iteration: A = 0.0, B = 0.25, C = 0.25.

Matrix Multiplication (ikj)

    /* ikj */
    for (i = 0; i < n; i++) {
        for (k = 0; k < n; k++) {
            r = a[i][k];
            for (j = 0; j < n; j++)
                c[i][j] += r * b[k][j];
        }
    }

Inner loop: A fixed at (i,k); B accessed row-wise at (k,*); C row-wise at (i,*).
Misses per inner loop iteration: A = 0.0, B = 0.25, C = 0.25.

Matrix Multiplication (jki)

    /* jki */
    for (j = 0; j < n; j++) {
        for (k = 0; k < n; k++) {
            r = b[k][j];
            for (i = 0; i < n; i++)
                c[i][j] += a[i][k] * r;
        }
    }

Inner loop: A accessed column-wise at (*,k); B fixed at (k,j); C column-wise at (*,j).
Misses per inner loop iteration: A = 1.0, B = 0.0, C = 1.0.

Matrix Multiplication (kji)

    /* kji */
    for (k = 0; k < n; k++) {
        for (j = 0; j < n; j++) {
            r = b[k][j];
            for (i = 0; i < n; i++)
                c[i][j] += a[i][k] * r;
        }
    }

Inner loop: A accessed column-wise at (*,k); B fixed at (k,j); C column-wise at (*,j).
Misses per inner loop iteration: A = 1.0, B = 0.0, C = 1.0.

Summary of Matrix Multiplication

ijk (and jik): 2 loads, 0 stores; misses/iter = 1.25

    for (i = 0; i < n; i++) {
        for (j = 0; j < n; j++) {
            sum = 0.0;
            for (k = 0; k < n; k++)
                sum += a[i][k] * b[k][j];
            c[i][j] = sum;
        }
    }

kij (and ikj): 2 loads, 1 store; misses/iter = 0.5

    for (k = 0; k < n; k++) {
        for (i = 0; i < n; i++) {
            r = a[i][k];
            for (j = 0; j < n; j++)
                c[i][j] += r * b[k][j];
        }
    }

jki (and kji): 2 loads, 1 store; misses/iter = 2.0

    for (j = 0; j < n; j++) {
        for (k = 0; k < n; k++) {
            r = b[k][j];
            for (i = 0; i < n; i++)
                c[i][j] += a[i][k] * r;
        }
    }

Core i7 Matrix Multiply Performance
[Figure: cycles per inner loop iteration (0-60) versus array size n (50-750) for all six orderings. The curves group into three pairs, consistent with the miss analysis: jki/kji is slowest, ijk/jik is in the middle, and kij/ikj is fastest and nearly independent of array size.]

Today
Cache organization and operation.
Performance impact of caches: the memory mountain; rearranging loops to improve spatial locality; using blocking to improve temporal locality.

Example: Matrix Multiplication

    c = (double *) calloc(sizeof(double), n*n);

    /* Multiply n x n matrices a and b */
    void mmm(double *a, double *b, double *c, int n) {
        int i, j, k;
        for (i = 0; i < n; i++)
            for (j = 0; j < n; j++)
                for (k = 0; k < n; k++)
                    c[i*n + j] += a[i*n + k] * b[k*n + j];
    }

c = a * b: element (i, j) of c is the dot product of row i of a and column j of b.

Cache Miss Analysis
Assume: matrix elements are doubles; a cache block holds 8 doubles; the cache size C << n (much smaller than n).
First iteration: n/8 + n = 9n/8 misses (n/8 misses sweeping a row of a with spatial locality, plus n misses walking a column of b with none). Afterwards the cache holds, schematically, an 8-double-wide sliver of b's columns.

Cache Miss Analysis
Assume: matrix elements are doubles; a cache block holds 8 doubles; the cache size C << n.
Second iteration: again n/8 + n = 9n/8 misses.
Total misses: 9n/8 * n^2 = (9/8) * n^3.

Blocked Matrix Multiplication

    c = (double *) calloc(sizeof(double), n*n);

    /* Multiply n x n matrices a and b, in B x B blocks */
    void mmm(double *a, double *b, double *c, int n) {
        int i, j, k, i1, j1, k1;
        for (i = 0; i < n; i += B)
            for (j = 0; j < n; j += B)
                for (k = 0; k < n; k += B)
                    /* B x B mini matrix multiplications */
                    for (i1 = i; i1 < i+B; i1++)
                        for (j1 = j; j1 < j+B; j1++)
                            for (k1 = k; k1 < k+B; k1++)
                                c[i1*n + j1] += a[i1*n + k1] * b[k1*n + j1];
    }

c = a * b, computed block by block: each B x B block of c accumulates the products of a block row of a and a block column of b.

Cache Miss Analysis
Assume: a cache block holds 8 doubles; cache size C << n; three B x B blocks fit into the cache: 3B^2 < C.
First (block) iteration: B^2/8 misses for each block; 2n/B blocks of a and b are touched, so 2n/B * B^2/8 = nB/4 misses (omitting matrix c). Afterwards the blocks just used remain in the cache.

Cache Miss Analysis
Assume: a cache block holds 8 doubles; cache size C << n; 3B^2 < C.
Second (block) iteration: same as the first, 2n/B * B^2/8 = nB/4 misses.
Total misses: nB/4 * (n/B)^2 = n^3/(4B).

Summary
No blocking: (9/8) * n^3 misses. Blocking: 1/(4B) * n^3 misses.
This suggests using the largest possible block size B, subject to the limit 3B^2 < C!
Reason for the dramatic difference: matrix multiplication has inherent temporal locality. The input data is 3n^2 elements, but the computation is 2n^3 operations, so every array element is used O(n) times. The program just has to be written properly to exploit it. A worked comparison follows.
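
A worked comparison with concrete numbers (ours, chosen so that 3B^2 < C holds for a 32 KB cache): take n = 1024 and B = 16, so three blocks occupy 3 * 16^2 * 8 = 6144 bytes. Without blocking: (9/8) * 1024^3, about 1.2 * 10^9 misses. With blocking: 1024^3 / (4 * 16), about 1.7 * 10^7 misses, a factor of (9/8) * 4B = 72 fewer.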

Concluding Observations
The programmer can optimize for cache performance: how data structures are organized; how data are accessed (nested loop structure, blocking as a general technique).
All systems favor cache-friendly code. Getting the absolute optimum performance is very platform-specific (cache sizes, line sizes, associativities, etc.), but you can get most of the advantage with generic code: keep the working set reasonably small (temporal locality), and use small strides (spatial locality).