CS 33: Caches. CS33 Intro to Computer Systems, Lecture XVIII. Copyright 2017 Thomas W. Doeppner. All rights reserved.


Cache Performance Metrics
- Miss rate: the fraction of memory references not found in the cache (misses / accesses) = 1 - hit rate. Typical numbers (in percentages): 3-10% for L1; can be quite small (e.g., < 1%) for L2, depending on size, etc.
- Hit time: the time to deliver a line in the cache to the processor, including the time to determine whether the line is in the cache. Typical numbers: 1-2 clock cycles for L1; 5-20 clock cycles for L2.
- Miss penalty: the additional time required because of a miss; typically 50-200 cycles for main memory (trend: increasing!).

Let's Think About Those Numbers
There is a huge difference between a hit and a miss: it could be 100x, with just L1 and main memory.
Would you believe a 99% hit rate is twice as good as 97%? Consider a cache hit time of 1 cycle and a miss penalty of 100 cycles. Average access time:
- 97% hits: 0.97 * 1 cycle + 0.03 * 100 cycles ≈ 4 cycles
- 99% hits: 0.99 * 1 cycle + 0.01 * 100 cycles ≈ 2 cycles
This is why miss rate is used instead of hit rate.
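
This weighted-average calculation is easy to check in code. A minimal sketch in C; the function name avg_access_time and the printout are ours, not from the slides:

    #include <stdio.h>

    /* Average access time as a weighted sum of hit and miss costs.
       This mirrors the slide's arithmetic; the names are illustrative. */
    static double avg_access_time(double hit_rate, double hit_cycles,
                                  double miss_cycles) {
        return hit_rate * hit_cycles + (1.0 - hit_rate) * miss_cycles;
    }

    int main(void) {
        printf("97%% hits: %.2f cycles\n", avg_access_time(0.97, 1, 100)); /* ~3.97 */
        printf("99%% hits: %.2f cycles\n", avg_access_time(0.99, 1, 100)); /* ~1.99 */
        return 0;
    }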

Locality
Principle of Locality: programs tend to use data and instructions with addresses near or equal to those they have used recently.
- Temporal locality: recently referenced items are likely to be referenced again in the near future.
- Spatial locality: items with nearby addresses tend to be referenced close together in time.

Locality Example

    sum = 0;
    for (i = 0; i < n; i++)
        sum += a[i];
    return sum;

Data references:
- reference array elements in succession (stride-1 reference pattern): spatial locality
- reference variable sum each iteration: temporal locality
Instruction references:
- reference instructions in sequence: spatial locality
- cycle through loop repeatedly: temporal locality

Qualitative Estimates of Locality
Claim: being able to look at code and get a qualitative sense of its locality is a key skill for a professional programmer.
Question: does this function have good locality with respect to array a?

    int sum_array_rows(int a[M][N]) {
        int i, j, sum = 0;
        for (i = 0; i < M; i++)
            for (j = 0; j < N; j++)
                sum += a[i][j];
        return sum;
    }

Quiz 1
Does this function have good locality with respect to array a?
a) yes
b) no

    int sum_array_cols(int a[M][N]) {
        int i, j, sum = 0;
        for (j = 0; j < N; j++)
            for (i = 0; i < M; i++)
                sum += a[i][j];
        return sum;
    }

Writing Cache-Friendly Code
- Make the common case go fast: focus on the inner loops of the core functions.
- Minimize the misses in the inner loops: repeated references to variables are good (temporal locality); stride-1 reference patterns are good (spatial locality).
Key idea: our qualitative notion of locality is quantified through our understanding of cache memories.

Matrix Multiplication Example
Description:
- multiply N x N matrices
- O(N³) total operations
- N reads per source element
- N values summed per destination (but may be able to hold in a register)

Variable sum held in register:

    /* ijk */
    for (i = 0; i < n; i++) {
        for (j = 0; j < n; j++) {
            sum = 0.0;
            for (k = 0; k < n; k++)
                sum += a[i][k] * b[k][j];
            c[i][j] = sum;
        }
    }

    /* ikj */
    for (i = 0; i < n; i++) {
        for (k = 0; k < n; k++) {
            r = a[i][k];
            for (j = 0; j < n; j++)
                c[i][j] += r * b[k][j];
        }
    }

Miss-Rate Analysis for Matrix Multiply
Assume:
- block size = 32 bytes (big enough for four 64-bit words)
- matrix dimension (N) is very large: approximate 1/N as 0.0
- cache is not big enough to hold multiple rows
Analysis method: look at the access pattern of the inner loop.
[Schematic: C = A * B; element (i,j) of C is computed from row i of A and column j of B, indexed by k.]

Layout of C Arrays in Memory (review)
C arrays are allocated in row-major order: each row occupies contiguous memory locations.
Stepping through the columns in one row:

    for (i = 0; i < N; i++)
        sum += a[0][i];

- accesses successive elements
- if block size (B) > 4 bytes, exploits spatial locality: compulsory miss rate = 4 bytes / B
Stepping through the rows in one column:

    for (i = 0; i < N; i++)
        sum += a[i][0];

- accesses distant elements: no spatial locality!
- compulsory miss rate = 1 (i.e., 100%)
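
To make the difference concrete, one can time the two traversal orders over a large matrix. A minimal, self-contained sketch, assuming a POSIX clock_gettime; the 4096 x 4096 size and the variable names are illustrative, not from the slides (compile with optimization, e.g. gcc -O2):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N 4096   /* illustrative size: a 64 MB matrix of ints */

    static double now(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void) {
        int *a = calloc((size_t)N * N, sizeof *a);
        long sum = 0;
        double t;
        int i, j;
        if (a == NULL) return 1;

        t = now();
        for (i = 0; i < N; i++)          /* row-major: stride-1 */
            for (j = 0; j < N; j++)
                sum += a[(size_t)i * N + j];
        printf("row-major:    %.3f secs\n", now() - t);

        t = now();
        for (j = 0; j < N; j++)          /* column-major: stride-N */
            for (i = 0; i < N; i++)
                sum += a[(size_t)i * N + j];
        printf("column-major: %.3f secs\n", now() - t);

        free(a);
        return (int)(sum & 1);           /* keep the loops from being optimized away */
    }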

Matrix Multiplication (ijk)

    /* ijk */
    for (i = 0; i < n; i++) {
        for (j = 0; j < n; j++) {
            sum = 0.0;
            for (k = 0; k < n; k++)
                sum += a[i][k] * b[k][j];
            c[i][j] = sum;
        }
    }

Inner loop: A (i,*) row-wise; B (*,j) column-wise; C (i,j) fixed.
Misses per inner-loop iteration: A = 0.25, B = 1.0, C = 0.0

Matrix Multiplication (jik)

    /* jik */
    for (j = 0; j < n; j++) {
        for (i = 0; i < n; i++) {
            sum = 0.0;
            for (k = 0; k < n; k++)
                sum += a[i][k] * b[k][j];
            c[i][j] = sum;
        }
    }

Inner loop: A (i,*) row-wise; B (*,j) column-wise; C (i,j) fixed.
Misses per inner-loop iteration: A = 0.25, B = 1.0, C = 0.0

Matrix Multiplication (kij)

    /* kij */
    for (k = 0; k < n; k++) {
        for (i = 0; i < n; i++) {
            r = a[i][k];
            for (j = 0; j < n; j++)
                c[i][j] += r * b[k][j];
        }
    }

Inner loop: A (i,k) fixed; B (k,*) row-wise; C (i,*) row-wise.
Misses per inner-loop iteration: A = 0.0, B = 0.25, C = 0.25

Matrix Multiplication (ikj)

    /* ikj */
    for (i = 0; i < n; i++) {
        for (k = 0; k < n; k++) {
            r = a[i][k];
            for (j = 0; j < n; j++)
                c[i][j] += r * b[k][j];
        }
    }

Inner loop: A (i,k) fixed; B (k,*) row-wise; C (i,*) row-wise.
Misses per inner-loop iteration: A = 0.0, B = 0.25, C = 0.25

Matrix Multiplication (jki)

    /* jki */
    for (j = 0; j < n; j++) {
        for (k = 0; k < n; k++) {
            r = b[k][j];
            for (i = 0; i < n; i++)
                c[i][j] += a[i][k] * r;
        }
    }

Inner loop: A (*,k) column-wise; B (k,j) fixed; C (*,j) column-wise.
Misses per inner-loop iteration: A = 1.0, B = 0.0, C = 1.0

Matrix Multiplication (kji)

    /* kji */
    for (k = 0; k < n; k++) {
        for (j = 0; j < n; j++) {
            r = b[k][j];
            for (i = 0; i < n; i++)
                c[i][j] += a[i][k] * r;
        }
    }

Inner loop: A (*,k) column-wise; B (k,j) fixed; C (*,j) column-wise.
Misses per inner-loop iteration: A = 1.0, B = 0.0, C = 1.0

Summary of Matrix Multiplication

ijk (and jik): 2 loads, 0 stores; misses/iter = 1.25

    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++) {
            sum = 0.0;
            for (k = 0; k < n; k++)
                sum += a[i][k] * b[k][j];
            c[i][j] = sum;
        }

kij (and ikj): 2 loads, 1 store; misses/iter = 0.5

    for (k = 0; k < n; k++)
        for (i = 0; i < n; i++) {
            r = a[i][k];
            for (j = 0; j < n; j++)
                c[i][j] += r * b[k][j];
        }

jki (and kji): 2 loads, 1 store; misses/iter = 2.0

    for (j = 0; j < n; j++)
        for (k = 0; k < n; k++) {
            r = b[k][j];
            for (i = 0; i < n; i++)
                c[i][j] += a[i][k] * r;
        }
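
The three classes can be compared empirically with a small benchmark. A minimal sketch, assuming a POSIX clock_gettime; the size N = 512 and the structure of the harness are ours, not from the slides:

    #include <stdio.h>
    #include <time.h>

    #define N 512   /* illustrative size */

    /* Zero-initialized static arrays; the values don't matter for timing. */
    static double A[N][N], B[N][N], C[N][N];

    static double now(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    static void clear(void) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                C[i][j] = 0.0;
    }

    int main(void) {
        int i, j, k;
        double sum, r, t;

        clear();                         /* ijk: 1.25 misses/iter in the analysis */
        t = now();
        for (i = 0; i < N; i++)
            for (j = 0; j < N; j++) {
                sum = 0.0;
                for (k = 0; k < N; k++)
                    sum += A[i][k] * B[k][j];
                C[i][j] = sum;
            }
        printf("ijk: %.3f secs\n", now() - t);

        clear();                         /* ikj: 0.5 misses/iter */
        t = now();
        for (i = 0; i < N; i++)
            for (k = 0; k < N; k++) {
                r = A[i][k];
                for (j = 0; j < N; j++)
                    C[i][j] += r * B[k][j];
            }
        printf("ikj: %.3f secs\n", now() - t);

        clear();                         /* jki: 2.0 misses/iter */
        t = now();
        for (j = 0; j < N; j++)
            for (k = 0; k < N; k++) {
                r = B[k][j];
                for (i = 0; i < N; i++)
                    C[i][j] += A[i][k] * r;
            }
        printf("jki: %.3f secs\n", now() - t);

        return 0;
    }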

Core i7 Matrix Multiply Performance
[Figure: cycles per inner-loop iteration (0 to 60) versus array size n (50 to 750) for all six loop orderings on a Core i7. The jki/kji pair is slowest, ijk/jik is in the middle, and kij/ikj is fastest, matching the misses-per-iteration analysis.]

Matrix Multiplication: More Analysis

    /* Multiply n x n matrices a and b */
    void mmm(int n, double a[n][n], double b[n][n], double c[n][n]) {
        int i, j, k;
        for (i = 0; i < n; i++)
            for (j = 0; j < n; j++)
                for (k = 0; k < n; k++)
                    c[i][j] += a[i][k] * b[k][j];
    }

[Schematic: c = a * b; element (i,j) of c accumulates row i of a times column j of b.]

Cache-Miss Analysis
Assume:
- matrix elements are doubles
- cache block = 8 doubles
- cache size C << n (much smaller than n)
First iteration: n/8 + n = 9n/8 misses (the row of a is read with stride 1, one miss per 8 elements; the column of b is read with stride n, one miss per element).
[Schematic: afterwards, the row of a and an 8-wide strip of b are in the cache.]

Cache-Miss Analysis
Assume (as before): matrix elements are doubles; cache block = 8 doubles; cache size C << n (much smaller than n).
Second iteration: again n/8 + n = 9n/8 misses.
Total misses: 9n/8 * n² = (9/8) * n³
[Schematic: each iteration again streams a row of a and a column of b; only an 8-wide strip stays cached.]

Blocked Matrix Multiplication

    /* Multiply n x n matrices a and b */
    void mmm(int n, double a[n][n], double b[n][n], double c[n][n]) {
        int i, j, k, i1, j1, k1;
        for (i = 0; i < n; i += B)
            for (j = 0; j < n; j += B)
                for (k = 0; k < n; k += B)
                    /* B x B mini matrix multiplications */
                    for (i1 = i; i1 < i + B; i1++)
                        for (j1 = j; j1 < j + B; j1++)
                            for (k1 = k; k1 < k + B; k1++)
                                c[i1][j1] += a[i1][k1] * b[k1][j1];
    }

[Schematic: c = a * b + c, computed one B x B block at a time; B is the block size, assumed to divide n.]

Cache-Miss Analysis
Assume:
- cache block = 8 doubles
- cache size C << n (much smaller than n)
- three matrix blocks fit into the cache: 3B² < C
First (matrix-block) iteration: B²/8 misses for each block; 2n/B blocks are touched, so 2n/B * B²/8 = nB/4 misses (omitting matrix c).
[Schematic: a row of n/B blocks of a times a column of n/B blocks of b; afterwards those blocks are in the cache.]

Cache-Miss Analysis
Assume (as before): cache block = 8 doubles; cache size C << n; three matrix blocks fit into the cache (3B² < C).
Second (matrix-block) iteration: same as the first, 2n/B * B²/8 = nB/4 misses (n/B blocks).
Total misses: nB/4 * (n/B)² = n³/(4B)
(For example, with n = 1024 and B = 32: n³/(4B) ≈ 8.4 million misses, versus (9/8)n³ ≈ 1.2 billion without blocking.)

Summary
- No blocking: (9/8) * n³ misses
- Blocking: 1/(4B) * n³ misses
This suggests choosing the largest possible block size B, subject to the limit 3B² < C!
Reason for the dramatic difference: matrix multiplication has inherent temporal locality:
- input data: 3n², computation: 2n³
- every array element is used O(n) times!
But the program has to be written properly.
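
As a concrete, hypothetical instance of the 3B² < C limit: reading C as a capacity in doubles (our assumption), a 32 KB cache holds 4096 doubles, so 3B² < 4096 gives B ≤ 36, and B = 32 is a natural power-of-two choice. A sketch of that arithmetic (the helper name max_block is ours; link with -lm):

    #include <math.h>
    #include <stdio.h>

    /* Largest block size B (in doubles) such that three B x B blocks of
       doubles fit in a cache of cache_bytes bytes, i.e. 3B^2 < C. */
    static int max_block(long cache_bytes) {
        long c_doubles = cache_bytes / (long)sizeof(double);
        return (int)floor(sqrt(c_doubles / 3.0));
    }

    int main(void) {
        printf("32 KB L1: B <= %d\n", max_block(32 * 1024));   /* prints 36 */
        return 0;
    }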

Quiz 2
What is the smallest value of B (in 8-byte doubles) for which the cache-miss analysis works?
a) 1
b) 2
c) 4
d) 8

Blocking vs. ikj
With 8-double cache blocks:
- blocked (B = 8): n³/(4*8) = n³/32 misses
- ikj: 2*n³/8 = n³/4 misses
so the blocked version incurs 8x fewer misses.
[Schematics: blocked computes c = a * b + c one B x B block at a time; ikj holds a[i][k] fixed while streaming row (k,*) of b into row (i,*) of c.]

Blocking vs. ikj

    $ ./matmult_ikj
    ikj: .608 secs
    $ ./matmult_blocked
    Blocked: .880 secs

Despite incurring more misses, ikj is faster here.

Why is ikj Faster?
Prefetching: the processor detects sequential (stride-1) accesses to memory and issues loads before they are needed. ikj's inner loop streams a row of b and a row of c with stride 1, a pattern the hardware prefetcher handles well.
[Schematic: a[i][k] fixed; rows (k,*) of b and (i,*) of c accessed sequentially.]
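
Hardware prefetching happens automatically, but GCC and Clang also expose explicit software prefetch, which makes the idea visible in code. A hypothetical sketch of the ikj inner loop with manual prefetch hints: the helper name, the PF_DIST distance, and the per-row restructuring are ours, and for a simple stride-1 loop like this the hardware prefetcher typically makes the hints unnecessary:

    /* Sketch: software prefetching in the ikj inner loop, assuming GCC or
       Clang (__builtin_prefetch is their builtin). PF_DIST is an
       illustrative tuning parameter, here about 8 cache lines ahead. */
    #define PF_DIST 64   /* distance in doubles */

    void mm_ikj_row(int n, double r, const double *brow, double *crow) {
        for (int j = 0; j < n; j++) {
            if (j + PF_DIST < n) {
                __builtin_prefetch(&brow[j + PF_DIST]);   /* next part of b's row */
                __builtin_prefetch(&crow[j + PF_DIST]);   /* next part of c's row */
            }
            crow[j] += r * brow[j];
        }
    }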

Concluding Observations
The programmer can optimize for cache performance:
- organize data structures appropriately
- take care in how data structures are accessed (nested loop structure; blocking is a general technique)
All systems favor cache-friendly code:
- getting the absolute optimum performance is very platform-specific (cache sizes, line sizes, associativities, etc.)
- you can get most of the advantage with generic code: keep the working set reasonably small (temporal locality) and use small strides (spatial locality)