Optimizing Cache Performance in Matrix Multiplication. UCSB CS240A, 2017. Modified from Demmel/Yelick's slides.

Case Study with Matrix Multiplication
- An important kernel in many problems
- Optimization ideas can be used in other problems
- The most-studied algorithm in high performance computing
- How to measure the quality of an implementation in terms of performance? The Megaflop number, defined as: core computation count / time spent
- Matrix-matrix multiplication operation count = 2 n^3
- Example: 300 MFLOPS means 300 million matrix-multiply-related floating point operations performed per second.
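
As a concrete illustration (not from the slides), a kernel's Megaflop rate can be measured by timing it and dividing the 2 n^3 operation count by the elapsed time. A minimal C sketch, where matmul stands for any routine computing C = C + A*B (the name is hypothetical):

    #include <time.h>

    /* Return the Mflop rate of one call to an n-by-n matrix multiply kernel. */
    double mflops(void (*matmul)(int, const double*, const double*, double*),
                  int n, const double *A, const double *B, double *C) {
        clock_t start = clock();
        matmul(n, A, B, C);
        double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;
        double flops = 2.0 * n * n * n;   /* 2 n^3 floating point operations */
        return flops / seconds / 1e6;     /* Mflop/s */
    }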

What do commercial and CSE applications have in common?
Figure: chart of common application kernels (Red = Hot, Blue = Cool)

Matrix-multiply, optimized several ways
Figure: speed of n-by-n matrix multiply on a Sun Ultra-1/170, peak = 330 MFlops

Note on Matrix Storage
- A matrix is a 2-D array of elements, but memory addresses are 1-D
- Conventions for matrix layout:
  - by column, or column major (Fortran default): A(i,j) at A + i + j*n
  - by row, or row major (C default): A(i,j) at A + i*n + j
  - (recursive layouts considered later)
- We use column major for now.
Memory offset of each element of a 4-by-5 matrix:
  Column major            Row major
   0  4  8 12 16           0  1  2  3  4
   1  5  9 13 17           5  6  7  8  9
   2  6 10 14 18          10 11 12 13 14
   3  7 11 15 19          15 16 17 18 19
Figure: for a column-major matrix in memory, a (blue) row of the matrix is spread across several (red) cachelines. Figure source: Larry Carter, UCSD
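
A small C illustration of the two indexing conventions; the macro names are ours, not from the slides:

    /* Offset of A(i,j) in an n-by-n matrix, 0-based indices. */
    #define IDX_COLMAJOR(i, j, n)  ((i) + (j) * (n))   /* Fortran default */
    #define IDX_ROWMAJOR(i, j, n)  ((i) * (n) + (j))   /* C default */

    /* Example: a[IDX_COLMAJOR(i, j, n)] walks down a column as i increases,
       touching consecutive memory locations (and thus the same cacheline). */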

Using a Simple Model of Memory to Optimize
- Assume just 2 levels in the hierarchy, fast and slow
- All data is initially in slow memory
  - m = number of memory elements (words) moved between fast and slow memory
  - t_m = time per slow memory operation
  - f = number of arithmetic operations
  - t_f = time per arithmetic operation, with t_f << t_m
- Computational intensity: key to algorithm efficiency
  - q = f / m = average number of flops per slow memory access
- Minimum possible time = f * t_f, when all data is in fast memory
- Actual time = computation cost + data fetch cost
  = f * t_f + m * t_m = f * t_f * (1 + (t_m/t_f) * (1/q))
- Larger q means the time is closer to the minimum f * t_f
  - q >= t_m/t_f is needed to get at least half of peak speed
- Machine balance (t_m/t_f): key to machine efficiency
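
The model can be expressed in a few lines of code. A minimal C sketch, assuming t_f and t_m are given in microseconds so the rate comes out in Mflop/s (function names are illustrative):

    /* Two-level memory model:
       time = f*t_f + m*t_m = f*t_f * (1 + (t_m/t_f) * (1/q)), with q = f/m. */
    double model_time(double f, double m, double t_f, double t_m) {
        double q = f / m;                        /* computational intensity */
        return f * t_f * (1.0 + (t_m / t_f) / q);
    }

    double model_mflops(double f, double m, double t_f, double t_m) {
        /* flops per microsecond = Mflop/s */
        return f / model_time(f, m, t_f, t_m);
    }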

Warm up: Matrix-vector multiplication
{implements y = y + A*x}
for i = 1 to n
  for j = 1 to n
    y(i) = y(i) + A(i,j)*x(j)
Diagram: y(i) = y(i) + A(i,:) * x(:)
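
A direct C version of this warm-up kernel, assuming row-major storage and 0-based indexing (an illustrative sketch, not tuned code):

    /* y = y + A*x for an n-by-n row-major matrix A */
    void matvec(int n, const double *A, const double *x, double *y) {
        for (int i = 0; i < n; i++) {
            double yi = y[i];
            for (int j = 0; j < n; j++)
                yi += A[i*n + j] * x[j];   /* A(i,j) * x(j) */
            y[i] = yi;
        }
    }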

Warm up: Matrix-vector multiplication
{read x(1:n) into fast memory}
{read y(1:n) into fast memory}
for i = 1 to n
  {read row i of A into fast memory}
  for j = 1 to n
    y(i) = y(i) + A(i,j)*x(j)
{write y(1:n) back to slow memory}

m = number of slow memory refs = 3n + n^2
f = number of arithmetic operations = 2n^2
q = f / m ~ 2
Time = f * t_f + m * t_m = f * t_f * (1 + (t_m/t_f) * (1/q)) ~ 2*n^2 * t_f * (1 + (t_m/t_f) * 1/2)
Megaflop rate = f / Time = 1 / (t_f + 0.5 * t_m)
Matrix-vector multiplication is limited by slow memory speed.

Modeling Matrix-Vector Multiplication
- Compute the time for an n x n = 1000 x 1000 matrix
- For t_f and t_m, use data from R. Vuduc's PhD thesis (pp. 351-3):
  http://bebop.cs.berkeley.edu/pubs/vuduc2003-dissertation.pdf
- For t_m, use minimum memory latency / words per cache line

Machine      Clock (MHz)  Peak (Mflop/s)  Mem Lat (Min, Max cycles)  Linesize (Bytes)  t_m/t_f
Ultra 2i     333          667             38, 66                     16                24.8
Ultra 3      900          1800            28, 200                    32                14.0
Pentium 3    500          500             25, 60                     32                6.3
Pentium3M    800          800             40, 60                     32                10.0
Power3       375          1500            35, 139                    128               8.8
Power4       1300         5200            60, 10000                  128               15.0
Itanium1     800          3200            36, 85                     32                36.0
Itanium2     900          3600            11, 60                     64                5.5

The last column, t_m/t_f, is the machine balance (q must be at least this large to reach half of peak speed).

Simplifying Assumptions
What simplifying assumptions did we make in this analysis?
- Ignored parallelism between memory and arithmetic within the processor
  - Sometimes the arithmetic term is dropped in this type of analysis
- Assumed fast memory was large enough to hold the three vectors
  - Reasonable if we are talking about any level of cache
  - Not if we are talking about registers (~32 words)
- Assumed the cost of a fast memory access is 0
  - Reasonable if we are talking about registers
  - Not necessarily if we are talking about cache (1-2 cycles for L1)
- Assumed memory latency is constant
- Could simplify even further by ignoring memory operations on the x and y vectors
  - Megaflop rate = 1 / (t_f + 0.5 * t_m)

Validating the Model
- How well does the model predict actual performance?
  - Actual DGEMV: the most highly optimized code for each platform
- The model is sufficient to compare across machines
- But it under-predicts performance on the most recent machines, due to the latency estimate
Figure: predicted MFLOP rate (ignoring x, y), predicted DGEMV Mflops (with x, y), and actual DGEMV MFLOPS on Ultra 2i, Ultra 3, Pentium 3, Pentium3M, Power3, Power4, Itanium1, Itanium2.

Naïve Matrix-Matrix Multiplication
{implements C = C + A*B}
for i = 1 to n
  for j = 1 to n
    for k = 1 to n
      C(i,j) = C(i,j) + A(i,k) * B(k,j)
- The algorithm performs 2*n^3 = O(n^3) flops and operates on 3*n^2 words of memory
- q is potentially as large as 2*n^3 / (3*n^2) = O(n)
Diagram: C(i,j) = C(i,j) + A(i,:) * B(:,j)
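
The same algorithm in C, again assuming row-major storage and 0-based indexing (an illustrative sketch):

    /* C = C + A*B; all matrices n-by-n, row-major. */
    void matmul_naive(int n, const double *A, const double *B, double *C) {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                double cij = C[i*n + j];            /* read C(i,j) */
                for (int k = 0; k < n; k++)
                    cij += A[i*n + k] * B[k*n + j]; /* row of A times column of B */
                C[i*n + j] = cij;                   /* write C(i,j) */
            }
    }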

Naïve Matrix Multiply
{implements C = C + A*B}
for i = 1 to n
  {read row i of A into fast memory}
  for j = 1 to n
    {read C(i,j) into fast memory}
    {read column j of B into fast memory}
    for k = 1 to n
      C(i,j) = C(i,j) + A(i,k) * B(k,j)
    {write C(i,j) back to slow memory}
Diagram: C(i,j) = C(i,j) + A(i,:) * B(:,j)

Naïve Matrix Multiply
Number of slow memory references on the unblocked matrix multiply:
m = n^3    to read each column of B n times
  + n^2    to read each row of A once
  + 2n^2   to read and write each element of C once
  = n^3 + 3n^2
So q = f / m = 2n^3 / (n^3 + 3n^2) ~ 2 for large n: no improvement over matrix-vector multiply
The inner two loops are just a matrix-vector multiply, of row i of A times B
Similar for any other order of the 3 loops
Diagram: C(i,j) = C(i,j) + A(i,:) * B(:,j)

Partitioning for blocked matrix multiplication
Figure: example of submatrix partitioning

Blocked (Tiled) Matrix Multiply
Consider A, B, C to be N-by-N matrices of b-by-b subblocks, where b = n/N is called the block size
for i = 1 to N
  for j = 1 to N
    for k = 1 to N
      C(i,j) = C(i,j) + A(i,k) * B(k,j)   {do a matrix multiply on blocks}
Diagram: C(i,j) = C(i,j) + A(i,k) * B(k,j)

Blocked (Tiled) Matrix Multiply
Consider A, B, C to be N-by-N matrices of b-by-b subblocks, where b = n/N is called the block size
for i = 1 to N
  for j = 1 to N
    {read block C(i,j) into fast memory}
    for k = 1 to N
      {read block A(i,k) into fast memory}
      {read block B(k,j) into fast memory}
      C(i,j) = C(i,j) + A(i,k) * B(k,j)   {do a matrix multiply on blocks}
    {write block C(i,j) back to slow memory}
Diagram: C(i,j) = C(i,j) + A(i,k) * B(k,j)
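
A C sketch of the blocked algorithm for row-major storage, assuming for simplicity that the block size b divides n evenly (illustrative, not a tuned implementation). The three outer loops walk over blocks; the three inner loops perform the b-by-b block multiply that should fit in cache:

    /* Blocked C = C + A*B; all matrices n-by-n, row-major, b divides n. */
    void matmul_blocked(int n, int b, const double *A, const double *B, double *C) {
        for (int ii = 0; ii < n; ii += b)
            for (int jj = 0; jj < n; jj += b)
                for (int kk = 0; kk < n; kk += b)
                    /* block multiply: C(ii,jj) += A(ii,kk) * B(kk,jj) */
                    for (int i = ii; i < ii + b; i++)
                        for (int j = jj; j < jj + b; j++) {
                            double cij = C[i*n + j];
                            for (int k = kk; k < kk + b; k++)
                                cij += A[i*n + k] * B[k*n + j];
                            C[i*n + j] = cij;
                        }
    }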

Blocked (Tiled) Matrix Multiply
Recall:
- m is the amount of memory traffic between slow and fast memory
- the matrix has n x n elements, and N x N blocks each of size b x b
- f is the number of floating point operations, 2n^3 for this problem
- q = f / m is our measure of memory access efficiency
So:
m = N*n^2    to read each block of B N^3 times (N^3 * b^2 = N^3 * (n/N)^2 = N*n^2)
  + N*n^2    to read each block of A N^3 times
  + 2n^2     to read and write each block of C once
  = (2N + 2) * n^2
So the computational intensity q = f / m = 2n^3 / ((2N + 2) * n^2) ~ n/N = b for large n
So we can improve performance by increasing the block size b
Can be much faster than matrix-vector multiply (q = 2)

Using Analysis to Understand Machines
- The blocked algorithm has computational intensity q ~ b
- The larger the block size, the more efficient our algorithm will be
- Limit: all three blocks from A, B, C must fit in fast memory (cache), so we cannot make these blocks arbitrarily large
- Assume your fast memory has size M_fast:
  3b^2 <= M_fast, so q ~ b <= (M_fast/3)^(1/2)
- To build a machine that runs matrix multiply at 1/2 the peak arithmetic speed of the machine, we need a fast memory of size
  M_fast >= 3b^2 ~ 3q^2 = 3(t_m/t_f)^2
- This size is reasonable for an L1 cache, but not for register sets
- Note: the analysis assumes it is possible to schedule the instructions perfectly

Machine      t_m/t_f   required fast memory (KB)
Ultra 2i     24.8      14.8
Ultra 3      14        4.7
Pentium 3    6.25      0.9
Pentium3M    10        2.4
Power3       8.75      1.8
Power4       15        5.4
Itanium1     36        31.1
Itanium2     5.5       0.7
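
The constraint 3b^2 <= M_fast also suggests how to pick a block size from a known cache size. A small C sketch, assuming 8-byte double-precision words (the function name and the 32 KB example are illustrative):

    #include <math.h>

    /* Largest block size b such that three b-by-b blocks of doubles fit in
       a fast memory of fast_bytes bytes: 3 * b^2 * 8 <= fast_bytes. */
    int max_block_size(long fast_bytes) {
        double words = fast_bytes / 8.0;          /* 8-byte doubles */
        return (int)floor(sqrt(words / 3.0));
    }

    /* Example: a 32 KB L1 cache gives max_block_size(32768) = 36. */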

Basic Linear Algebra Subroutines (BLAS)
- Industry standard interface (evolving): www.netlib.org/blas, www.netlib.org/blas/blast-forum
- Vendors and others supply optimized implementations
History:
- BLAS1 (1970s): vector operations: dot product, saxpy (y = a*x + y), etc.
  m = 2*n, f = 2*n, q ~ 1 or less
- BLAS2 (mid 1980s): matrix-vector operations, e.g., matrix-vector multiply
  m = n^2, f = 2*n^2, q ~ 2; less overhead, so somewhat faster than BLAS1
- BLAS3 (late 1980s): matrix-matrix operations, e.g., matrix-matrix multiply
  m <= 3n^2, f = O(n^3), so q = f/m can possibly be as large as n, so BLAS3 is potentially much faster than BLAS2
- Good algorithms use BLAS3 when possible (LAPACK & ScaLAPACK); see www.netlib.org/{lapack,scalapack}
- If BLAS3 is not possible, use BLAS2 if applicable; otherwise BLAS1.
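
For reference, calling an optimized BLAS3 matrix multiply from C looks like the following. This is a minimal sketch using the standard CBLAS interface; it assumes a BLAS implementation such as OpenBLAS or ATLAS is installed and linked:

    #include <cblas.h>

    /* C = 1.0*A*B + 1.0*C for n-by-n row-major matrices, using BLAS3 dgemm.
       Link with a BLAS library, e.g. -lopenblas. */
    void dgemm_wrapper(int n, const double *A, const double *B, double *C) {
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n,
                    1.0, A, n,
                         B, n,
                    1.0, C, n);
    }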

BLAS speeds on an IBM RS6000/590
Peak speed = 266 Mflops
Figure: performance of BLAS 3 (n-by-n matrix-matrix multiply) vs. BLAS 2 (n-by-n matrix-vector multiply) vs. BLAS 1 (saxpy of n-vectors), relative to peak.

Dense Linear Algebra: BLAS2 vs. BLAS3
- BLAS2 and BLAS3 have very different computational intensity, and therefore different performance
Figure: DGEMM (BLAS3 matrix-matrix) vs. DGEMV (BLAS2 matrix-vector) MFlop/s on AMD Athlon-600, DEC ev56-533, DEC ev6-500, HP9000/735/135, IBM PPC604-112, IBM Power2-160, IBM Power3-200, Pentium Pro-200, Pentium II-266, Pentium III-550, SGI R10000ip28-200, SGI R12000ip30-270.
Data source: Jack Dongarra

Summary
- Performance programming on uniprocessors requires
  - understanding of the memory system
  - understanding of fine-grained parallelism in the processor
- Simple performance models can aid in understanding
- Two ratios are key to efficiency (relative to peak):
  1. computational intensity of the algorithm: q = f/m = # floating point operations / # slow memory references
  2. machine balance of the memory system: t_m/t_f = time for a slow memory reference / time for a floating point operation
- Want q > t_m/t_f to get half of machine peak
- Blocking (tiling) is a basic approach to increase q
  - Techniques apply generally, but the details (e.g., block size) are architecture dependent
  - Similar techniques are possible on other data structures and algorithms

Questions You Should Be Able to Answer
1. What is the key to understanding algorithm efficiency in our simple memory model?
2. Why does blocked matrix multiply reduce the number of memory references? (2-D blocking is sometimes called tiling.)
3. What are the BLAS?

Summary
- Details of the machine are important for performance
  - Processor and memory system (not just parallelism)
  - Before you parallelize, make sure you're getting good serial performance
  - What to expect? Use your understanding of hardware limits
- Locality is at least as important as computation
  - Temporal: reuse of recently used data
  - Spatial: use of data near recently used data
- Machines have memory hierarchies
  - 100s of cycles to read from DRAM (main memory)
  - Caches are fast (small) memories that optimize the average case
- Can rearrange code/data to improve locality
- Useful techniques: blocking, loop exchange.