15-213: The course that gives CMU its Zip!

Cache Memories, Oct. 2002

Topics
! Generic cache memory organization
! Direct-mapped caches
! Set associative caches
! Impact of caches on performance

Cache Memories
Cache memories are small, fast SRAM-based memories managed automatically in hardware.
! Hold frequently accessed blocks of main memory
The CPU looks first for data in L1, then in L2, then in main memory.
Typical bus structure:
Figure: CPU chip containing the register file, ALU, and L1 cache; a cache bus connects to the L2 cache; the bus interface connects through the I/O bridge to the system bus and memory bus, which lead to main memory.

Inserting an L1 Cache Between the CPU and Main Memory
The transfer unit between the CPU register file and the cache is a 4-byte block.
The transfer unit between the cache and main memory is a 4-word block (16 bytes).
Figure: register file holding individual words; L1 cache holding two lines; main memory holding blocks 0, 1, 2, 3, ... (e.g., "a b c d ...", "p q r s ...", "w x y z ...").
! The tiny, very fast CPU register file has room for four 4-byte words.
! The small, fast L1 cache has room for two 4-word blocks.
! The big, slow main memory has room for many 4-word blocks.

General Organization of a Cache Memory
Cache is an array of sets. Each set contains one or more lines. Each line holds a block of data.
! S = 2^s sets (set 0, set 1, ..., set S-1)
! E lines per set
! B = 2^b bytes per block
! 1 valid bit and t tag bits per line
Cache size: C = B x E x S data bytes
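To make the size formula concrete, here is a small worked example (the numbers are illustrative, not from the slides): a direct-mapped cache with S = 128 sets, E = 1 line per set, and B = 32-byte blocks holds C = 32 x 1 x 128 = 4096 data bytes (4 KB); the valid bits and tag bits are bookkeeping overhead on top of that.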
Addressing Caches
An m-bit address A is split into three fields: <tag> (t bits, bits m-1 down to s+b), <set index> (s bits), and <block offset> (b bits).
The word at address A is in the cache if the tag bits in one of the valid lines in set <set index> match <tag>.
The word contents begin at offset <block offset> bytes from the beginning of the block.

Direct-Mapped Cache
Simplest kind of cache
Characterized by exactly one line per set (E = 1 line per set).

Accessing Direct-Mapped Caches
Set selection
! Use the set index bits to determine the set of interest.
Line matching and word selection
! Line matching: find a valid line in the selected set with a matching tag.
! Word selection: then extract the word.
For the selected set (i):
(1) The valid bit must be set.
(2) The tag bits in the cache line must match the tag bits in the address.
(3) If (1) and (2), then cache hit, and the block offset selects the starting byte of the desired word.
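The three steps above translate almost directly into code. Below is a minimal sketch of a direct-mapped lookup; the struct layout, parameters, and function name are illustrative (my own choices, not from the slides), using B = 16-byte blocks (b = 4) and S = 256 sets (s = 8):

    #include <stdint.h>
    #include <stdio.h>

    #define B 16                         /* bytes per block (2^b) */
    #define S 256                        /* number of sets (2^s)  */

    struct line {
        int      valid;                  /* the 1 valid bit per line   */
        uint64_t tag;                    /* the t tag bits per line    */
        uint8_t  block[B];               /* the B-byte block of data   */
    };

    static struct line cache[S];         /* direct-mapped: E = 1 line per set */

    /* Return 1 on a hit (copying the addressed byte into *byte), 0 on a miss. */
    int lookup(uint64_t addr, uint8_t *byte)
    {
        uint64_t offset = addr % B;          /* <block offset>: low b bits       */
        uint64_t set    = (addr / B) % S;    /* <set index>: next s bits         */
        uint64_t tag    = addr / (B * S);    /* <tag>: remaining high-order bits */

        struct line *ln = &cache[set];       /* set selection                    */
        if (ln->valid && ln->tag == tag) {   /* (1) valid set, (2) tags match    */
            *byte = ln->block[offset];       /* (3) hit: offset selects the byte */
            return 1;
        }
        return 0;                            /* miss: block must be fetched      */
    }

    int main(void)
    {
        uint8_t b;
        printf("hit? %d\n", lookup(0x1234, &b));   /* cold cache: prints 0 */
        return 0;
    }

A real cache does the division and modulo with bit masks and shifts, since B and S are powers of two.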
Direct-Mapped Cache Simulation
M = 16 byte addresses, B = 2 bytes/block, S = 4 sets, E = 1 entry/set
Address bits: t = 1 tag bit, s = 2 set index bits, b = 1 offset bit (x | xx | x)
Address trace (reads): 0 [0000_2], 1 [0001_2], 13 [1101_2], 8 [1000_2], 0 [0000_2]
(1) 0 [0000_2] (miss): set 0 loads M[0-1] (m[0], m[1])
(2) 1 [0001_2] (hit): found in set 0
(3) 13 [1101_2] (miss): set 2 loads M[12-13] (m[12], m[13])
(4) 8 [1000_2] (miss): set 0 loads M[8-9] (m[8], m[9]), evicting M[0-1]
(5) 0 [0000_2] (miss): set 0 reloads M[0-1]

Why Use Middle Bits as Index?
Figure: a 4-line cache, showing which address bits are used under high-order bit indexing vs. middle-order bit indexing.
High-Order Bit Indexing
! Adjacent memory lines would map to the same cache entry
! Poor use of spatial locality
Middle-Order Bit Indexing
! Consecutive memory lines map to different cache lines
! Can hold a C-byte region of the address space in cache at one time

Set Associative Caches
Characterized by more than one line per set (e.g., E = 2 lines per set).

Accessing Set Associative Caches
Set selection
! Identical to direct-mapped cache: use the set index bits to select the set.
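A toy illustration of the indexing choice, with parameters of my own (not from the slides): 8-bit addresses, 4-byte blocks (b = 2), and 4 sets (s = 2). It prints the set chosen for four consecutive blocks under each scheme:

    #include <stdio.h>

    #define M_BITS 8   /* address width     */
    #define B_BITS 2   /* block-offset bits */
    #define S_BITS 2   /* set-index bits    */

    int main(void)
    {
        unsigned addr;
        /* Step through four consecutive blocks (stride = block size). */
        for (addr = 0; addr < (4u << B_BITS); addr += (1u << B_BITS)) {
            unsigned mid  = (addr >> B_BITS) & ((1u << S_BITS) - 1); /* middle-order index */
            unsigned high = addr >> (M_BITS - S_BITS);               /* high-order index   */
            printf("block at addr %2u -> middle-order set %u, high-order set %u\n",
                   addr, mid, high);
        }
        /* Middle-order indexing spreads the blocks over sets 0,1,2,3;
         * high-order indexing maps all four to set 0. */
        return 0;
    }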
Accessing Set Associative Caches
Line matching and word selection
! Must compare the tag in each valid line in the selected set.
(1) The valid bit must be set.
(2) The tag bits in one of the cache lines must match the tag bits in the address.
(3) If (1) and (2), then cache hit, and the block offset selects the starting byte.

Multi-Level Caches
Options: separate data and instruction caches, or a unified cache
Figure: Processor (regs, L1 d-cache, L1 i-cache) -> unified L2 cache -> memory -> disk.
Each level is larger, slower, and cheaper per byte than the one above it:
! registers: 200 B, 3 ns, 8 B per entry
! L1 caches: 8-64 KB, 3 ns, 32 B lines
! L2 cache: 1-4 MB SRAM, 6 ns, $100/MB, 32 B lines
! main memory: 128 MB DRAM, 60 ns, $1.50/MB, 8 KB pages
! disk: 30 GB, 8 ms, $0.05/MB

Intel Pentium Cache Hierarchy
! L1 Data: 1 cycle latency, 16 KB, 4-way set associative, write-through, 32 B lines
! L1 Instruction: 16 KB, 4-way, 32 B lines
! L2 Unified: 128 KB - 2 MB, 4-way set associative, write-back, write allocate, 32 B lines
! Main Memory: up to 4 GB

Cache Performance Metrics
Miss Rate
! Fraction of memory references not found in cache (misses/references)
! Typical numbers:
" 3-10% for L1
" can be quite small (e.g., < 1%) for L2, depending on size, etc.
Hit Time
! Time to deliver a line in the cache to the processor (includes time to determine whether the line is in the cache)
! Typical numbers:
" 1 clock cycle for L1
" 3-8 clock cycles for L2
Miss Penalty
! Additional time required because of a miss
" Typically 25-100 cycles for main memory
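These three metrics are often combined into an average memory access time; the formula is standard but not spelled out on the slide, and the numbers below are illustrative: average time = hit time + miss rate x miss penalty. For example, with a 1-cycle L1 hit time, a 5% L1 miss rate, and a 50-cycle penalty per miss, an average access costs 1 + 0.05 x 50 = 3.5 cycles, which is why even single-digit miss rates have a large performance impact.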
Writing Cache Friendly Code
Repeated references to variables are good (temporal locality)
Stride-1 reference patterns are good (spatial locality)
Examples (assume a cold cache, 4-byte words, 4-word cache blocks):

int sumarrayrows(int a[M][N])
{
    int i, j, sum = 0;

    for (i = 0; i < M; i++)
        for (j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}
Miss rate = 1/4 = 25%

int sumarraycols(int a[M][N])
{
    int i, j, sum = 0;

    for (j = 0; j < N; j++)
        for (i = 0; i < M; i++)
            sum += a[i][j];
    return sum;
}
Miss rate = 100%

The Memory Mountain
Read throughput (read bandwidth)
! Number of bytes read from memory per second (MB/s)
Memory mountain
! Measured read throughput as a function of spatial and temporal locality.
! Compact way to characterize memory system performance.

Memory Mountain Test Function
/* The test function */
void test(int elems, int stride)
{
    int i, result = 0;
    volatile int sink;

    for (i = 0; i < elems; i += stride)
        result += data[i];
    sink = result; /* So compiler doesn't optimize away the loop */
}

/* Run test(elems, stride) and return read throughput (MB/s) */
double run(int size, int stride, double Mhz)
{
    double cycles;
    int elems = size / sizeof(int);

    test(elems, stride);                     /* warm up the cache        */
    cycles = fcyc2(test, elems, stride, 0);  /* call test(elems, stride) */
    return (size / stride) / (cycles / Mhz); /* convert cycles to MB/s   */
}

Memory Mountain Main Routine
/* mountain.c - Generate the memory mountain. */
#define MINBYTES (1 << 10)  /* Working set size ranges from 1 KB */
#define MAXBYTES (1 << 23)  /* ... up to 8 MB                    */
#define MAXSTRIDE 16        /* Strides range from 1 to 16        */
#define MAXELEMS MAXBYTES/sizeof(int)

int data[MAXELEMS];         /* The array we'll be traversing */

int main()
{
    int size;               /* Working set size (in bytes) */
    int stride;             /* Stride (in array elements)  */
    double Mhz;             /* Clock frequency             */

    init_data(data, MAXELEMS);  /* Initialize each element in data to 1 */
    Mhz = mhz();                /* Estimate the clock frequency         */
    for (size = MAXBYTES; size >= MINBYTES; size >>= 1) {
        for (stride = 1; stride <= MAXSTRIDE; stride++)
            printf("%.1f\t", run(size, stride, Mhz));
        printf("\n");
    }
    exit(0);
}
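A note on the arithmetic in run(): test() touches roughly elems/stride array elements of 4 bytes each, i.e., about size/stride bytes, and the fcyc2() helper returns the elapsed time in clock cycles, so cycles/Mhz is the elapsed time in microseconds. Dividing bytes by microseconds therefore yields megabytes per second, the unit plotted on the mountain.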
The Memory Mountain
Figure: the memory mountain for a Pentium III Xeon at 550 MHz (16 KB on-chip L1 d-cache, 16 KB on-chip L1 i-cache, 512 KB off-chip unified L2 cache). Read throughput (MB/s) is plotted against stride (s1 to s15, in words) and working set size (2k to 8m bytes). Slopes of spatial locality run along the stride axis; ridges of temporal locality run along the working-set axis, one ridge each for L1, L2, and main memory.

Ridges of Temporal Locality
Slice through the memory mountain with stride = 1
! Illuminates the read throughputs of the different caches and of main memory
Figure: read throughput vs. working set size (1k to 8m bytes) at stride 1, with an L1 cache region for small working sets, an L2 cache region in the middle, and a main memory region for large working sets.

A Slope of Spatial Locality
Slice through the memory mountain with size = 256 KB
! Shows the effect of cache block size
Figure: read throughput vs. stride (s1 to s16) for a 256 KB working set; throughput falls as the stride grows, until there is one access per cache line.

Matrix Multiplication Example
Major Cache Effects to Consider
! Total cache size
" Exploit temporal locality and keep the working set small (e.g., by using blocking)
! Block size
" Exploit spatial locality
Description:
! Multiply N x N matrices
! O(N^3) total operations
! Accesses
" N reads per source element
" N values summed per destination
» but may be able to hold in register

/* ijk */
for (i=0; i<n; i++) {
    for (j=0; j<n; j++) {
        sum = 0.0;                /* Variable sum held in register */
        for (k=0; k<n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}
Miss Rate Analysis for Matrix Multiply
Assume:
! Line size = 32 B (big enough for 4 64-bit words)
! Matrix dimension (N) is very large
" Approximate 1/N as 0.0
! Cache is not even big enough to hold multiple rows
Analysis Method:
! Look at the access pattern of the inner loop (row i of A, column j of B, element (i,j) of C)

Layout of C Arrays in Memory (review)
C arrays allocated in row-major order
! each row in contiguous memory locations
Stepping through columns in one row:
! for (i = 0; i < N; i++) sum += a[0][i];
! accesses successive elements
! if block size (B) > 4 bytes, exploit spatial locality
" compulsory miss rate = 4 bytes / B
Stepping through rows in one column:
! for (i = 0; i < n; i++) sum += a[i][0];
! accesses distant elements
! no spatial locality!
" compulsory miss rate = 1 (i.e. 100%)

Matrix Multiplication (ijk)
/* ijk */
for (i=0; i<n; i++) {
    for (j=0; j<n; j++) {
        sum = 0.0;
        for (k=0; k<n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}
Inner-loop access pattern: A (i,*) row-wise, B (*,j) column-wise, C (i,j) fixed
Misses per inner loop iteration: A = 0.25, B = 1.0, C = 0.0

Matrix Multiplication (jik)
/* jik */
for (j=0; j<n; j++) {
    for (i=0; i<n; i++) {
        sum = 0.0;
        for (k=0; k<n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}
Inner-loop access pattern: A (i,*) row-wise, B (*,j) column-wise, C (i,j) fixed
Misses per inner loop iteration: A = 0.25, B = 1.0, C = 0.0
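To see why the column-wise accesses to B miss every time: in row-major order, b[k][j] and b[k+1][j] are 8*N bytes apart (8-byte elements, N per row), so with N large each successive inner-loop access to B lands on a different 32-byte line, giving the 1.0 misses/iteration above. The row-wise walk over A, by contrast, gets 4 elements from each 32-byte line, giving 0.25.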
Matrix Multiplication (kij)
/* kij */
for (k=0; k<n; k++) {
    for (i=0; i<n; i++) {
        r = a[i][k];
        for (j=0; j<n; j++)
            c[i][j] += r * b[k][j];
    }
}
Inner-loop access pattern: A (i,k) fixed, B (k,*) row-wise, C (i,*) row-wise
Misses per inner loop iteration: A = 0.0, B = 0.25, C = 0.25

Matrix Multiplication (ikj)
/* ikj */
for (i=0; i<n; i++) {
    for (k=0; k<n; k++) {
        r = a[i][k];
        for (j=0; j<n; j++)
            c[i][j] += r * b[k][j];
    }
}
Inner-loop access pattern: A (i,k) fixed, B (k,*) row-wise, C (i,*) row-wise
Misses per inner loop iteration: A = 0.0, B = 0.25, C = 0.25

Matrix Multiplication (jki)
/* jki */
for (j=0; j<n; j++) {
    for (k=0; k<n; k++) {
        r = b[k][j];
        for (i=0; i<n; i++)
            c[i][j] += a[i][k] * r;
    }
}
Inner-loop access pattern: A (*,k) column-wise, B (k,j) fixed, C (*,j) column-wise
Misses per inner loop iteration: A = 1.0, B = 0.0, C = 1.0

Matrix Multiplication (kji)
/* kji */
for (k=0; k<n; k++) {
    for (j=0; j<n; j++) {
        r = b[k][j];
        for (i=0; i<n; i++)
            c[i][j] += a[i][k] * r;
    }
}
Inner-loop access pattern: A (*,k) column-wise, B (k,j) fixed, C (*,j) column-wise
Misses per inner loop iteration: A = 1.0, B = 0.0, C = 1.0
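Summing the per-array rates gives the per-iteration totals in the summary that follows: ijk and jik touch A row-wise (0.25) and B column-wise (1.0), so 1.25 misses/iteration; kij and ikj touch B and C row-wise (0.25 each) with A fixed, so 0.5; jki and kji touch A and C column-wise (1.0 each) with B fixed, so 2.0.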
Summary of Matrix Multiplication
! ijk (& jik): 2 loads, 0 stores, misses/iter = 1.25
! kij (& ikj): 2 loads, 1 store, misses/iter = 0.5
! jki (& kji): 2 loads, 1 store, misses/iter = 2.0

Pentium Matrix Multiply Performance
Miss rates are helpful but not perfect predictors.
" Code scheduling matters, too.
Figure: cycles/iteration vs. array size n (25 to 400) for the six loop orderings (kji, jki, kij, ikj, jik, ijk).

Improving Temporal Locality by Blocking
Example: Blocked matrix multiplication
! "block" (in this context) does not mean "cache block".
! Instead, it means a sub-block within the matrix.
! Example: N = 8; sub-block size = 4
Partition each matrix into 2 x 2 sub-blocks:
[A11 A12; A21 A22] x [B11 B12; B21 B22] = [C11 C12; C21 C22]
C11 = A11*B11 + A12*B21    C12 = A11*B12 + A12*B22
C21 = A21*B11 + A22*B21    C22 = A21*B12 + A22*B22
Key idea: Sub-blocks (i.e., A_xy) can be treated just like scalars.

Blocked Matrix Multiply (bijk)
for (jj=0; jj<n; jj+=bsize) {
    for (i=0; i<n; i++)
        for (j=jj; j < min(jj+bsize,n); j++)
            c[i][j] = 0.0;
    for (kk=0; kk<n; kk+=bsize) {
        for (i=0; i<n; i++) {
            for (j=jj; j < min(jj+bsize,n); j++) {
                sum = 0.0;
                for (k=kk; k < min(kk+bsize,n); k++)
                    sum += a[i][k] * b[k][j];
                c[i][j] += sum;
            }
        }
    }
}
Blocked Matrix Multiply Analysis
! The innermost loop pair multiplies a 1 x bsize sliver of A by a bsize x bsize block of B and accumulates the result into a 1 x bsize sliver of C.
! The loop over i steps through n row slivers of A & C, using the same block of B each time.
Figure (innermost loop pair): the row sliver of A is accessed bsize times; the block of B is reused n times in succession; successive elements of the C sliver are updated.

Pentium Blocked Matrix Multiply Performance
Blocking (bijk and bikj) improves performance by a factor of two over the unblocked versions (ijk and jik)
! relatively insensitive to array size
Figure: cycles/iteration vs. array size n (25 to 400) for kji, jki, kij, ikj, jik, ijk, bijk (bsize = 25), and bikj (bsize = 25).

Concluding Observations
Programmer can optimize for cache performance
! How data structures are organized
! How data are accessed
" Nested loop structure
" Blocking is a general technique
All systems favor cache friendly code
! Getting absolute optimum performance is very platform specific
" Cache sizes, line sizes, associativities, etc.
! Can get most of the advantage with generic code
" Keep the working set reasonably small (temporal locality)
" Use small strides (spatial locality)
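A rough working-set estimate (my own back-of-the-envelope numbers, not from the slides) suggests why bsize = 25 works well: the inner loop pair needs one bsize x bsize block of B plus a 1 x bsize sliver each of A and C, roughly (25*25 + 2*25) * 8 bytes, or about 5 KB with 8-byte elements, which fits comfortably in the 16 KB L1 data cache with room to spare.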