CISC 662 Graduate Computer Architecture Lecture 18 - Cache Performance


CISC 662 Graduate Computer Architecture
Lecture 18 - Cache Performance
Michela Taufer

Powerpoint lecture notes from John Hennessy and David Patterson's Computer Architecture, 4th edition. Additional teaching material from Jelena Mirkovic (U Del) and John Kubiatowicz (UC Berkeley).

Why More on Memory Hierarchy?

[Figure: Performance vs. Year, 1980-1995. Processor performance improves much faster than memory performance, so the processor-memory performance gap keeps growing.]

Review: 6 Basic Cache Optimizations

Reducing hit time
1. Giving Reads Priority over Writes (e.g., read completes before earlier writes in write buffer)
2. Avoiding Address Translation during Cache Indexing

Reducing Miss Penalty
3. Multilevel Caches

Reducing Miss Rate
4. Larger Block size (Compulsory misses)
5. Larger Cache size (Capacity misses)
6. Higher Associativity (Conflict misses)

11 Advanced Cache Optimizations

Reducing hit time
1. Small and simple caches
2. Way prediction
3. Trace caches

Increasing cache bandwidth
4. Pipelined caches
5. Multibanked caches
6. Nonblocking caches

Reducing Miss Penalty
7. Critical word first
8. Merging write buffers

Reducing Miss Rate
9. Compiler optimizations

Reducing miss penalty or miss rate via parallelism
10. Hardware prefetching
11. Compiler prefetching
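Every one of these techniques targets one or more terms of the average memory access time. As a worked example with assumed illustrative numbers (1-cycle hit time, 5% miss rate, 100-cycle miss penalty; these are not figures from the lecture):

\[
\mathrm{AMAT} = \mathrm{Hit\ time} + \mathrm{Miss\ rate} \times \mathrm{Miss\ penalty} = 1 + 0.05 \times 100 = 6\ \mathrm{cycles}
\]

Halving the miss rate or halving the miss penalty each brings this to 3.5 cycles, while hit-time optimizations pay off on every access, hit or miss.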

1. Fast Hit times via Small and Simple Caches

Index tag memory and then compare takes time
Small cache can help hit time since smaller memory takes less time to index
- E.g., L1 caches same size for 3 generations of AMD microprocessors: K6, Athlon, and Opteron
- Also L2 cache small enough to fit on chip with the processor avoids time penalty of going off chip
Simple: direct mapping
- Can overlap tag check with data transmission since no choice
Access time estimate for 90 nm using CACTI model 4.0
- Median ratios of access time relative to the direct-mapped caches are 1.32, 1.39, and 1.43 for 2-way, 4-way, and 8-way caches
[Figure: access time (ns) vs. cache size (16 KB to 1 MB) for 1-way, 2-way, 4-way, and 8-way caches.]

2. Fast Hit times via Way Prediction

How to combine fast hit time of Direct Mapped and have the lower conflict misses of 2-way SA cache?
Way prediction: keep extra bits in cache to predict the "way," or block within the set, of next cache access
- Multiplexor is set early to select desired block; only 1 tag comparison performed that clock cycle, in parallel with reading the cache data
- Miss => 1st check other blocks for matches in next clock cycle
[Diagram: Hit Time -> Way-Miss Hit Time -> Miss Penalty]
Accuracy ≈ 85% (a C sketch of the lookup appears at the end of this page)
Drawback: CPU pipeline design is hard if hit takes 1 or 2 cycles
- Used for instruction caches vs. data caches

3. Fast Hit times via Trace Cache (Pentium 4 only; and last time?)

Find more instruction level parallelism? How to avoid translation from x86 to micro-ops?
Trace cache in Pentium 4:
1. Dynamic traces of the executed instructions vs. static sequences of instructions as determined by layout in memory
   - Built-in branch predictor
2. Cache the micro-ops vs. x86 instructions
   - Decode/translate from x86 to micro-ops on trace cache miss
+ 1. Better utilize long blocks (don't exit in middle of block, don't enter at label in middle of block)
- 1. Complicated address mapping since addresses no longer aligned to power-of-2 multiples of word size
- 2. Instructions may appear multiple times in multiple dynamic traces due to different branch outcomes

4: Increasing Cache Bandwidth by Pipelining

Pipeline cache access to maintain bandwidth, but higher latency
Instruction cache access pipeline stages:
- 1: Pentium
- 2: Pentium Pro through Pentium III
- 4: Pentium 4
=> greater penalty on mispredicted branches
=> more clock cycles between the issue of the load and the use of the data
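A minimal C sketch of the way-prediction lookup from slide 2 above, for a 2-way set-associative cache; the type and function names (cache_set, lookup) are illustrative assumptions, not from the lecture:

#include <stdbool.h>
#include <stdint.h>

#define NWAYS 2

struct cache_set {
    uint32_t tag[NWAYS];
    bool     valid[NWAYS];
    int      pred_way;   /* the extra prediction bits kept per set */
};

/* Returns the matching way, or -1 on a true miss; *extra_cycles models
   the 1-cycle penalty of a way-miss (predicted way did not match). */
int lookup(struct cache_set *set, uint32_t tag, int *extra_cycles)
{
    int w = set->pred_way;
    *extra_cycles = 0;
    if (set->valid[w] && set->tag[w] == tag)
        return w;                      /* fast path: direct-mapped speed */
    *extra_cycles = 1;                 /* way-miss: check the other way(s) */
    for (int i = 0; i < NWAYS; i = i+1) {
        if (i != w && set->valid[i] && set->tag[i] == tag) {
            set->pred_way = i;         /* retrain the predictor */
            return i;
        }
    }
    return -1;                         /* miss in all ways: full miss penalty */
}

A hit in the predicted way needs only one tag comparison, matching the Hit Time -> Way-Miss Hit Time -> Miss Penalty progression on the slide.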

5. Increasing Cache Bandwidth: Non-Blocking Caches

Non-blocking cache or lockup-free cache allows the data cache to continue to supply cache hits during a miss
- Requires F/E bits on registers or out-of-order execution
- Requires multi-bank memories
"Hit under miss" reduces the effective miss penalty by working during miss vs. ignoring CPU requests
"Hit under multiple miss" or "miss under miss" may further lower the effective miss penalty by overlapping multiple misses
- Significantly increases the complexity of the cache controller as there can be multiple outstanding memory accesses
- Requires multiple memory banks (otherwise cannot support)
- Pentium Pro allows 4 outstanding memory misses

Value of Hit Under Miss for SPEC (old data)

[Figure: average memory access time for hit under 0, 1, 2, and 64 outstanding misses, per benchmark; 8 KB data cache, direct mapped, 32 B blocks, 16-cycle miss penalty, SPEC 92.]
- FP programs on average: AMAT = 0.68 -> 0.52 -> 0.34 -> 0.26
- Int programs on average: AMAT = 0.24 -> 0.20 -> 0.19 -> 0.19

6: Increasing Cache Bandwidth via Multiple Banks

Rather than treat the cache as a single monolithic block, divide it into independent banks that can support simultaneous accesses
- E.g., T1 ("Niagara") L2 has 4 banks
Banking works best when accesses naturally spread themselves across banks
- Mapping of addresses to banks affects behavior of memory system
Simple mapping that works well is "sequential interleaving": spread block addresses sequentially across banks (see the C sketch after the next slide)
- E.g., if there are 4 banks, Bank 0 has all blocks whose address modulo 4 is 0; Bank 1 has all blocks whose address modulo 4 is 1; ...

7. Reduce Miss Penalty: Early Restart and Critical Word First

Don't wait for full block before restarting CPU
Early restart: as soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution
- Spatial locality => CPU tends to want the next sequential word, so the benefit of just early restart is unclear
Critical Word First: request the missed word first from memory and send it to the CPU as soon as it arrives; let the CPU continue execution while filling the rest of the words in the block
Long blocks more popular today => Critical Word 1st widely used
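A minimal C sketch of the sequential interleaving described on the multiple-banks slide; the 64-byte block size is an assumption for illustration, not a value from the lecture:

#include <stdint.h>
#include <stdio.h>

#define NBANKS      4
#define BLOCK_BYTES 64   /* assumed block size */

/* Sequential interleaving: consecutive block addresses map to consecutive banks. */
static unsigned bank_of(uint64_t addr)
{
    uint64_t block = addr / BLOCK_BYTES;     /* block address */
    return (unsigned)(block % NBANKS);       /* block address modulo 4 */
}

int main(void)
{
    for (uint64_t a = 0; a < 8 * BLOCK_BYTES; a += BLOCK_BYTES)
        printf("block %llu -> bank %u\n",
               (unsigned long long)(a / BLOCK_BYTES), bank_of(a));
    return 0;
}

A streaming access pattern therefore rotates through all 4 banks, letting them serve accesses simultaneously.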

8. Merging Write Buffer to Reduce Miss Penalty

Write buffer allows the processor to continue while waiting for the write to complete to memory
If the buffer contains modified blocks, the addresses can be checked to see if the address of the new data matches the address of a valid write buffer entry
- If so, the new data are combined with that entry (a C sketch of this check appears after the loop interchange example below)
Increases block size of write for write-through cache of writes to sequential words/bytes, since multiword writes are more efficient to memory
The Sun T1 (Niagara) processor, among many others, uses write merging

9. Reducing Misses by Compiler Optimizations

McFarling [1989] reduced cache misses by 75% on an 8 KB direct mapped cache, 4 byte blocks, in software
Instructions
- Reorder procedures in memory so as to reduce conflict misses
- Profiling to look at conflicts (using tools they developed)
Data
- Merging Arrays: improve spatial locality by single array of compound elements vs. 2 arrays
- Loop Interchange: change nesting of loops to access data in the order stored in memory
- Loop Fusion: combine 2 independent loops that have the same looping and some variables overlap
- Blocking: improve temporal locality by accessing "blocks" of data repeatedly vs. going down whole columns or rows

Merging Arrays Example

/* Before: 2 sequential arrays */
int val[SIZE];
int key[SIZE];

/* After: 1 array of structures */
struct merge {
    int val;
    int key;
};
struct merge merged_array[SIZE];

Reduces conflicts between val & key; improves spatial locality

Loop Interchange Example

/* Before */
for (k = 0; k < 100; k = k+1)
    for (j = 0; j < 100; j = j+1)
        for (i = 0; i < 5000; i = i+1)
            x[i][j] = 2 * x[i][j];

/* After */
for (k = 0; k < 100; k = k+1)
    for (i = 0; i < 5000; i = i+1)
        for (j = 0; j < 100; j = j+1)
            x[i][j] = 2 * x[i][j];

Sequential accesses instead of striding through memory every 100 words; improves spatial locality
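The write-merging check promised above, as a minimal C sketch; the entry layout and names (wb_entry, wb_merge) are illustrative assumptions, not a description of any real write buffer:

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define ENTRIES     4
#define BLOCK_BYTES 32   /* assumed width of one buffer entry */

struct wb_entry {
    bool     valid;
    uint64_t block_addr;          /* block-aligned address of the entry */
    uint8_t  data[BLOCK_BYTES];
    uint32_t byte_mask;           /* which bytes of the entry are live */
};

/* Try to merge a 4-byte, word-aligned write into a valid entry with a
   matching block address; returns false if a fresh entry is needed. */
bool wb_merge(struct wb_entry buf[ENTRIES], uint64_t addr, uint32_t word)
{
    uint64_t block  = addr - (addr % BLOCK_BYTES);
    unsigned offset = (unsigned)(addr - block);
    for (int i = 0; i < ENTRIES; i = i+1) {
        if (buf[i].valid && buf[i].block_addr == block) {
            memcpy(&buf[i].data[offset], &word, sizeof word);
            buf[i].byte_mask |= 0xFu << offset;   /* mark 4 bytes live */
            return true;    /* merged: no new entry, no extra stall */
        }
    }
    return false;           /* no match: allocate a new entry instead */
}

Four sequential word writes to one block then occupy a single entry and drain to memory as one multiword write instead of four separate ones.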

Loop Fusion Example

/* Before */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1)
        a[i][j] = 1/b[i][j] * c[i][j];
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1)
        d[i][j] = a[i][j] + c[i][j];

/* After */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1) {
        a[i][j] = 1/b[i][j] * c[i][j];
        d[i][j] = a[i][j] + c[i][j];
    }

2 misses per access to a & c vs. one miss per access; improves temporal locality

Blocking Example

/* Before */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1) {
        r = 0;
        for (k = 0; k < N; k = k+1)
            r = r + y[i][k]*z[k][j];
        x[i][j] = r;
    };

Two Inner Loops:
- Read all N x N elements of z[]
- Read N elements of 1 row of y[] repeatedly
- Write N elements of 1 row of x[]
Capacity Misses are a function of N & Cache Size:
- 2N^3 + N^2 => (assuming no conflict; otherwise ...)
Idea: compute on a B x B submatrix that fits

Blocking Example (continued)

/* After */
for (jj = 0; jj < N; jj = jj+B)
    for (kk = 0; kk < N; kk = kk+B)
        for (i = 0; i < N; i = i+1)
            for (j = jj; j < min(jj+B, N); j = j+1) {
                r = 0;
                for (k = kk; k < min(kk+B, N); k = k+1)
                    r = r + y[i][k]*z[k][j];
                x[i][j] = x[i][j] + r;
            };

B is called the Blocking Factor
Capacity misses drop from 2N^3 + N^2 to 2N^3/B + N^2
Conflict Misses Too?

Reducing Conflict Misses by Blocking

[Figure: miss rate vs. blocking factor for a direct mapped cache vs. a fully associative cache; conflict misses matter in caches that are not fully associative.]
Lam et al [1991]: a blocking factor of 24 had a fifth the misses vs. 48, despite both fitting in the cache
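A quick arithmetic check of the capacity-miss formulas above, with assumed illustrative values N = 512 and B = 64 (not from the slides):

\[
2N^3 + N^2 = 2 \cdot 512^3 + 512^2 \approx 2.69 \times 10^8
\qquad \text{vs.} \qquad
\frac{2N^3}{B} + N^2 = \frac{2 \cdot 512^3}{64} + 512^2 \approx 4.46 \times 10^6
\]

That is roughly a 60x reduction in capacity misses, approaching the ideal factor of B for large N.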

Summary of Compiler Optimizations to Reduce Cache Misses (by hand)

[Figure: performance improvement (roughly 1x to 3x) from merged arrays, loop interchange, loop fusion, and blocking on vpenta (nasa7), gmty (nasa7), tomcatv, btrix (nasa7), mxm (nasa7), spice, cholesky (nasa7), and compress.]

10. Reducing Misses by Hardware Prefetching of Instructions & Data

Prefetching relies on having extra memory bandwidth that can be used without penalty
Instruction Prefetching
- Typically, CPU fetches 2 blocks on a miss: the requested block and the next consecutive block
- Requested block is placed in instruction cache when it returns, and prefetched block is placed into instruction stream buffer
Data Prefetching
- Pentium 4 can prefetch data into L2 cache from up to 8 streams from 8 different 4 KB pages
- Prefetching invoked on 2 successive L2 cache misses to a page, if distance between those cache blocks is < 256 bytes
[Figure: performance improvement from hardware prefetching on SPECint2000 and SPECfp2000 benchmarks, ranging from 1.16 (gap) up to 1.97 (equake).]

11. Reducing Misses by Software Prefetching Data

Data Prefetch
- Load data into register (HP PA-RISC loads)
- Cache Prefetch: load into cache (MIPS IV, PowerPC, SPARC v. 9)
- Special prefetching instructions cannot cause faults; a form of speculative execution (a sketch using a compiler intrinsic appears below)
Issuing Prefetch Instructions takes time
- Is the cost of prefetch issues < savings in reduced misses?
- Wider superscalar processors reduce the difficulty of issue bandwidth

Compiler Optimization vs. Memory Hierarchy Search

Compiler tries to figure out memory hierarchy optimizations
New approach: "Auto-tuners" 1st run variations of the program on a computer to find the best combinations of optimizations (blocking, padding, ...) and algorithms, then produce C code to be compiled for that computer
"Auto-tuner" targeted to numerical method
- E.g., PHiPAC (BLAS), Atlas (BLAS), Sparsity (sparse linear algebra), Spiral (DSP), FFT-W
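Returning to software prefetching: a minimal sketch using the GCC/Clang __builtin_prefetch intrinsic, which compiles to a non-faulting prefetch instruction on machines that have one. The prefetch distance PF_DIST is an assumed tuning knob, not a value from the lecture.

#define PF_DIST 16   /* elements ahead; machine-dependent tuning parameter */

double sum(const double *a, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i = i+1) {
        /* Hint: read-prefetch a[i + PF_DIST] into the cache ahead of use. */
        if (i + PF_DIST < n)
            __builtin_prefetch(&a[i + PF_DIST], 0 /* read */, 1 /* low reuse */);
        s += a[i];
    }
    return s;
}

The bounds guard keeps the prefetch address in range; even without it the prefetch itself cannot fault, but its issue slot is still spent, which is exactly the cost vs. savings trade-off noted above.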

Sparse Matrix: Search for Blocking (for a finite element problem) [Im, Yelick, Vuduc, 2005]

[Figure: Mflop/s achieved for each register block size (row block size r x column block size c, with r and c in {1, 2, 4, 8}); the best blocking, 4x2, clearly beats the reference implementation.]

Best Sparse Blocking for 8 Computers

[Figure: best register block sizes (row x column) for 8 computers: IBM Power 4, Intel/HP Itanium 2, Intel Pentium M, Intel/HP Itanium, IBM Power 3, Sun Ultra 2, Sun Ultra 3, and AMD Opteron.]
All possible column block sizes selected for the 8 computers; how could a compiler know?

Summary: 11 Advanced Cache Optimizations (+ means the technique improves the factor, - means it hurts it, blank means no impact; HW cost/complexity runs from 0, trivial, to 3, a challenge)

Technique | Hit time | Bandwidth | Miss penalty | Miss rate | HW cost/complexity | Comment
Small and simple caches | + | | | - | 0 | Trivial; widely used
Way-predicting caches | + | | | | 1 | Used in Pentium 4
Trace caches | + | | | | 3 | Used in Pentium 4
Pipelined cache access | - | + | | | 1 | Widely used
Nonblocking caches | | + | + | | 3 | Widely used
Banked caches | | + | | | 1 | Used in L2 of Opteron and Niagara
Critical word first and early restart | | | + | | 2 | Widely used
Merging write buffer | | | + | | 1 | Widely used with write through
Compiler techniques to reduce cache misses | | | | + | 0 | Software is a challenge; some computers have compiler option
Hardware prefetching of instructions and data | | | + | + | 2 instr., 3 data | Many prefetch instructions; AMD Opteron prefetches data
Compiler-controlled prefetching | | | + | + | 3 | Needs nonblocking cache; in many CPUs