CS 152 Computer Architecture and Engineering. Lecture 8 - Memory Hierarchy-III

CS 152 Computer Architecture and Engineering
Lecture 8 - Memory Hierarchy-III

Krste Asanovic
Electrical Engineering and Computer Sciences
University of California at Berkeley
http://www.eecs.berkeley.edu/~krste
http://inst.eecs.berkeley.edu/~cs152

Last time in Lecture 7
- 3 C's of cache misses: compulsory, capacity, conflict
- Average memory access time = hit time + miss rate * miss penalty
- To improve performance, reduce: hit time, miss rate, and/or miss penalty
- Primary cache parameters: total cache capacity, line size, associativity
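
To make the AMAT formula concrete, here is a minimal C sketch with illustrative numbers (the 1-cycle hit, 5% miss rate, and 20-cycle penalty are assumptions, not figures from the lecture):

    /* AMAT = hit time + miss rate * miss penalty.
       The parameter values used below are hypothetical. */
    double amat(double hit_time, double miss_rate, double miss_penalty) {
        return hit_time + miss_rate * miss_penalty;
    }
    /* amat(1.0, 0.05, 20.0) -> 1.0 + 0.05 * 20.0 = 2.0 cycles */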

Multilevel Caches
A memory cannot be large and fast, so increase the size of the cache at each level:
CPU <-> L1$ <-> L2$ <-> DRAM
- Local miss rate = misses in this cache / accesses to this cache
- Global miss rate = misses in this cache / CPU memory accesses
- Misses per instruction = misses in this cache / number of instructions

A Typical Memory Hierarchy c. 2008
- Multiported register file (part of CPU)
- Split instruction & data primary caches (on-chip SRAM)
- Large unified secondary cache (on-chip SRAM)
- Multiple interleaved memory banks (off-chip DRAM)
[Figure: CPU with register file, L1 instruction and L1 data caches, a unified L2, and multiple memory banks]
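
As an illustrative worked example of the distinction (the counts are assumed, not from the lecture): suppose 1000 CPU accesses produce 40 L1 misses, of which 20 also miss in L2. Then:

    L1 miss rate (local = global) = 40 / 1000 = 4%
    L2 local miss rate            = 20 / 40   = 50%
    L2 global miss rate           = 20 / 1000 = 2%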

Presence of L2 influences L1 design
- Use a smaller L1 if there is also an L2
  - Trade increased L1 miss rate for reduced L1 hit time and reduced L1 miss penalty
  - Reduces average access energy
- Use a simpler write-through L1 cache with an on-chip L2
  - The write-back L2 cache absorbs write traffic, which doesn't go off-chip
  - At most one L1 miss request per L1 access (no dirty victim write back) simplifies pipeline control
  - Simplifies coherence issues
  - Simplifies error recovery in L1 (can use just parity bits in L1 and reload from L2 when a parity error is detected on an L1 read)

Inclusion Policy
- Inclusive multilevel cache: the inner cache holds copies of data in the outer cache, so an external access need only check the outer cache. Most common case.
- Exclusive multilevel caches: the inner cache may hold data not in the outer cache; lines are swapped between inner and outer caches on a miss. Used in the AMD Athlon, with a 64KB primary and 256KB secondary cache.
Why choose one type or the other?

Itanium-2 On-Chip Caches (Intel/HP, 2002)
- Level 1: 16KB, 4-way set-associative, 64B line, quad-port (2 load + 2 store), single-cycle latency
- Level 2: 256KB, 4-way set-associative, 128B line, quad-port (4 load or 4 store), five-cycle latency
- Level 3: 3MB, 12-way set-associative, 128B line, single 32B port, twelve-cycle latency

Reducing the Penalty of Associativity
Associativity reduces conflict misses, but requires an expensive (area, energy, delay) multi-way tag search. Two optimizations reduce the cost of associativity:
- Victim caches
- Way prediction

Victim Caches (Jouppi 1990)
A victim cache is a small associative backup cache, added to a direct-mapped cache, which holds recently evicted lines (used in the HP 7200):
- First look up in the direct-mapped cache
- If miss, look in the victim cache
- If hit in the victim cache, swap the hit line with the line now evicted from L1
- If miss in the victim cache, the L1 victim goes to the VC, and the VC victim goes to... where?
Result: the fast hit time of direct-mapped, but with reduced conflict misses.
[Figure (HP 7200): CPU and register file backed by a direct-mapped L1 data cache; a fully associative 4-block victim cache beside L1 exchanges evicted lines and hit data in front of the unified L2]

Way-Predicting Caches (MIPS R10000 off-chip L2 cache)
Use the processor address to index into a way-prediction table, then look in the predicted way at the given index:
- HIT: return a copy of the data from the cache
- MISS in the predicted way: look in the other way
  - SLOW HIT: found in the other way (change the entry in the prediction table)
  - MISS: read the block of data from the next level of cache
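
A minimal C sketch of the way-predicting lookup flow above; the two-way geometry, sizes, and names are hypothetical, not the R10000 design:

    #include <stdbool.h>
    #include <stdint.h>

    #define SETS 256  /* hypothetical 2-way cache geometry, 64B lines */

    typedef struct { bool valid; uint64_t tag; } line_t;

    line_t  cache[SETS][2];
    uint8_t pred_way[SETS];  /* way-prediction table, indexed like the cache */

    /* Returns 0 = fast hit, 1 = slow hit (mispredicted way), 2 = miss. */
    int lookup(uint64_t addr) {
        uint64_t set = (addr >> 6) % SETS;
        uint64_t tag = addr >> 6;
        int w = pred_way[set];
        if (cache[set][w].valid && cache[set][w].tag == tag)
            return 0;                          /* HIT in the predicted way */
        if (cache[set][1 - w].valid && cache[set][1 - w].tag == tag) {
            pred_way[set] = (uint8_t)(1 - w);  /* update the prediction */
            return 1;                          /* SLOW HIT in the other way */
        }
        return 2;                              /* MISS: go to next level */
    }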

Way-Predicting Instruction Cache (Alpha 21264-like)
[Figure: the PC (incremented by 4, or redirected by the jump target under jump control) supplies an address and a predicted way to the primary instruction cache; separate "sequential way" and "branch target way" predictions select the way]

Reduce Miss Penalty of Long Blocks: Early Restart and Critical Word First
Don't wait for the full block before restarting the CPU.
- Early restart: as soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution. Spatial locality means the CPU will tend to want the next sequential word anyway, so the benefit of early restart alone is unclear.
- Critical word first: request the missed word first from memory and send it to the CPU as soon as it arrives; let the CPU continue execution while filling the rest of the words in the block.
Long blocks are more popular today, so critical word first is widely used.
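
A small C sketch of the wrap-around fill order implied by critical word first; the block size and requested word are made-up values for illustration:

    #include <stdio.h>

    #define WORDS_PER_BLOCK 8   /* hypothetical block size */

    int main(void) {
        int critical = 5;       /* word the CPU actually requested */
        /* Memory returns the critical word first, then wraps around
           the block: 5 6 7 0 1 2 3 4. The CPU restarts after word 5. */
        for (int k = 0; k < WORDS_PER_BLOCK; k++)
            printf("%d ", (critical + k) % WORDS_PER_BLOCK);
        printf("\n");
        return 0;
    }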

Increasing Bandwidth with Non-Blocking Caches
A non-blocking (lockup-free) cache allows the data cache to continue to supply cache hits during a miss.
- Requires Full/Empty bits on registers, or out-of-order execution
- "Hit under miss" reduces the effective miss penalty by working during a miss instead of ignoring CPU requests
- "Hit under multiple miss" or "miss under miss" may further lower the effective miss penalty by overlapping multiple misses
- Significantly increases the complexity of the cache controller, as there can be multiple outstanding memory accesses, and you can get a miss to a line with an outstanding miss (a secondary miss)
- Requires a pipelined or banked memory system (otherwise it cannot support multiple misses)
- The Pentium Pro allows 4 outstanding memory misses (the Cray X1E vector supercomputer allows 2,048 outstanding memory misses)

Value of Hit Under Miss for SPEC (old data)
[Figure: average memory access time per SPEC92 benchmark (eqntott, espresso, xlisp, compress, mdljsp2, ear, fpppp, tomcatv, swm256, doduc, su2cor, wave5, mdljdp2, hydro2d, alvinn, nasa7, spice2g6, ora) for hit under 0, 1, 2, and 64 outstanding misses; 8KB direct-mapped data cache, 32B blocks, 16-cycle miss penalty]
- FP programs on average: AMAT = 0.68 -> 0.52 -> 0.34 -> 0.26
- Integer programs on average: AMAT = 0.24 -> 0.20 -> 0.19 -> 0.19
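
One common way to track the multiple outstanding misses mentioned above is a set of miss status holding registers (MSHRs). The following C sketch of primary vs. secondary miss handling is a hypothetical illustration, not a design from the lecture:

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_MSHRS 4   /* e.g., Pentium Pro-like: 4 outstanding misses */

    typedef struct {
        bool     valid;
        uint64_t line_addr;   /* cache line this miss is fetching */
        int      pending;     /* CPU requests merged onto this miss */
    } mshr_t;

    mshr_t mshrs[NUM_MSHRS];

    /* Called on a cache miss. Returns false if the cache must stall
       (all MSHRs busy); otherwise records a primary or secondary miss. */
    bool handle_miss(uint64_t line_addr) {
        for (int i = 0; i < NUM_MSHRS; i++) {
            if (mshrs[i].valid && mshrs[i].line_addr == line_addr) {
                mshrs[i].pending++;    /* secondary miss: merge request */
                return true;
            }
        }
        for (int i = 0; i < NUM_MSHRS; i++) {
            if (!mshrs[i].valid) {     /* primary miss: allocate an MSHR */
                mshrs[i] = (mshr_t){ true, line_addr, 1 };
                return true;           /* and send the fetch to memory */
            }
        }
        return false;  /* no free MSHR: structural stall */
    }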

CS152 Administrivia
- Textbooks should appear in the Cal book store soon (already there? Or in the next couple of days)
- Quiz 1 handed back next Tuesday in class
- First set of open-ended labs seemed fine. Opinions? You are encouraged to find your own ideas instead of following the suggestions.

Prefetching
Speculate on future instruction and data accesses and fetch them into the cache(s). Instruction accesses are easier to predict than data accesses.
Varieties of prefetching:
- Hardware prefetching
- Software prefetching
- Mixed schemes
What types of misses does prefetching affect?

Issues in Prefetching
- Usefulness: should produce hits
- Timeliness: not late, and not too early
- Cache and bandwidth pollution
[Figure: prefetched data flowing into the L1 instruction, L1 data, and unified L2 caches]

Hardware Instruction Prefetching
Instruction prefetch in the Alpha AXP 21064:
- Fetch two blocks on a miss: the requested block (i) and the next consecutive block (i+1)
- The requested block is placed in the cache, and the next block in an instruction stream buffer
- If an access misses in the cache but hits in the stream buffer, move the stream buffer block into the cache and prefetch the next block (i+2)
[Figure: CPU requests served by the L1 instruction cache; a stream buffer between L1 and the unified L2 holds the prefetched instruction block]
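
A minimal C sketch of the single-entry stream-buffer policy just described; the helper functions stand in for the real cache and L2 port and are hypothetical:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical hooks standing in for the real cache and L2 port. */
    static bool icache_lookup(uint64_t block) { (void)block; return false; }
    static void icache_fill(uint64_t block)   { (void)block; }
    static void fetch_from_l2(uint64_t block) {
        printf("fetch block %llu\n", (unsigned long long)block);
    }

    /* Single-entry stream buffer, Alpha 21064-style. */
    static struct { bool valid; uint64_t block; } sb;

    void ifetch(uint64_t block) {
        if (icache_lookup(block))
            return;                   /* normal cache hit */
        if (sb.valid && sb.block == block) {
            icache_fill(block);       /* stream-buffer hit: promote to cache */
            sb.block = block + 1;     /* ...and prefetch block i+2 */
            fetch_from_l2(sb.block);
            return;
        }
        fetch_from_l2(block);         /* miss: fetch requested block i ... */
        sb.valid = true;              /* ...and prefetch block i+1 into    */
        sb.block = block + 1;         /* the stream buffer                 */
        fetch_from_l2(sb.block);
    }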

Hardware Data Prefetching
- Prefetch-on-miss: prefetch block b+1 upon a miss on block b
- One Block Lookahead (OBL) scheme: initiate a prefetch for block b+1 when block b is accessed. Why is this different from doubling the block size? Can extend to N-block lookahead.
- Strided prefetch: if a sequence of accesses to blocks b, b+n, b+2n is observed, then prefetch b+3n, etc. Example: the IBM Power 5 [2003] supports eight independent streams of strided prefetch per processor, prefetching 12 lines ahead of the current access.

Software Prefetching

    for (i = 0; i < N; i++) {
        prefetch(&a[i + 1]);  /* compiler intrinsic or prefetch instruction,
                                 e.g., __builtin_prefetch on GCC */
        prefetch(&b[i + 1]);
        SUM = SUM + a[i] * b[i];
    }

What property do we require of the cache for prefetching to work?
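
A minimal C sketch of the stride-detection idea behind hardware strided prefetching; the per-load table entry and names are hypothetical, not the Power 5 design:

    #include <stdint.h>

    /* One reference-prediction entry per load instruction (keyed by PC). */
    typedef struct {
        uint64_t last_addr;
        int64_t  stride;
        int      confidence;
    } rpt_entry_t;

    /* Observe one access; returns an address to prefetch, or 0 if no
       confident stride has been established yet. */
    uint64_t observe(rpt_entry_t *e, uint64_t addr) {
        int64_t stride = (int64_t)(addr - e->last_addr);
        if (stride == e->stride && stride != 0) {
            e->confidence++;       /* same stride again: more confident */
        } else {
            e->stride = stride;    /* new stride candidate */
            e->confidence = 0;
        }
        e->last_addr = addr;
        if (e->confidence >= 2)    /* saw b, b+n, b+2n: prefetch b+3n */
            return addr + (uint64_t)e->stride;
        return 0;
    }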

Software Prefetching Issues
Timing is the biggest issue, not predictability:
- If you prefetch very close to when the data is required, you might be too late
- If you prefetch too early, you cause pollution
- Estimate how long it will take for the data to come into L1, so that the prefetch distance P can be set appropriately (roughly, P should cover the miss latency: with, say, a 200-cycle memory latency and 20 cycles per loop iteration, P would be about 10). Why is this hard to do?

    for (i = 0; i < N; i++) {
        prefetch(&a[i + P]);
        prefetch(&b[i + P]);
        SUM = SUM + a[i] * b[i];
    }

Must also consider the cost of the prefetch instructions themselves.

Compiler Optimizations
Restructuring code affects the data block access sequence:
- Group data accesses together to improve spatial locality
- Re-order data accesses to improve temporal locality
Prevent data from entering the cache:
- Useful for variables that will only be accessed once before being replaced
- Needs a mechanism for software to tell hardware not to cache data (instruction hints or page table bits)
Kill data that will never be used again:
- Streaming data exploits spatial locality but not temporal locality
- Replace into dead cache locations
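
As one concrete instance of the "don't cache this" instruction hint mentioned above, x86 SSE provides non-temporal stores; this C sketch is illustrative, not from the lecture, and assumes dst is 16-byte aligned and n is a multiple of 4:

    #include <xmmintrin.h>  /* SSE intrinsics: _mm_stream_ps, _mm_sfence */

    /* Fill dst[0..n) with v using non-temporal stores that bypass the
       cache, so streaming output does not evict useful cached data. */
    void stream_fill(float *dst, float v, int n) {
        __m128 val = _mm_set1_ps(v);
        for (int i = 0; i < n; i += 4)
            _mm_stream_ps(&dst[i], val);
        _mm_sfence();  /* order the streaming stores before later code */
    }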

Loop Interchange

    /* Before: the inner loop strides through memory column by column */
    for (j = 0; j < N; j++) {
        for (i = 0; i < M; i++) {
            x[i][j] = 2 * x[i][j];
        }
    }

    /* After: interchanged loops walk the row-major array sequentially */
    for (i = 0; i < M; i++) {
        for (j = 0; j < N; j++) {
            x[i][j] = 2 * x[i][j];
        }
    }

What type of locality does this improve?

Loop Fusion

    /* Before: two separate passes over a and c */
    for (i = 0; i < N; i++)
        a[i] = b[i] * c[i];
    for (i = 0; i < N; i++)
        d[i] = a[i] * c[i];

    /* After: one fused pass reuses a[i] and c[i] while they are cached */
    for (i = 0; i < N; i++) {
        a[i] = b[i] * c[i];
        d[i] = a[i] * c[i];
    }

What type of locality does this improve?

Blocking

    /* Naive matrix multiply: touches all of z for each row of x */
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++) {
            r = 0;
            for (k = 0; k < N; k++)
                r = r + y[i][k] * z[k][j];
            x[i][j] = r;
        }

[Figure: access patterns of x, y, and z for the naive loop]

    /* Blocked version: works on b x b tiles of z. Note x must be
       zero-initialized, since each (jj, kk) tile adds a partial sum. */
    for (jj = 0; jj < N; jj = jj + b)
        for (kk = 0; kk < N; kk = kk + b)
            for (i = 0; i < N; i++)
                for (j = jj; j < min(jj + b, N); j++) {
                    r = 0;
                    for (k = kk; k < min(kk + b, N); k++)
                        r = r + y[i][k] * z[k][j];
                    x[i][j] = x[i][j] + r;
                }

[Figure: access patterns of x, y, and z for the blocked loop, marking old accesses, new accesses, and untouched data]

What type of locality does this improve?
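
As a back-of-the-envelope way to pick the block size b (the cache parameters here are assumptions, not from the lecture): for fixed jj and kk, the inner loops keep a b x b tile of z live in the cache along with b-element strips of x and y, so the resident footprint is roughly b^2 + 2b elements. For a 32KB data cache holding 4096 doubles:

    b^2 + 2b <= 4096  =>  b <= 63

so b = 32 (conservative) or b = 63 keeps the tile resident across all N iterations of i.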

Workstation Memory System (Apple PowerMac G5, 2003)
- Dual 2GHz processors, each with: 64KB direct-mapped I-cache, 32KB 2-way D-cache, and a 512KB 8-way unified L2 cache; all caches use 128B lines
- Frontside bus to the North Bridge chip: 1GHz, 2x32-bit, 16GB/s
- Up to 8GB DDR SDRAM: 400MHz, 128-bit bus, 6.4GB/s
- AGP graphics card: 533MHz, 32-bit bus, 2.1GB/s
- PCI-X expansion: 133MHz, 64-bit bus, 1GB/s

Acknowledgements
These slides contain material developed and copyright by:
- Arvind (MIT)
- Krste Asanovic (MIT/UCB)
- Joel Emer (Intel/MIT)
- James Hoe (CMU)
- John Kubiatowicz (UCB)
- David Patterson (UCB)
MIT material derived from course 6.823. UCB material derived from course CS252.