Computer Architecture: A Quantitative Approach, Fifth Edition. Chapter 2: Memory Hierarchy Design

Introduction. Programmers want unlimited amounts of memory with low latency, but fast memory technology is more expensive per bit than slower memory. The solution is to organize the memory system into a hierarchy: the entire addressable memory space is available in the largest, slowest memory, with incrementally smaller and faster memories, each containing a subset of the memory below it, in steps up toward the processor. Temporal and spatial locality ensure that nearly all references can be found in the smaller memories, giving the illusion of a large, fast memory being presented to the processor.

Memory Hierarchy (figure).

Memory Performance Gap (figure).

Memory Hierarchy Design. Memory hierarchy design becomes more crucial with recent multi-core processors, because aggregate peak bandwidth grows with the number of cores. The Intel Core i7 can generate two references per core per clock; with four cores and a 3.2 GHz clock, that is 25.6 billion 64-bit data references/second plus 12.8 billion 128-bit instruction references/second, a total of 409.6 GB/s. DRAM bandwidth is only 6% of this (25 GB/s). Meeting this demand requires multi-port, pipelined caches, two levels of cache per core, and a shared third-level cache on chip.
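
As a quick check of the 409.6 GB/s figure (counting 8 bytes per 64-bit data reference and 16 bytes per 128-bit instruction fetch, and assuming one instruction fetch per core per clock):

\[
\begin{aligned}
\text{data:} &\quad 4 \times 3.2\,\mathrm{GHz} \times 2 \times 8\,\mathrm{B} = 204.8\,\mathrm{GB/s} \\
\text{instructions:} &\quad 4 \times 3.2\,\mathrm{GHz} \times 1 \times 16\,\mathrm{B} = 204.8\,\mathrm{GB/s} \\
\text{total:} &\quad 204.8 + 204.8 = 409.6\,\mathrm{GB/s}
\end{aligned}
\]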

Performance and Power. High-end microprocessors have more than 10 MB of on-chip cache, which consumes a large share of the area and power budget.

Memory Hierarchy Basics. When a word is not found in the cache, a miss occurs: the word is fetched from a lower level in the hierarchy (another cache or main memory), requiring a higher-latency reference. The other words in the same block are fetched as well, taking advantage of spatial locality. The block may be placed anywhere within its set, and the set is determined by the address: (block address) MOD (number of sets).
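
A minimal C sketch of this placement rule, assuming a hypothetical cache with 64-byte blocks and 256 sets (the names and parameters are illustrative, not from the slides):

#include <stdint.h>
#include <stdio.h>

#define BLOCK_SIZE 64   /* bytes per block (assumed) */
#define NUM_SETS   256  /* sets in the cache (assumed) */

/* Map a byte address to its cache set: (block address) MOD (number of sets). */
static uint32_t cache_set(uint32_t addr) {
    uint32_t block_addr = addr / BLOCK_SIZE; /* strip the block offset */
    return block_addr % NUM_SETS;            /* pick the set */
}

int main(void) {
    uint32_t addr = 0x12345678;
    printf("address 0x%08x -> set %u\n", addr, cache_set(addr));
    return 0;
}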

Memory Hierarchy Basics. n blocks per set => n-way set associative; a direct-mapped cache has one block per set, and a fully associative cache has one set. Writing to the cache: two strategies. Write-through immediately updates the lower levels of the hierarchy; write-back updates the lower levels only when an updated (dirty) block is replaced. Both strategies use a write buffer to make writes asynchronous.
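
As an illustration (a toy sketch, not the book's code), the difference between the two write policies shows up in how a store hit is handled:

#include <stdbool.h>
#include <stdint.h>

struct cache_line {
    uint32_t tag;
    bool     valid;
    bool     dirty;            /* only meaningful for write-back */
    uint8_t  data[64];
};

void store_write_through(struct cache_line *line, int off, uint8_t v,
                         uint8_t *lower_level) {
    line->data[off] = v;       /* update the cache... */
    lower_level[off] = v;      /* ...and immediately update the lower level
                                  (in hardware, via the write buffer) */
}

void store_write_back(struct cache_line *line, int off, uint8_t v) {
    line->data[off] = v;       /* update only the cache */
    line->dirty = true;        /* lower level is updated later, when this
                                  dirty block is evicted */
}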

Memory Hierarchy Basics. Miss rate: the fraction of cache accesses that result in a miss. Causes of misses (the three C's): compulsory misses, on the first reference to a block; capacity misses, when blocks are discarded because the cache cannot hold all the blocks the program needs and must be retrieved again later; and conflict misses, when the program makes repeated references to multiple addresses from different blocks that map to the same location in the cache.

Memory Hierarchy Basics. Note that speculative and multithreaded processors may execute other instructions during a miss, which reduces the performance impact of misses.

Memory Hierarchy Basics. Six basic cache optimizations:
1. Larger block size: reduces compulsory misses, but increases capacity and conflict misses and increases miss penalty.
2. Larger total cache capacity: reduces miss rate, but increases hit time and power consumption.
3. Higher associativity: reduces conflict misses, but increases hit time and power consumption.
4. More cache levels: reduces overall memory access time.
5. Giving priority to read misses over writes: reduces miss penalty.
6. Avoiding address translation during cache indexing: reduces hit time.

Direct-Mapped (1-Way) Cache With 64 Bytes Per Block. (Figure: bit fields in a 32-bit memory address used to access the cache.) Capacity: 16 KB = 256 sets x 1 block x 64 bytes. Address bits: 18-bit tag, 8-bit index (256 entries), 6-bit block offset (64 bytes/block). The index selects the cache set, which is the location of all possible blocks for that address; only the tag must be checked on a lookup, since the index and offset bits are implied by the block's position. Increasing associativity shrinks the index and expands the tag.
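
A small C sketch of the address decomposition for exactly this cache geometry (a hypothetical helper, not from the slides):

#include <stdint.h>
#include <stdio.h>

/* 32-bit address -> fields for a 16 KB direct-mapped cache with
   64-byte blocks: | tag:18 | index:8 | offset:6 | */
static void decode(uint32_t addr) {
    uint32_t offset = addr & 0x3F;          /* low 6 bits   */
    uint32_t index  = (addr >> 6) & 0xFF;   /* next 8 bits  */
    uint32_t tag    = addr >> 14;           /* high 18 bits */
    printf("addr 0x%08x: tag=0x%05x index=%u offset=%u\n",
           addr, tag, index, offset);
}

int main(void) {
    decode(0xDEADBEEF);
    return 0;
}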

Ten Advanced Optimizations:
1. Small and simple first-level caches
2. Way prediction
3. Pipelined cache access
4. Nonblocking caches
5. Multibanked caches
6. Critical word first and early restart
7. Merging write buffer
8. Compiler optimizations
9. Hardware prefetching
10. Compiler prefetching

1. Small, Simple First-Level Caches. The critical timing path involves addressing the tag memory, comparing tags, and then selecting the correct set. Direct-mapped caches can overlap the tag comparison with transmission of the data. Lower associativity also reduces power, because fewer cache lines are accessed.

L1 Size and Associativity (figure): access time vs. size and associativity.

L1 Size and Associativity (figure): energy per read vs. size and associativity.

Figure 2.16: The virtual address, physical address, indexes, tags, and data blocks for the ARM Cortex-A8 data caches and data TLB. The instruction hierarchy is similar. The TLB is fully associative with 32 entries. The L1 cache is four-way set associative with 64-byte blocks and 32 KB capacity. The L2 cache is eight-way set associative with 64-byte blocks and 1 MB capacity. This figure doesn't show the valid bits and protection bits for the caches and TLB, nor the way-prediction bits that would dictate the predicted bank of the L1 cache.

2. Way Prediction. To improve hit time, predict the way in order to pre-set the mux that selects the way (and possibly the block); a misprediction gives a longer hit time. Prediction accuracy is above 90% for two-way and above 80% for four-way caches, and the I-cache predicts better than the D-cache. First used on the MIPS R10000 in the mid-90s; used on the ARM Cortex-A8. The idea can be extended to predict the block as well ("way selection"), which increases the misprediction penalty.

3. Pipelined Cache Access. Pipeline cache access to improve bandwidth. Examples: Pentium, 1 cycle; Pentium Pro through Pentium III, 2 cycles; Pentium 4 and Core i7, 4 cycles. Pipelining increases the branch misprediction penalty, but makes it easier to increase associativity.

4. Nonblocking Caches. Allow hits before previous misses complete: "hit under miss", or even "hit under multiple miss"; the L2 cache must support this. In general, processors can hide an L1 miss penalty but not an L2 miss penalty.

5. Multibanked Caches. Organize the cache as independent banks to support simultaneous access; banks are interleaved according to block address. The ARM Cortex-A8 supports 1 to 4 banks for L2; the Intel i7 has 4 banks in L1 and 8 banks in L2.

6. Critical Word First, Early Restart. Critical word first: request the missed word from memory first and send it to the processor as soon as it arrives. Early restart: request the words in normal order, but send the missed word to the processor as soon as it arrives. The effectiveness of these strategies depends on block size and on the likelihood of another access to the portion of the block that has not yet been fetched.

7. Merging Write Buffer. When storing to a block that is already pending in the write buffer, update the existing write-buffer entry instead of allocating a new one. This reduces stalls due to a full write buffer. It should not be applied to I/O control and data register addresses, where each write must be performed separately. (Figure: write buffer contents without and with merging.)
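
A toy sketch of the merging decision (the data structures and names are invented for illustration):

#include <stdbool.h>
#include <stdint.h>

#define ENTRIES    4
#define BLOCK_SIZE 8   /* 4-byte words per write-buffer entry (assumed) */

struct wb_entry {
    bool     valid;
    uint32_t block_addr;        /* block this entry covers */
    uint32_t word[BLOCK_SIZE];
    uint8_t  word_valid;        /* bitmask: which words hold pending data */
};

/* Try to merge a store into a pending entry; fall back to a free slot.
   Returns false if the buffer is full (the processor would stall). */
bool wb_insert(struct wb_entry buf[ENTRIES], uint32_t addr, uint32_t data) {
    uint32_t block = addr / (BLOCK_SIZE * 4);
    uint32_t w     = (addr / 4) % BLOCK_SIZE;
    for (int i = 0; i < ENTRIES; i++)       /* merge into a pending block */
        if (buf[i].valid && buf[i].block_addr == block) {
            buf[i].word[w] = data;
            buf[i].word_valid |= 1u << w;
            return true;
        }
    for (int i = 0; i < ENTRIES; i++)       /* otherwise take a free entry */
        if (!buf[i].valid) {
            buf[i] = (struct wb_entry){ .valid = true, .block_addr = block };
            buf[i].word[w] = data;
            buf[i].word_valid = 1u << w;
            return true;
        }
    return false;                           /* buffer full: stall */
}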

8. Compiler Optimizations. Loop interchange: swap nested loops so that memory is accessed in sequential order. Blocking: instead of accessing entire rows or columns, subdivide matrices into blocks; this requires more memory accesses but improves the locality of those accesses. (Examples follow.)

Compiler Optimizations: Data. (Four methods widely used for fast scientific codes.) Merging arrays: improve spatial locality by using a single array of compound elements instead of two separate arrays. Loop interchange: change the nesting of loops to access data in the order in which it is stored in memory. Loop fusion: combine two non-dependent loops that have the same looping structure, so each iteration makes more accesses to the common variables. Blocking: improve temporal locality by repeatedly accessing blocks of data (sized to fit the cache) instead of walking whole rows or columns and moving from cache block to cache block rapidly.

Merging Arrays Example

/* Before: 2 sequential arrays */
int val[SIZE];
int key[SIZE];

/* After: 1 array of structures */
struct merge {
    int val;
    int key;
};
struct merge merged_array[SIZE];

Reduces conflicts between val and key, and improves spatial locality.

Loop Interchange Example

/* Before: non-lexicographic accesses */
for (k = 0; k < 100; k = k+1)
    for (j = 0; j < 100; j = j+1)
        for (i = 0; i < 5000; i = i+1)
            x[i][j] = 2 * x[i][j];

/* After: since C stores x[i][j+1] just after x[i][j] in memory */
for (k = 0; k < 100; k = k+1)
    for (i = 0; i < 5000; i = i+1)
        for (j = 0; j < 100; j = j+1)
            x[i][j] = 2 * x[i][j];

Sequential accesses instead of striding through memory every 100 words; improves spatial locality.

Loop Fusion Example

/* Before */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1)
        a[i][j] = 1/b[i][j] * c[i][j];
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1)
        d[i][j] = a[i][j] + c[i][j];

/* After */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1) {
        a[i][j] = 1/b[i][j] * c[i][j];
        d[i][j] = a[i][j] + c[i][j];
    }

Two misses per access to a and c versus one miss per access; improves temporal locality.

Blocking Example

/* Before */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1) {
        r = 0;
        for (k = 0; k < N; k = k+1)
            r = r + y[i][k]*z[k][j];
        x[i][j] = r;
    }

The two inner loops read all N x N elements of z[], read the N elements of one row of y[] repeatedly, and write the N elements of one row of x[]. Capacity misses are a function of N and cache size: 2N^3 + N^2 words are accessed (assuming no conflicts; otherwise worse). For large N, these long accesses repeatedly flush cache blocks that are needed again soon. Better idea: compute on a B x B submatrix that fits in the cache.

Blocking Example

/* After */
for (jj = 0; jj < N; jj = jj+B)
    for (kk = 0; kk < N; kk = kk+B)
        for (i = 0; i < N; i = i+1)
            for (j = jj; j < min(jj+B, N); j = j+1) {
                r = 0;
                for (k = kk; k < min(kk+B, N); k = k+1)
                    r = r + y[i][k]*z[k][j];
                x[i][j] = x[i][j] + r;
            }

B is called the blocking factor. Capacity misses fall from 2N^3 + N^2 to 2N^3/B + N^2. Do conflict misses fall as well?

Naive Matrix Multiply

{implements C = C + A*B}
for i = 1 to n
    {read row i of A into fast memory}          => n^2 reads
    for j = 1 to n
        {read C(i,j) into fast memory}          => n^2 reads
        {read column j of B into fast memory}   => n^3 reads
        for k = 1 to n
            C(i,j) = C(i,j) + A(i,k) * B(k,j)
        {write C(i,j) back to slow memory}      => n^2 writes

(Diagram: C(i,j) = C(i,j) + A(i,:) * B(:,j).)

Blocked (Tiled) Matrix Multiply

Consider A, B, C to be n-by-n matrices viewed as N-by-N matrices of b-by-b subblocks, where b = n/N is called the block size.

for i = 1 to N
    for j = 1 to N
        {read block C(i,j) into fast memory}
        for k = 1 to N
            {read block A(i,k) into fast memory}
            {read block B(k,j) into fast memory}
            C(i,j) = C(i,j) + A(i,k) * B(k,j)   {do a matrix multiply on blocks}
        {write block C(i,j) back to slow memory}

(Diagram: C(i,j) = C(i,j) + A(i,k) * B(k,j).)
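
A concrete C version of the tiled loop nest (a minimal sketch; the block size b and matrix size n are placeholders to be tuned per machine):

/* C = C + A*B on n x n row-major matrices, computed b x b tiles at a time.
   Tiling keeps the current tiles of A, B, and C resident in the cache. */
static inline int min(int a, int c) { return a < c ? a : c; }

void matmul_tiled(int n, int b, const double *A, const double *B, double *C) {
    for (int ii = 0; ii < n; ii += b)
        for (int jj = 0; jj < n; jj += b)
            for (int kk = 0; kk < n; kk += b)
                /* multiply the (ii,kk) tile of A by the (kk,jj) tile of B */
                for (int i = ii; i < min(ii + b, n); i++)
                    for (int j = jj; j < min(jj + b, n); j++) {
                        double r = C[i*n + j];
                        for (int k = kk; k < min(kk + b, n); k++)
                            r += A[i*n + k] * B[k*n + j];
                        C[i*n + j] = r;
                    }
}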

Summary: Compiler Optimizations. (Figure: performance improvement, roughly 1x to 3x, from merged arrays, loop interchange, loop fusion, and blocking on benchmarks including vpenta, gmty, btrix, mxm, and cholesky from nasa7, plus tomcatv, spice, and compress.)

Sparse Matrix: Search for Blocking. For a finite element problem [Im, Yelick, Vuduc, 2005]. (Figure: Mflop/s of blocked versus reference code; the best blocking was 4x2.)

9. Hardware Prefetching. Fetch two blocks on a miss: the requested block and the next sequential block. (Figure: performance gains from hardware prefetching on the Pentium 4.)

10. Compiler Prefetching. Insert prefetch instructions before the data is needed. Prefetches are non-faulting: they do not cause exceptions. Register prefetch loads the data into a register; cache prefetch loads it only into the cache. Combine with loop unrolling and software pipelining.
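
As an illustration, GCC and Clang expose a cache-prefetch intrinsic, __builtin_prefetch; here is a sketch of prefetching a few iterations ahead in a loop (the distance of 8 is an assumed tuning parameter):

/* Prefetch a[i+8] while working on a[i]; the prefetch is non-faulting,
   so running past the end of the array is safe. */
void scale(double *a, int n, double s) {
    for (int i = 0; i < n; i++) {
        __builtin_prefetch(&a[i + 8], 0, 1); /* read, low temporal locality */
        a[i] *= s;
    }
}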

Summary. (Table: the ten advanced optimizations and their effects on hit time, bandwidth, miss penalty, miss rate, power, and hardware complexity.)

Memory Technology. Performance metrics: latency is the concern of the cache, while bandwidth is the concern of multiprocessors and I/O. Access time: the time between a read request and when the desired word arrives. Cycle time: the minimum time between unrelated requests to memory. DRAM is used for main memory, SRAM for caches.

Memory Technology. SRAM: requires only low power to retain its bits, and six transistors per bit. DRAM: one transistor per bit; must be re-written after being read, and must also be periodically refreshed, every ~8 ms (an entire row can be refreshed simultaneously). DRAM address lines are multiplexed: the upper half of the address is sent with the row access strobe (RAS), the lower half with the column access strobe (CAS).
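
A sketch of the multiplexed addressing, assuming a hypothetical DRAM with 10 column bits (the geometry is illustrative):

#include <stdint.h>

/* Split a DRAM address into the two halves sent over the shared
   address pins: row first (with RAS), then column (with CAS). */
#define COL_BITS 10  /* assumed geometry */

uint32_t dram_row(uint32_t addr) { return addr >> COL_BITS; }
uint32_t dram_col(uint32_t addr) { return addr & ((1u << COL_BITS) - 1); }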

Memory Technology. Amdahl suggested that memory capacity should grow linearly with processor speed; unfortunately, memory capacity and speed have not kept pace with processors. Some optimizations: multiple accesses to the same row; synchronous DRAM (SDRAM), which adds a clock to the DRAM interface and supports burst mode with critical word first; wider interfaces; double data rate (DDR); and multiple banks on each DRAM device.

Memory Optimizations. (Tables of DRAM and SDRAM characteristics.)

Memory Optimizations. DDR generations:
- DDR2: lower power (2.5 V -> 1.8 V), higher clock rates (266, 333, 400 MHz)
- DDR3: 1.5 V, 800 MHz
- DDR4: 1-1.2 V, 1600 MHz
GDDR5 is graphics memory based on DDR3.

Memory Optimizations. Graphics memory achieves 2-5x the bandwidth per DRAM of DDR3, using wider interfaces (32 bits vs. 16) and higher clock rates; the higher rates are possible because the parts are attached by soldering instead of mounted in socketed DIMM modules. Reducing power in SDRAMs: lower voltage, and a low-power mode (the DRAM ignores the clock but continues to refresh).

Memory Power Consumption (figure).

Flash Memory. A type of EEPROM: it must be erased (in blocks) before being overwritten, it is non-volatile, and it endures only a limited number of write cycles. Flash is cheaper than SDRAM but more expensive than disk, and slower than SDRAM but faster than disk.

Memory Dependability. Memory is susceptible to cosmic rays. Soft errors are dynamic errors, detected and fixed by error-correcting codes (ECC). Hard errors are permanent errors; spare rows are used to replace defective rows. Chipkill is a RAID-like error-recovery technique.
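
To make the ECC idea concrete, here is a toy Hamming(7,4) single-error-correcting code in C; real DRAM ECC uses wider SECDED codes (e.g. 72 bits protecting 64), but the principle is the same:

#include <stdint.h>
#include <stdio.h>

/* Encode 4 data bits into a 7-bit codeword: p1 p2 d1 p3 d2 d3 d4. */
static uint8_t encode(uint8_t d) {
    uint8_t d1 = d & 1, d2 = (d >> 1) & 1, d3 = (d >> 2) & 1, d4 = (d >> 3) & 1;
    uint8_t p1 = d1 ^ d2 ^ d4;   /* covers codeword positions 1,3,5,7 */
    uint8_t p2 = d1 ^ d3 ^ d4;   /* covers positions 2,3,6,7 */
    uint8_t p3 = d2 ^ d3 ^ d4;   /* covers positions 4,5,6,7 */
    return p1 | p2 << 1 | d1 << 2 | p3 << 3 | d2 << 4 | d3 << 5 | d4 << 6;
}

/* Recompute the parity checks; the syndrome names the flipped position. */
static uint8_t correct(uint8_t c) {
    uint8_t s1 = (c ^ c >> 2 ^ c >> 4 ^ c >> 6) & 1;
    uint8_t s2 = (c >> 1 ^ c >> 2 ^ c >> 5 ^ c >> 6) & 1;
    uint8_t s3 = (c >> 3 ^ c >> 4 ^ c >> 5 ^ c >> 6) & 1;
    uint8_t syndrome = s1 | s2 << 1 | s3 << 2;
    if (syndrome) c ^= 1 << (syndrome - 1);  /* flip the bad bit back */
    return c;
}

int main(void) {
    uint8_t c = encode(0xB);        /* data bits 1011 */
    uint8_t bad = c ^ (1 << 4);     /* a "cosmic ray" flips position 5 */
    printf("sent %02x, got %02x, corrected %02x\n", c, bad, correct(bad));
    return 0;
}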

Virtual Memory. Protection via virtual memory keeps processes in their own memory space. The role of the architecture: provide user mode and supervisor mode; protect certain aspects of CPU state; provide mechanisms for switching between user mode and supervisor mode; provide mechanisms to limit memory accesses; and provide a TLB to translate addresses.

Virtual Machines. Support isolation and security when sharing a computer among many unrelated users; enabled by the raw speed of modern processors, which makes the overhead more acceptable. They allow different ISAs and operating systems to be presented to user programs ("system virtual machines"). The SVM software is called a virtual machine monitor (VMM) or hypervisor, and the individual virtual machines that run under the monitor are called guest VMs.

Impact of VMs on Virtual Memory. Each guest OS maintains its own set of page tables. The VMM adds a level of memory between physical and virtual memory, called "real memory", and maintains a shadow page table that maps guest virtual addresses directly to physical addresses. This requires the VMM to detect a guest's changes to its own page table, which occurs naturally if accessing the page table pointer is a privileged operation.
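
A toy sketch of the two mappings the shadow table collapses (all structures and names are invented for illustration; real page tables are multi-level):

#include <stdint.h>

#define PAGES 256  /* toy address space of 256 pages (assumed) */

/* Guest OS mapping: guest-virtual page -> guest-"physical" (real) page. */
uint32_t guest_pt[PAGES];
/* VMM mapping: real page -> host-physical page. */
uint32_t real_to_phys[PAGES];
/* Shadow table given to the hardware MMU: guest-virtual page ->
   host-physical page, the composition of the two mappings above. */
uint32_t shadow_pt[PAGES];

/* Called by the VMM when it detects the guest changed an entry
   (e.g. via a trap on a privileged page-table update). */
void update_shadow(uint32_t gv_page) {
    shadow_pt[gv_page] = real_to_phys[guest_pt[gv_page]];
}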