Introduction to cache memories

Course on: Advanced Computer Architectures
Introduction to cache memories
Prof. Cristina Silvano, Politecnico di Milano
email: cristina.silvano@polimi.it

Summary
- Main goal
- Spatial and temporal locality
- Levels of memory hierarchy
- Cache memories: basic concepts
- Architecture of a cache memory: Direct Mapped Cache, Fully Associative Cache, N-way Set-Associative Cache
- Performance evaluation
- Memory technology

Main goal
To increase the performance of a computer through the memory system, in order to:
- Provide the user with the illusion of a memory that is simultaneously large and fast
- Provide data to the processor at high frequency

Problem: processor-memory performance gap
[Figure: the growing gap between processor and memory performance over time]

Solution: exploit the principle of locality of references
- Temporal locality: when there is a reference to one memory element, the tendency is to reference the same element again soon (e.g., instructions and data reused in loop bodies)
- Spatial locality: when there is a reference to one memory element, the tendency is to reference soon other memory elements whose addresses are close by (e.g., sequences of instructions, or accesses to data organized as arrays or matrices)

Caches exploit both types of predictability:
- Temporal locality: keep the contents of recently accessed locations.
- Spatial locality: fetch blocks of data around recently accessed locations.

Levels of Memory Hierarchy

Nehalem Architecture: Intel Core i7

Levels of the memory hierarchy (upper level: faster and more expensive; lower level: larger):
- CPU registers: < 1 KB, ~300 ps
- L1 cache (SRAM): < 64 KB, ~1 ns
- L2 cache (SRAM): < 256 KB, 3-10 ns
- Main memory (DRAM): 4-16 GB, 50-100 ns

Cache Memory: Basic Concepts
The memory hierarchy is composed of several levels, but data are copied only between two adjacent levels. Let us consider two levels: cache and main memory.
- The cache (upper level) is smaller, faster and more expensive than the main memory (lower level).
- The minimum chunk of data that can be copied into the cache is the block, or cache line.
- To exploit spatial locality, the block size must be a multiple of the word size. Example: a 128-bit block = 4 words of 32 bits.
- The number of blocks in the cache is given by: Number of cache blocks = Cache Size / Block Size. Example: with a 64 KByte cache and a 128-bit (16-Byte) block size, Number of cache blocks = 64 KByte / 16 Byte = 4K blocks.

Cache Hit
If the requested data is found in one of the cache blocks (upper level), there is a hit in the cache access: the processor is served by the cache without accessing main memory.

Cache Miss
If the requested data is not found in one of the cache blocks (upper level), there is a miss in the cache access: to find the block, we need to access the lower level of the memory hierarchy. In case of a data miss, we need to:
- Stall the CPU;
- Request the block from the main memory;
- Copy (write) the block into the cache;
- Repeat the cache access (this time, a hit).

Memory Hierarchy: Definitions
- Hit: data found in a block of the upper level.
- Hit Rate: number of memory accesses that find the data in the upper level with respect to the total number of memory accesses: Hit Rate = # hits / # memory accesses
- Hit Time: time to access the data in the upper level of the hierarchy, including the time needed to decide whether the access attempt will result in a hit or a miss.

Memory Hierarchy: Definitions
- Miss: the data must be taken from the lower level.
- Miss Rate: number of memory accesses not finding the data in the upper level with respect to the total number of memory accesses: Miss Rate = # misses / # memory accesses
- By definition: Hit Rate + Miss Rate = 1
- Miss Penalty: time needed to access the lower level and to replace the block in the upper level.
- Miss Time = Hit Time + Miss Penalty
- Typically: Hit Time << Miss Penalty

Average Memory Access Time
AMAT = Hit Rate x Hit Time + Miss Rate x Miss Time
where: Miss Time = Hit Time + Miss Penalty and Hit Rate + Miss Rate = 1
Therefore: AMAT = Hit Time + Miss Rate x Miss Penalty
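As a quick numeric illustration of the formula above, here is a minimal C sketch; the hit time, miss rate and miss penalty values are assumptions chosen for the example, not figures from the slides.

```c
#include <stdio.h>

/* AMAT = Hit Time + Miss Rate * Miss Penalty
   (all times in clock cycles; the values below are assumed for illustration) */
static double amat(double hit_time, double miss_rate, double miss_penalty) {
    return hit_time + miss_rate * miss_penalty;
}

int main(void) {
    double hit_time     = 1.0;   /* assumed: 1-cycle cache hit            */
    double miss_rate    = 0.05;  /* assumed: 5% miss rate                 */
    double miss_penalty = 100.0; /* assumed: 100-cycle main memory access */

    /* 1 + 0.05 * 100 = 6 cycles on average */
    printf("AMAT = %.2f cycles\n", amat(hit_time, miss_rate, miss_penalty));
    return 0;
}
```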

Cache Structure
Each entry in the cache includes:
1. Valid bit: indicates whether this position contains valid data. At bootstrap, all the entries in the cache are marked as INVALID.
2. Cache Tag: contains the value that uniquely identifies the memory address corresponding to the stored data.
3. Cache Data: contains a copy of the data (block or cache line).
Entry layout: | V | TAG | DATA |

The problem of block placement
Problem: given the address of a block in main memory, where can the block be placed in the cache (upper level)? We need to find the correspondence between the memory address of the block and its cache address. The correspondence depends on the cache architecture:
- Direct Mapped Cache
- Fully Associative Cache
- n-way Set-Associative Cache

Direct Mapped Cache
Each memory location corresponds to one and only one cache location. The cache address of the block is given by:
(Block Address) cache = (Block Address) mem mod (Number of cache blocks)
Memory address fields: | Tag | Index | Block Offset | (Tag and Index together form the Block Address)
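The placement rule is a one-line computation; the 8-block cache and memory block 12 below are chosen to match the recap example later in these slides.

```c
#include <stdio.h>

/* Direct mapped placement:
   (block address) cache = (block address) mem mod (number of cache blocks) */
int main(void) {
    unsigned mem_block        = 12;
    unsigned num_cache_blocks = 8;
    unsigned cache_block = mem_block % num_cache_blocks;  /* 12 mod 8 = 4 */
    printf("memory block %u -> cache block %u\n", mem_block, cache_block);
    return 0;
}
```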

Example: Direct Mapped Cache with 4 words and a memory of 16 words. Cache location 00 can be occupied only by data coming from memory addresses --00.

Cache:
ADDR  V  TAG  DATA
00    1  01   A B C D
01    1  10   C O M O
10    1  00   C A S A
11    1  11   S A R A

Memory:
ADDRESS (TAG INDEX)  DATA
00 00                S O L E
00 01                M A R E
00 10                C A S A
00 11                L U C E
01 00                A B C D
01 01                R O M A
01 10                B A R I
01 11                A N N A
10 00                S A L E
10 01                C O M O
10 10                P A R I
10 11                P E P E
11 00                M A N O
11 01                B I R O
11 10                D U C A
11 11                S A R A

Simple Direct Mapped Cache, 4 blocks. Example block address 0100: Cache Tag = 01 (2 bits), Cache Index = 00 (2 bits), Byte Select = 00 (2 bits).
[Figure: 4 cache entries (0-3), each holding a Valid Bit, a Cache Tag (e.g., 0x01) and Cache Data organized as Byte 3 .. Byte 0]

Direct Mapped Cache: Addressing
The memory address (N bits) is composed of 4 fields: | Tag | Index | Word Offset | Byte Offset |
- Byte offset within the word: B bits. If the memory is not byte-addressable, B = 0.
- Word offset within the block: K bits. If the block contains only one word, K = 0.
- Index, identifying the block: M bits.
- Tag, to be compared with the cache tag associated with the block selected by the index: N - (M + K + B) bits.

Direct Mapped Cache: Example
Memory address: N = 32 bits; cache size 64 KByte; block size 128 bits.
Number of blocks = Cache Size / Block Size = 64 KByte / 16 Byte = 4K blocks
Structure of the memory address:
- Byte Offset: B = 2 bits
- Word Offset: K = 2 bits
- Index: M = 12 bits
- Tag: 32 - (12 + 2 + 2) = 16 bits
| Tag (16 bit) | Index (12 bit) | WO (2 bit) | BO (2 bit) |
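The field widths above translate directly into shift-and-mask operations. The following C sketch extracts the four fields for this exact 64 KByte / 16-Byte-block configuration; the test address is an arbitrary assumption.

```c
#include <stdio.h>

/* Field extraction for the 64 KByte, 16-Byte-block direct mapped example:
   | Tag (16 bit) | Index (12 bit) | Word Offset (2 bit) | Byte Offset (2 bit) | */
int main(void) {
    unsigned addr = 0x12345678u;               /* assumed test address */

    unsigned byte_offset =  addr        & 0x3;   /* bits [1:0]   */
    unsigned word_offset = (addr >> 2)  & 0x3;   /* bits [3:2]   */
    unsigned index       = (addr >> 4)  & 0xFFF; /* bits [15:4]  */
    unsigned tag         =  addr >> 16;          /* bits [31:16] */

    printf("tag=0x%04x index=0x%03x word=%u byte=%u\n",
           tag, index, word_offset, byte_offset);
    return 0;
}
```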

Direct Mapped Cache: 64 KByte cache size and 128-bit block size
[Figure: address bits 31-16 = Tag (16 bit), bits 15-4 = Index (12 bit), bits 3-2 = Word Offset, bits 1-0 = Byte Offset; 4K blocks, each with a valid bit, a 16-bit tag and 128-bit data; the stored tag is compared with the address tag to generate HIT, and a multiplexer selects the 32-bit word within the block]

Fully Associative Cache
In a fully associative cache, a memory block can be placed in any position of the cache, so all the cache blocks must be checked during the search for a block. The index does not exist in the memory address: there are only the tag bits (plus the offsets).
Number of blocks = Cache Size / Block Size
| Tag (N-4 bit) | WO (2 bit) | BO (2 bit) |

Example: Fully Associative Cache with 4 words and a memory of 16 words. Any cache location can be occupied by data coming from any memory location.

Cache:
ADDR  V  TAG   DATA
00    1  0100  A B C D
01    1  1011  P E P E
10    1  1010  P A R I
11    1  0001  M A R E

Memory:
ADDRESS  DATA
00 00    S O L E
00 01    M A R E
00 10    C A S A
00 11    L U C E
01 00    A B C D
01 01    R O M A
01 10    B A R I
01 11    A N N A
10 00    S A L E
10 01    C O M O
10 10    P A R I
10 11    P E P E
11 00    M A N O
11 01    B I R O
11 10    D U C A
11 11    S A R A

Fully Associative Cache: Example
Memory address: N = 32 bits; cache size 256 Byte; block size 128 bits = 16 Byte.
Number of blocks = Cache Size / Block Size = 256 Byte / 16 Byte = 16 blocks
Structure of the memory address:
- Byte Offset: B = 2 bits
- Word Offset: K = 2 bits
- Tag: 28 bits
| Tag (28 bit) | WO (2 bit) | BO (2 bit) |

Fully Associative Cache: 256-Byte cache size and 16-Byte block size
[Figure: address bits 31-4 = Tag (28 bit), bits 3-2 = Word Offset, bits 1-0 = Byte Offset; 16 blocks B0..B15, each with a valid bit, a 28-bit tag and 128-bit data; the address tag is compared in parallel with all 16 stored tags to generate HIT, and a 16:1 multiplexer selects the output word]

n-way Set-Associative Cache
The cache is composed of sets, each set composed of n blocks:
Number of blocks = Cache Size / Block Size
Number of sets = Cache Size / (Block Size x n)
Each memory block corresponds to a single set of the cache, and the block can be placed in any of the n blocks of the set; the search must be done over all the blocks of the set.
(Set) cache = (Block Address) mem mod (Number of sets in cache)
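The set selection rule can be shown with a small C sketch; the 8-block, 2-way geometry below is an assumption chosen to match the recap example later in these slides.

```c
#include <stdio.h>

/* Set-associative placement:
   (set) cache = (block address) mem mod (number of sets) */
int main(void) {
    unsigned num_blocks = 8, ways = 2;
    unsigned num_sets  = num_blocks / ways;  /* 4 sets */
    unsigned mem_block = 12;
    printf("memory block %u -> set %u (any of the %u ways)\n",
           mem_block, mem_block % num_sets, ways);  /* 12 mod 4 = 0 */
    return 0;
}
```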

Example: 2-way Set-Associative Cache with 4 words and a memory of 16 words. Set 0 of the cache can be occupied by data coming from memory addresses ---0; the block can be placed in either of the 2 blocks of set 0.

Cache:
SET  ADDR  V  TAG  DATA
0    00    1  010  A B C D
0    01    1  100  S A L E
1    10    1  001  L U C E
1    11    1  011  A N N A

Memory:
ADDRESS (TAG INDEX)  DATA
000 0                S O L E
000 1                M A R E
001 0                C A S A
001 1                L U C E
010 0                A B C D
010 1                R O M A
011 0                B A R I
011 1                A N N A
100 0                S A L E
100 1                C O M O
101 0                P A R I
101 1                P E P E
110 0                M A N O
110 1                B I R O
111 0                D U C A
111 1                S A R A

n-way Set-Associative Cache: Addressing
The N-bit memory address is composed of 4 fields: | Tag | Index | Word Offset | Byte Offset |
- Byte offset: B bits
- Word offset: K bits
- Index, identifying the set: M bits
- Tag: N - (M + K + B) bits

4-way Set-Associative Cache: Example
Memory address: N = 32 bits; cache size 4 KByte; block size 32 bits.
Number of blocks = Cache Size / Block Size = 4 KByte / 4 Byte = 1K blocks
Number of sets = Cache Size / (Block Size x n) = 4 KByte / (4 Byte x 4) = 256 sets
Structure of the memory address:
- Byte Offset: B = 2 bits
- Word Offset: K = 0 bits (one word per block)
- Index: M = 8 bits
- Tag: 22 bits

4-way Set-Associative Cache: 4 KByte cache size and 32-bit block size
[Figure: address bits 31-10 = Tag (22 bit), bits 9-2 = Index (8 bit), bits 1-0 = Byte Offset; 256 sets (0..255), each with 4 ways (WAY0..WAY3) holding a valid bit, a 22-bit tag and 32-bit data; the address tag is compared in parallel with the 4 tags of the selected set, and a 4:1 multiplexer drives the output word and the HIT signal]

Recap: Block Placement
How can memory block 12 be placed in an 8-block cache?
- Direct Mapped: only into block 4 (12 mod 8)
- 2-way Set-Associative: anywhere in set 0 (12 mod 4)
- Fully Associative: anywhere
(Cache blocks are numbered 0-7; in the 2-way case they form sets 0-3; the memory block-frame addresses run 0-31.)

Recap: Block Identification
- Direct Mapped: compute the block position (Block Number MOD Number of blocks), compare the tag of that block and verify the valid bit
- Set-Associative: identify the set (Block Number MOD Number of sets), compare the tags of the set and verify the valid bits
- Fully Associative: compare the tags in every block and verify the valid bits

Recap: Increasing Associativity
Main advantage: reduction of the miss rate.
Main disadvantages: higher implementation cost, and an increase of the hit time.
The choice among direct mapped, fully associative and set-associative caches depends on the trade-off between the implementation cost of associativity (both in time and in added hardware) and the miss rate reduction. Increasing associativity shrinks the index bits and expands the tag bits:
| Tag | Index | Block Offset | (Tag and Index together form the Block Address)

The problem of block replacement
In case of a miss in a fully associative cache, we need to decide which block to replace: any block is a potential candidate for replacement. If the cache is set-associative, we need to select among the blocks of the given set. For a direct mapped cache there is only one candidate, so no replacement strategy is needed.
Main strategies used to choose the block to be replaced in associative caches (a sketch of all three follows below):
- Random (or pseudo-random)
- LRU (Least Recently Used)
- FIFO (First In First Out): the oldest block is replaced
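The three policies can be sketched as victim-selection functions over one set. The `Way` structure and its last-use/fill-time counters are hypothetical bookkeeping; real hardware usually approximates LRU with a few status bits per set.

```c
#include <stdlib.h>

/* One way of an associative set. An invalid way, if present, would normally
   be chosen before applying any of these policies. */
typedef struct {
    int      valid;
    unsigned tag;
    unsigned last_use;   /* updated on every hit (for LRU)            */
    unsigned fill_time;  /* set when the block is brought in (for FIFO) */
} Way;

/* LRU: evict the way whose last use is oldest. */
int victim_lru(const Way set[], int n) {
    int v = 0;
    for (int i = 1; i < n; i++)
        if (set[i].last_use < set[v].last_use) v = i;
    return v;
}

/* FIFO: evict the way that was filled earliest, regardless of use. */
int victim_fifo(const Way set[], int n) {
    int v = 0;
    for (int i = 1; i < n; i++)
        if (set[i].fill_time < set[v].fill_time) v = i;
    return v;
}

/* Random: any way is a candidate. */
int victim_random(int n) {
    return rand() % n;
}
```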

The problem of the write policy
- Write-Through: the information is written both to the block in the cache and to the block in the lower-level memory.
- Write-Back: the information is written only to the block in the cache. The modified cache block is written to the lower-level memory only when it is replaced due to a miss.
Is a block clean or dirty? We need to add a DIRTY bit. After a write in the cache, the cache block becomes dirty (modified) and the main memory contains a value different from the value in the cache: main memory and cache are no longer coherent.

Write-Back vs Write-Through: main advantages
Write-Back:
- Blocks can be written by the processor at the rate the cache, not the main memory, can accept them.
- Multiple writes to the same block require only a single write to the main memory.
Write-Through:
- Simpler to implement, but to be effective it requires a write buffer, so as not to wait for the lower level of the memory hierarchy (avoiding write stalls).
- Read misses are cheaper, because they never require a write to the lower level of the memory hierarchy.
- Memory is always up to date.

Write buffer
Basic idea: insert a FIFO buffer between cache and memory, so that the processor does not wait for the lower-level memory access (typical number of entries: 4 to 8).
- The processor writes data to the cache and to the write buffer.
- The memory controller writes the contents of the buffer to memory.
Main problem: write buffer saturation. Write-through is always combined with a write buffer (a minimal sketch follows below).
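A write buffer is essentially a small FIFO of pending stores. The sketch below assumes a 4-entry buffer; the names (`WriteBuffer`, `wb_push`, `wb_drain`) are hypothetical. The processor side enqueues, the memory controller side drains.

```c
#include <stdbool.h>
#include <stdint.h>

#define WB_ENTRIES 4   /* assumed: 4-entry buffer, as on the slide */

typedef struct { uint32_t addr, data; } PendingWrite;

typedef struct {
    PendingWrite entry[WB_ENTRIES];
    int head, tail, count;
} WriteBuffer;

/* Processor side: returns false when the buffer is saturated (CPU must stall). */
bool wb_push(WriteBuffer *wb, uint32_t addr, uint32_t data) {
    if (wb->count == WB_ENTRIES) return false;      /* write buffer saturation */
    wb->entry[wb->tail] = (PendingWrite){addr, data};
    wb->tail = (wb->tail + 1) % WB_ENTRIES;
    wb->count++;
    return true;
}

/* Memory controller side: drain one entry to the lower-level memory. */
bool wb_drain(WriteBuffer *wb, PendingWrite *out) {
    if (wb->count == 0) return false;
    *out = wb->entry[wb->head];
    wb->head = (wb->head + 1) % WB_ENTRIES;
    wb->count--;
    return true;
}
```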

Write Miss Options: Write Allocate vs No Write Allocate
What happens on a write miss?
- Write Allocate (or "fetch on write"): allocate a new cache line, then write to it. This usually means doing a read miss first, to fill in the rest of the cache line (an alternative is per-word valid bits).
- No Write Allocate (or "write around"): simply send the write data to the lower-level memory; do not allocate a new cache line.

Write-Back vs Write-Through, Write Allocate vs No Write Allocate
To manage a write miss, both options (Write Allocate and No Write Allocate) can be used with either write policy (Write-Back and Write-Through), but usually:
- A Write-Back cache uses Write Allocate (hoping that subsequent writes to the block will again be done in the cache);
- A Write-Through cache uses No Write Allocate (since subsequent writes to the block still go to memory anyway).

Hit and Miss: Read and Write
- Read Hit: read the data in the cache.
- Write Hit: write the data both in the cache and in memory (write-through), or write in the cache only (write-back): memory is updated only when the block is replaced due to a miss.
- Read Miss: the CPU stalls, the block is requested from memory, copied into the cache, and the cache read is repeated.
- Write Miss: with write allocate, the CPU stalls, the block is requested from memory, copied into the cache, and the cache write is repeated; with no write allocate, the write data is simply sent to the lower-level memory.
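The four cases combine into one decision flow. The sketch below pairs write-back with write allocate and write-through with no write allocate, as in the usual combinations above; every helper function is a hypothetical stub, so this only illustrates the control structure, not a real cache implementation.

```c
/* Schematic control flow for one cache access. All helpers are assumed stubs. */
typedef enum { READ, WRITE } Op;

extern int      lookup(unsigned addr);                 /* 1 on hit, 0 on miss  */
extern void     fill_block(unsigned addr);             /* copy block from memory */
extern unsigned read_cache(unsigned addr);
extern void     write_cache(unsigned addr, unsigned data);
extern void     write_memory(unsigned addr, unsigned data);

unsigned cache_access(Op op, unsigned addr, unsigned data, int write_back) {
    if (!lookup(addr)) {                               /* miss: CPU stalls here */
        if (op == READ || write_back) {
            fill_block(addr);                          /* read miss, or write allocate */
        } else {
            write_memory(addr, data);                  /* no write allocate: write around */
            return 0;
        }
    }
    if (op == READ)
        return read_cache(addr);                       /* read hit (possibly after fill) */
    write_cache(addr, data);                           /* write-back: block becomes dirty */
    if (!write_back)
        write_memory(addr, data);                      /* write-through: update memory too */
    return 0;
}
```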

Recap: Four Questions about the Memory Hierarchy
- Q1: Where can a block be placed in the upper level? (Block placement)
- Q2: How is a block found if it is in the upper level? (Block identification)
- Q3: Which block should be replaced on a miss? (Block replacement)
- Q4: What happens on a write? (Write strategy)

Q1: Where can a block be placed in the upper level?
Block placement: Direct Mapped, Fully Associative, n-way Set-Associative.

Q2: How is a block found if it is in the upper level?
Block identification: compare tag(s).
- Direct mapping: compute the block position (Block Number MOD Number of blocks), compare the tag of that block and verify the valid bit.
- Set-associative mapping: identify the set (Block Number MOD Number of sets), compare the tags of the set and verify the valid bits.
- Fully associative mapping: compare the tags in every block and verify the valid bits.
There is no need to check the index or the block offset.

Q3: Which block should be replaced on a miss?
Block replacement: an easy choice for direct mapped caches; for set-associative or fully associative caches: Random, LRU (Least Recently Used), FIFO.

Q4: What happens on a write?
Write policies: Write-Through, Write-Back. Write miss options: Write Allocate, No Write Allocate.

Performance Evaluation: impact of the memory hierarchy on CPU time
CPU time = (CPU execution cycles + Memory stall cycles) x T_CLK
where:
- T_CLK = clock cycle time
- CPU execution cycles = IC x CPI_exec, with IC = Instruction Count (CPI_exec includes ALU and LOAD/STORE instructions)
- Memory stall cycles = IC x Misses per instruction x Miss Penalty
Therefore:
CPU time = IC x (CPI_exec + Misses per instruction x Miss Penalty) x T_CLK
where Misses per instruction = Memory Accesses Per Instruction (MAPI) x Miss Rate, hence:
CPU time = IC x (CPI_exec + MAPI x Miss Rate x Miss Penalty) x T_CLK

Performance Evaluation: impact of the memory hierarchy on CPU time
CPU time = IC x (CPI_exec + MAPI x Miss Rate x Miss Penalty) x T_CLK
If the stalls due to pipeline hazards are also considered:
CPU time = IC x (CPI_exec + Stalls per instruction + MAPI x Miss Rate x Miss Penalty) x T_CLK
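A minimal C sketch of this CPU time model; all input values are assumed for illustration only.

```c
#include <stdio.h>

/* CPU time = IC x (CPI_exec + MAPI x Miss Rate x Miss Penalty) x T_CLK */
int main(void) {
    double ic           = 1e9;    /* assumed: 10^9 instructions          */
    double cpi_exec     = 1.2;    /* assumed: base CPI (ALU + LD/ST)     */
    double mapi         = 1.3;    /* assumed: memory accesses per instr. */
    double miss_rate    = 0.04;   /* assumed: 4% miss rate               */
    double miss_penalty = 100.0;  /* assumed: 100 cycles                 */
    double t_clk        = 0.5e-9; /* assumed: 0.5 ns clock (2 GHz)       */

    double cpi_total = cpi_exec + mapi * miss_rate * miss_penalty;
    printf("effective CPI = %.2f\n", cpi_total);            /* 1.2 + 5.2 = 6.4 */
    printf("CPU time = %.3f s\n", ic * cpi_total * t_clk);  /* ~3.2 s          */
    return 0;
}
```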

Cache Performance
Average Memory Access Time: AMAT = Hit Time + Miss Rate x Miss Penalty
How to improve cache performance:
1. Reduce the hit time
2. Reduce the miss rate
3. Reduce the miss penalty

Unified Cache vs Separate I$ & D$ (Harvard architecture)
A unified L1 cache serves both instructions and data; separate L1 instruction (I$) and data (D$) caches better exploit the locality principle.
Average Memory Access Time for separate I$ & D$:
AMAT = %Instructions x (Hit Time + I$ Miss Rate x Miss Penalty) + %Data x (Hit Time + D$ Miss Rate x Miss Penalty)
Usually: I$ Miss Rate << D$ Miss Rate
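A numeric sketch of the split-cache AMAT formula; all rates and times are assumed values, chosen so that the I$ miss rate is much smaller than the D$ miss rate, as noted above.

```c
#include <stdio.h>

/* AMAT for separate instruction (I$) and data (D$) caches. */
int main(void) {
    double frac_instr   = 0.75;  /* assumed: 75% instruction accesses */
    double frac_data    = 0.25;  /* assumed: 25% data accesses        */
    double hit_time     = 1.0;   /* assumed, cycles                   */
    double i_miss_rate  = 0.005; /* assumed: 0.5% I$ miss rate        */
    double d_miss_rate  = 0.06;  /* assumed: 6% D$ miss rate          */
    double miss_penalty = 100.0; /* assumed, cycles                   */

    double amat = frac_instr * (hit_time + i_miss_rate * miss_penalty)
                + frac_data  * (hit_time + d_miss_rate * miss_penalty);
    printf("AMAT = %.3f cycles\n", amat);  /* 0.75*1.5 + 0.25*7 = 2.875 */
    return 0;
}
```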

Miss Penalty Reduction: Second-Level Cache
Basic idea:
- The L1 cache is small enough to match the fast CPU cycle time;
- The L2 cache is large enough to capture many accesses that would otherwise go to main memory, reducing the effective miss penalty.
Hierarchy: Processor - L1 cache - L2 cache - Main Memory

AMAT for L1 and L2 Caches
AMAT = Hit Time_L1 + Miss Rate_L1 x Miss Penalty_L1
where: Miss Penalty_L1 = Hit Time_L2 + Miss Rate_L2 x Miss Penalty_L2
Therefore:
AMAT = Hit Time_L1 + Miss Rate_L1 x (Hit Time_L2 + Miss Rate_L2 x Miss Penalty_L2)

Local and global miss rates
Definitions:
- Local miss rate: the misses in this cache divided by the total number of memory accesses to this cache: Miss Rate_L1 for L1 and Miss Rate_L2 for L2.
- Global miss rate: the misses in this cache divided by the total number of memory accesses generated by the CPU. For L1 the global miss rate is still just Miss Rate_L1, while for L2 it is Miss Rate_L1 x Miss Rate_L2.
The global miss rate is what really matters: it indicates what fraction of the memory accesses from the CPU go all the way to main memory.

Example
Let us consider a computer with an L1 and L2 cache memory hierarchy. Suppose that in 1000 memory references there are 40 misses in L1 and 20 misses in L2. What are the various miss rates?
- Miss Rate_L1 = 40 / 1000 = 4% (either local or global)
- Local Miss Rate_L2 = 20 / 40 = 50%
- Global miss rate for the last-level cache (L2): Miss Rate_L1 x Miss Rate_L2 = (40/1000) x (20/40) = 2%
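The same example can be checked in a few lines of C, extended with the two-level AMAT formula; the L1/L2 hit times and the L2 miss penalty are assumed values, not part of the slide's example.

```c
#include <stdio.h>

/* The 1000-reference example above, plus the two-level AMAT formula. */
int main(void) {
    double refs = 1000.0, l1_misses = 40.0, l2_misses = 20.0;

    double mr_l1        = l1_misses / refs;       /* 4%: local = global */
    double mr_l2_local  = l2_misses / l1_misses;  /* 50%                */
    double mr_l2_global = mr_l1 * mr_l2_local;    /* 2%                 */

    double ht_l1 = 1.0, ht_l2 = 10.0, mp_l2 = 100.0;  /* assumed, cycles */
    double amat = ht_l1 + mr_l1 * (ht_l2 + mr_l2_local * mp_l2);

    printf("L1 miss rate        = %.1f%%\n", 100 * mr_l1);
    printf("L2 local miss rate  = %.1f%%\n", 100 * mr_l2_local);
    printf("L2 global miss rate = %.1f%%\n", 100 * mr_l2_global);
    printf("AMAT = %.2f cycles\n", amat);  /* 1 + 0.04*(10 + 0.5*100) = 3.4 */
    return 0;
}
```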

AMAT for L1 and L2 Caches
AMAT = Hit Time_L1 + Miss Rate_L1 x (Hit Time_L2 + Miss Rate_L2 x Miss Penalty_L2)
     = Hit Time_L1 + Miss Rate_L1 x Hit Time_L2 + Global Miss Rate_L2 x Miss Penalty_L2
where Global Miss Rate_L2 = Miss Rate_L1 x Miss Rate_L2.

Memory stalls per instruction for L1 and L2 caches
Average memory stall cycles per instruction:
Memory stall cycles per instruction = Misses per instruction x Miss Penalty
With L1 and L2 caches:
Memory stall cycles per instruction = Misses_L1 per instruction x Hit Time_L2 + Misses_L2 per instruction x Miss Penalty_L2

Impact of L1 and L2 on CPU time
CPU time = IC x (CPI_exec + Memory stall cycles per instruction) x T_CLK
where:
- Memory stall cycles per instruction = Misses_L1 per instruction x Hit Time_L2 + Misses_L2 per instruction x Miss Penalty_L2
- Misses_L1 per instruction = MAPI x Miss Rate_L1
- Misses_L2 per instruction = MAPI x Global Miss Rate_L2
Therefore:
CPU time = IC x (CPI_exec + MAPI x Miss Rate_L1 x Hit Time_L2 + MAPI x Global Miss Rate_L2 x Miss Penalty_L2) x T_CLK

Multi-level inclusion vs exclusion: L1 and L2 caches
- Multi-level inclusion: the contents of L1 are also contained in L2 (L1 within L2, within main memory).
- Multi-level exclusion: a block is held in L1 or in L2, but not in both.

Memory Hierarchy: Summary
A cache is a small, fast storage used to improve the average access time to the slow lower-level memory. Caches exploit spatial and temporal locality to:
- Present the user with as much memory as is available in the cheapest technology;
- Provide access at the speed offered by the fastest technology.

Summary: the cache design space
Several interacting dimensions:
- cache size
- block size
- associativity
- replacement policy
- write-through vs write-back

Memory Technology
Performance metrics:
- Latency is the concern of the cache; bandwidth is the concern of multiprocessors and I/O.
- Access time: time between a read request and when the desired word arrives.
- Cycle time: minimum time between unrelated requests to memory.
DRAM is used for main memory; SRAM is used for caches.

Memory Technology: SRAM vs DRAM
SRAM:
- Requires low power to retain the bit
- Requires 6 transistors per bit
DRAM:
- Requires 1 transistor per bit
- Must be re-written after being read
- Must also be periodically refreshed (every ~8 ms); each row can be refreshed simultaneously
- Address lines are multiplexed: upper half of the address = row access strobe (RAS); lower half = column access strobe (CAS)

Memory Technology: trends and optimizations
Amdahl's law suggests that memory capacity should grow linearly with processor speed; unfortunately, memory capacity and speed have not kept pace with processors.
Some optimizations:
- Multiple accesses to the same row
- Synchronous DRAM: a clock added to the DRAM interface; burst mode with critical word first
- Wider interfaces
- Double data rate (DDR)
- Multiple banks on each DRAM device

Memory Optimizations: DDR generations
- DDR2: lower power (2.5 V -> 1.8 V), higher clock rates (266 MHz, 333 MHz, 400 MHz)
- DDR3: 1.5 V, 800 MHz
- DDR4: 1-1.2 V, 1600 MHz
GDDR5 is graphics memory based on DDR3.

Memory Optimizations: graphics memory
Graphics memory achieves 2-5x the bandwidth per DRAM of DDR3, thanks to:
- Wider interfaces (32 vs. 16 bits)
- Higher clock rates, possible because the chips are attached by soldering instead of socketed DIMM modules
Reducing power in SDRAMs: lower voltage; low-power mode (ignores the clock, but continues to refresh).

Flash Memory
- A type of EEPROM
- Must be erased (in blocks) before being overwritten
- Non-volatile
- Limited number of write cycles
- Cheaper than SDRAM, more expensive than disk
- Slower than SRAM, faster than disk

Memory Dependability
Memory is susceptible to cosmic rays.
- Soft errors (dynamic errors): detected and fixed by error correcting codes (ECC)
- Hard errors (permanent errors): use spare rows to replace defective rows
Chipkill: a RAID-like error recovery technique.