CS152 Computer Architecture and Engineering Lecture 17: Cache System

CS152 Computer Architecture and Engineering
Lecture 17: Cache System
March 17, 1995
Dave Patterson (patterson@cs) and Shing Kong (shing.kong@eng.sun.com)
Slides available on http://http.cs.berkeley.edu/~patterson
cs 152 cache.1 DAP & SIK 1995

Recap: SRAM Timing
A 2^N words x M bits SRAM has an N-bit address bus A, an M-bit data bus D, and active-low write enable (WE_L) and output enable (OE_L) controls.
Write timing: the write address must be stable on A, and Data In stable on D, for the write setup time before WE_L is asserted, and held for the write hold time afterward; the SRAM's own output stays in high-Z.
Read timing: with OE_L asserted, Data Out on D is junk until one read access time after the read address is stable on A.

Recap: DRAM Fast Page Mode Operation
A fast page mode DRAM (N rows x N cols, M-bit output) adds an N x M SRAM to save a row.
After a row is read into the register, only CAS is needed to access other M-bit blocks on that row: RAS_L remains asserted while CAS_L is toggled, so a single row address can be followed by a sequence of column addresses for the 1st, 2nd, 3rd, 4th M-bit accesses.

The Motivation for Caches
Memory system: Processor - Cache - DRAM.
Motivation: large memories (DRAM) are slow; small memories (SRAM) are fast.
Make the average access time small by servicing most accesses from a small, fast memory.
Reduce the bandwidth required of the large memory.

Outline of Today's Lecture
Recap of Memory Hierarchy & Introduction to Caches (2 min)
Questions and Administrative Matters (5 min)
An In-depth Look at the Operation of Caches (25 min)
Break (5 minutes)
Cache Write and Replacement Policy (10 min)
The Memory System of the SPARCstation 20 (10 min)
Summary (5 min)

An Expanded View of the Memory System
The processor (control + datapath) sits atop a hierarchy of memories.
Toward the processor: speed is fastest, size is smallest, cost per bit is highest.
Away from the processor: slowest, biggest, lowest cost per bit.

Levels of the Memory Hierarchy

Level          Capacity    Access Time  Cost             Staging Xfer Unit            Managed by
CPU Registers  100s Bytes  <10s ns                       Instr. Operands, 1-8 bytes   prog./compiler
Cache          K Bytes     10-100 ns    $.01-.001/bit    Blocks, 8-128 bytes          cache cntl
Main Memory    M Bytes     100ns-1us    $.01-.001        Pages, 512-4K bytes          OS
Disk           G Bytes     ms           10^-3 - 10^-4 cents  Files, Mbytes            user/operator
Tape           infinite    sec-min      10^-6 cents

Upper levels are faster; lower levels are larger.

The Principle of Locality
Programs access a relatively small portion of the address space at any instant of time - for example, 90% of the time in 10% of the code - so the probability of reference is far from uniform over the address space.
Two different types of locality:
Temporal Locality (Locality in Time): if an item is referenced, it will tend to be referenced again soon.
Spatial Locality (Locality in Space): if an item is referenced, items whose addresses are close by tend to be referenced soon.

Memory Hierarchy: Principles of Operation
At any given time, data is copied between only 2 adjacent levels:
Upper Level (Cache): the one closer to the processor - smaller, faster, and uses more expensive technology.
Lower Level (Memory): the one further away from the processor - bigger, slower, and uses less expensive technology.
Block: the minimum unit of information that can either be present or not present in the two-level hierarchy (e.g., Blk X in the upper level, Blk Y in the lower level).

Memory Hierarchy: Terminology
Hit: data appears in some block in the upper level (example: Block X).
Hit Rate: the fraction of memory accesses found in the upper level.
Hit Time: time to access the upper level, which consists of the RAM access time + the time to determine hit/miss.
Miss: data needs to be retrieved from a block in the lower level (Block Y).
Miss Rate = 1 - (Hit Rate).
Miss Penalty = time to replace a block in the upper level + time to deliver the block to the processor.
Hit Time << Miss Penalty.

Basic Terminology: Typical Values
Block (line) size: 4-128 bytes
Hit time: 1-4 cycles
Miss penalty: 8-32 cycles (and increasing)
  (access time: 6-10 cycles)
  (transfer time: 2-22 cycles)
Miss rate: 1% - 20%
Cache size: 1 KB - 256 KB

How Does Cache Work?
Temporal Locality (Locality in Time): if an item is referenced, it will tend to be referenced again soon.
  => Keep more recently accessed data items closer to the processor.
Spatial Locality (Locality in Space): if an item is referenced, items whose addresses are close by tend to be referenced soon.
  => Move blocks consisting of contiguous words into the cache.

Questions and Administrative Matters (5 Minutes)

The Simplest Cache: Direct Mapped Cache
Consider a memory with 16 one-byte locations (addresses 0 through F) and a 4-byte direct mapped cache (cache indexes 0 through 3).
Cache location 0 can be occupied by data from memory location 0, 4, 8, or C - in general, any memory location whose 2 LSBs are 0s.
Address<1:0> => cache index.
Which one should we place in the cache? How can we tell which one is in the cache?
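The mapping above can be sketched in a few lines of Python (the `cache_index` helper is invented for illustration, not from the slides):

```python
def cache_index(addr, num_entries):
    """Direct-mapped index with 1-byte blocks: the low log2(num_entries)
    bits of the address, as in the slide's 4-byte cache."""
    return addr % num_entries

# Memory locations 0x0, 0x4, 0x8, 0xC all compete for index 0
# of the 4-entry cache; that is why a tag is needed to tell them apart.
```

Since four different memory locations share each cache slot, the index alone cannot identify which one is resident - motivating the cache tag on the next slide.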

Cache Tag and Cache Index
Assume a 32-bit memory (byte) address and a 2^N-byte direct mapped cache:
Cache Index: the lower N bits of the memory address (example: 0x03).
Cache Tag: the upper (32 - N) bits of the memory address (example: 0x50).
A Valid Bit is stored as part of the cache state, along with the tag, for each of the 2^N one-byte entries (Byte 0, Byte 1, ..., Byte 2^N - 1).

Cache Access Example
At start up all valid bits are 0, so the first access to any location misses; miss handling loads the data, writes the cache tag, and sets the valid bit (V).
A second access to the same location then hits; an access mapping to the same index with a different tag misses again and replaces the entry.
Sad fact of life: a lot of misses at start up - compulsory misses (cold start misses).
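The start-up behavior can be replayed with a toy simulator (a sketch under the slide's assumption of 1-byte blocks; the function name is invented):

```python
def simulate_direct_mapped(addresses, num_entries):
    """Return 'hit'/'miss' per access for a direct-mapped cache with
    1-byte blocks. Each entry holds a tag; None models valid bit = 0."""
    entries = [None] * num_entries
    results = []
    for addr in addresses:
        index, tag = addr % num_entries, addr // num_entries
        if entries[index] == tag:
            results.append("hit")
        else:                       # miss: load data, write tag, set valid bit
            entries[index] = tag
            results.append("miss")
    return results
```

Every first touch of an address is a compulsory miss; only repeats (with the same tag still resident) hit.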

Definition of a Cache Block
Cache Block: the cache data that has its own cache tag.
Our previous extreme example: a 4-byte direct mapped cache with Block Size = 1 byte.
It takes advantage of Temporal Locality: if a byte is referenced, it will tend to be referenced again soon.
It does not take advantage of Spatial Locality: if a byte is referenced, its adjacent bytes will be referenced soon.
In order to take advantage of Spatial Locality: increase the block size.

Example: 1 KB Direct Mapped Cache with 32 B Blocks
For a 2^N-byte cache with 2^M-byte blocks:
The uppermost (32 - N) bits are always the Cache Tag (example: 0x50), stored as part of the cache state.
The lowest M bits are the Byte Select (example: 0x00).
The bits in between are the Cache Index (example: 0x01), selecting one of the 32 entries; each entry holds a valid bit, a cache tag, and bytes 0-31 of its block (Byte 0 ... Byte 992 starting the last entry).
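The address split for the 1 KB, 32 B-block example can be checked in a few lines (a sketch; `split_address` is a hypothetical helper, with field widths as on the slide: 5 byte-select bits, 5 index bits, 22 tag bits):

```python
def split_address(addr, cache_bytes=1024, block_bytes=32):
    """Decompose a 32-bit byte address into (tag, index, byte_select)
    for a direct-mapped cache of cache_bytes with block_bytes blocks."""
    m = block_bytes.bit_length() - 1            # byte-select bits, M = 5
    n = cache_bytes.bit_length() - 1            # index + select bits, N = 10
    byte_select = addr & (block_bytes - 1)
    index = (addr >> m) & ((cache_bytes // block_bytes) - 1)
    tag = addr >> n
    return tag, index, byte_select

# Address 0x14020 carries tag 0x50, index 0x01, byte select 0x00,
# matching the slide's example values.
```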

Block Size Tradeoff
In general, a larger block size takes advantage of spatial locality, BUT:
A larger block size means a larger miss penalty - it takes longer to fill up the block.
If the block size is too big relative to the cache size, the miss rate will go up - fewer blocks compromises temporal locality.
Average Access Time = Hit Time x (1 - Miss Rate) + Miss Penalty x Miss Rate.
As block size grows, the miss penalty rises steadily; the miss rate first falls (exploiting spatial locality) and then rises (too few blocks); the average access time therefore has a minimum, increasing at large block sizes from the combined miss penalty and miss rate.

Another Extreme Example
Cache Size = 4 bytes, Block Size = 4 bytes: only ONE entry in the cache.
True: if an item is accessed, it is likely to be accessed again soon - but it is unlikely to be accessed again immediately!!!
The next access will likely be a miss again: we continually load data into the cache but discard (force out) it before it is reused - the worst nightmare of a cache designer, the Ping Pong Effect.
Conflict Misses are misses caused by different memory locations mapped to the same cache index.
  - Solution 1: make the cache size bigger.
  - Solution 2: multiple entries for the same cache index.
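Plugging numbers into the slide's average-access-time formula (the helper and the sample values are made up for illustration):

```python
def average_access_time(hit_time, miss_rate, miss_penalty):
    """Average Access Time = Hit Time x (1 - Miss Rate)
                           + Miss Penalty x Miss Rate  (the slide's form)."""
    return hit_time * (1 - miss_rate) + miss_penalty * miss_rate

# With a 1-cycle hit and a 20-cycle miss penalty, cutting the miss rate
# from 10% to 5% shaves roughly a full cycle off the average access.
```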

A Two-way Set Associative Cache
N-way set associative: N entries for each cache index - N direct mapped caches operating in parallel.
Example: a two-way set associative cache.
The cache index selects a set from the cache; the two cache tags in the set are compared against the address tag in parallel; the data block is selected (via a mux, Sel0/Sel1) based on the tag compare result, and the OR of the two compares is the Hit signal.

Disadvantage of Set Associative Cache
N-way set associative versus direct mapped: N comparators vs. 1, plus an extra MUX delay for the data.
Data comes AFTER Hit/Miss.
In a direct mapped cache, the cache block is available BEFORE Hit/Miss: it is possible to assume a hit and continue, recovering later on a miss.
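The parallel compare-and-select can be modeled as a sketch (a Python stand-in for the comparators, OR gate, and mux - not hardware, and the function name is invented):

```python
def set_assoc_lookup(sets, index, adr_tag):
    """Two-way set-associative lookup: the index selects a set; the
    address tag is compared against both ways 'in parallel'; the OR of
    the compares is Hit, and the matching way's data passes the mux.
    Each way is a (tag, data) pair, or None when its valid bit is 0."""
    hits = [way is not None and way[0] == adr_tag for way in sets[index]]
    hit = any(hits)                                   # OR of the comparators
    data = sets[index][hits.index(True)][1] if hit else None   # Sel mux
    return hit, data
```

Note how the data is only known after the compare completes - the extra mux delay the slide warns about.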

And Yet Another Extreme Example: Fully Associative Cache
Fully associative - push the set associative idea to its limit: forget about the cache index and compare the cache tags of all cache entries in parallel.
Example: with 32 B blocks, the tag occupies bits 31-5 (27 bits long) and the byte select bits 4-0 (example: 0x01), so we need N 27-bit comparators for N entries.
By definition: Conflict Miss = 0 for a fully associative cache.

A Summary on Sources of Cache Misses
Compulsory (cold start, first reference): first access to a block. Cold fact of life: not a whole lot you can do about it.
Conflict (collision): multiple memory locations mapped to the same cache location. Solution 1: increase cache size. Solution 2: increase associativity.
Capacity: the cache cannot contain all blocks accessed by the program. Solution: increase cache size.
Invalidation: another process (e.g., I/O) updates memory.

Sources of Cache Misses: Quiz
For each organization - direct mapped, N-way set associative, fully associative - which kinds of misses can occur as cache size varies: compulsory, conflict, capacity, invalidation?

Break (5 Minutes)

The Need to Make a Decision!
Direct Mapped Cache: each memory location can be mapped to only 1 cache location - no need to make any decision :-) - the current item replaces the previous item in that cache location.
N-way Set Associative Cache: each memory location has a choice of N cache locations.
Fully Associative Cache: each memory location can be placed in ANY cache location.
On a cache miss in an N-way set associative or fully associative cache: bring in the new block from memory and throw out a cache block to make room for it.
Damn! We need to make a decision on which block to throw out!

Cache Block Replacement Policy
Random Replacement: hardware randomly selects a cache entry and throws it out.
Least Recently Used: hardware keeps track of the access history and replaces the entry that has not been used for the longest time.
Example of a simple pseudo least recently used implementation:
Assume 64 fully associative entries (Entry 0 through Entry 63) and a hardware replacement pointer that points to one cache entry.
Whenever an access is made to the entry the pointer points to, move the pointer to the next entry; otherwise, do not move the pointer.
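The replacement-pointer scheme above can be sketched as follows (the 64-entry count follows the slide; the class and method names are invented):

```python
class PseudoLRU:
    """Slide's simple pseudo-LRU: a replacement pointer names the victim;
    whenever the pointed-to entry is accessed, the pointer advances, so
    it tends to rest on an entry that has not been used recently."""
    def __init__(self, num_entries=64):
        self.num_entries = num_entries
        self.pointer = 0                     # initial victim: Entry 0

    def access(self, entry):
        if entry == self.pointer:            # only a hit on the victim moves it
            self.pointer = (self.pointer + 1) % self.num_entries

    def victim(self):
        return self.pointer                  # entry to throw out on a miss
```

This is far cheaper than true LRU (one pointer instead of full access history), at the cost of occasionally evicting a recently used entry.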

Cache Write Policy: Write Through versus Write Back
A cache read is much easier to handle than a cache write; an instruction cache is much easier to design than a data cache.
How do we keep data in the cache and memory consistent? Two options (decision time again :-):
Write Back: write to cache only; write the cache block back to memory when that cache block is being replaced on a cache miss.
  - Needs a dirty bit for each cache block.
  - Greatly reduces the memory bandwidth requirement.
  - Control can be complex.
Write Through: write to cache and memory at the same time.
  - What!!! How can this be? Isn't memory too slow for this?

Write Buffer for Write Through
A Write Buffer is needed between the cache and memory (Processor - Cache - Write Buffer - DRAM).
The processor writes data into the cache and the write buffer; the memory controller writes the contents of the buffer to memory.
The write buffer is just a FIFO; typical number of entries: 4.
Works fine if: store frequency (w.r.t. time) << 1 / DRAM write cycle.
Memory system designer's nightmare: store frequency (w.r.t. time) -> 1 / DRAM write cycle, i.e. write buffer saturation.
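The write buffer's FIFO behavior and its saturation condition can be sketched like this (the 4-entry figure is from the slide; the class and method names are invented):

```python
from collections import deque

class WriteBuffer:
    """FIFO write buffer for a write-through cache: the processor
    enqueues stores; the memory controller retires one entry per DRAM
    write cycle. If stores arrive faster than DRAM retires them, the
    buffer fills and the processor must stall (saturation)."""
    def __init__(self, entries=4):
        self.fifo = deque()
        self.entries = entries

    def store(self, addr, data):
        if len(self.fifo) >= self.entries:
            return False                     # buffer full: processor stalls
        self.fifo.append((addr, data))
        return True

    def dram_cycle(self):
        if self.fifo:
            self.fifo.popleft()              # one write retired to DRAM
```

No finite depth helps once the sustained store rate matches the DRAM write cycle - exactly the overflow argument on the next slide.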

Write Buffer Saturation
Store frequency (w.r.t. time) -> 1 / DRAM write cycle: if this condition exists for a long period of time (CPU cycle time too quick and/or too many store instructions in a row), the store buffer will overflow no matter how big you make it, because the CPU cycle time is <= the DRAM write cycle time.
Solutions for write buffer saturation: use a write back cache, or install a second-level (L2) cache between the write buffer and DRAM.

Write Allocate versus Not Allocate
Assume a 16-bit write to memory location 0x0 causes a cache miss (tag 0x00, index 0x00, byte select 0x00 in the 1 KB, 32 B-block cache).
Do we read in the rest of the block (Bytes 2, 3, ..., 31)?
Yes: Write Allocate. No: Write Not Allocate.
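The two write-miss policies can be contrasted in a short sketch (the handler and its callback parameters are hypothetical; a 32 B block is assumed as in the earlier example):

```python
def handle_write_miss(cache, addr, data, write_allocate, fetch_block, write_memory):
    """On a write miss: write-allocate reads in the rest of the 32 B
    block and then updates it in the cache; write-not-allocate sends the
    write straight to memory, leaving the cache untouched."""
    block_num, offset = addr // 32, addr % 32
    if write_allocate:
        cache[block_num] = fetch_block(addr)   # read the rest of the block
        cache[block_num][offset] = data        # then perform the write
    else:
        write_memory(addr, data)               # cache state unchanged
```

Write-not-allocate pairs naturally with write-through caches, and write-allocate with write-back - the combinations the SPARCstation caches below actually use.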

What is a Sub-block?
Sub-block: a unit within a block that has its own valid bit.
Example: 1 KB direct mapped cache, 32-B block, 8-B sub-block - each cache entry has 32/8 = 4 valid bits, one per sub-block (Sub-block 0 through Sub-block 3).
On a write miss, only the bytes in that sub-block are brought in.

Reducing the Memory Transfer Time
Solution 1: high-bandwidth DRAM on the path CPU - cache - bus - memory. Examples: Page Mode DRAM, SDRAM, CDRAM, RAMbus; cost rises with bandwidth.
Solution 2: a wide path between memory and cache, with a mux selecting the requested word.
Solution 3: memory interleaving - multiple memory banks sharing the bus.

SPARCstation 20's Memory System
The Memory Controller connects the 128-bit-wide Memory Bus (SIMM Bus), carrying Memory Modules 0 through 7, to the 64-bit-wide Processor Bus (MBus).
The Processor Module (MBus Module) holds the SuperSPARC Processor with its Instruction Cache, Data Cache, and Register File, plus an External Cache.

SPARCstation 20's External Cache
Size and organization: 1 MB, direct mapped.
Block size: 128 B. Sub-block size: 32 B.
Write policy: write back, write allocate.

SPARCstation 20's Internal Instruction Cache
Size and organization: 20 KB, 5-way set associative.
Block size: 64 B. Sub-block size: 32 B.
Write policy: does not apply.
Note: the sub-block size is the same as in the External (L2) Cache.

SPARCstation 20's Internal Data Cache
Size and organization: 16 KB, 4-way set associative.
Block size: 64 B. Sub-block size: 32 B.
Write policy: write through, write not allocate.
Again, the sub-block size is the same as in the External (L2) Cache.

Two Interesting Questions
Why did they use an N-way set associative cache internally?
Answer: an N-way set associative cache is like having N direct mapped caches in parallel, and they wanted each of those N direct mapped caches to be 4 KB - the same as the virtual page size. (Virtual page size is covered in next week's virtual memory lecture.)
How many levels of cache does the SPARCstation 20 have?
Answer: three levels - (1) the internal I & D caches, (2) the external cache, and (3)...

SPARCstation 20's Memory Module
Supports a wide range of sizes:
Smallest: 4 MB - 16 2-Mb DRAM chips, plus 8 KB of Page Mode SRAM.
Biggest: 64 MB - 32 16-Mb chips, plus 16 KB of Page Mode SRAM.
Each DRAM chip (512 rows x 512 cols x 8 bits = 256K x 8 = 2 Mb) feeds a 512 x 8 Page Mode SRAM row register; DRAM Chips 0 through 15 together drive bits <127:0> of the Memory Bus.
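The module capacities are simple arithmetic (Mb = megabits per chip, MB = megabytes per module; the helper is illustrative):

```python
def module_megabytes(num_chips, chip_megabits):
    """Module capacity in MB: chips x megabits-per-chip / 8 bits-per-byte."""
    return num_chips * chip_megabits // 8

# Smallest module: 16 chips x 2 Mb -> 4 MB
# Biggest module:  32 chips x 16 Mb -> 64 MB
```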

Summary
The Principle of Locality: programs access a relatively small portion of the address space at any instant of time.
  - Temporal Locality: locality in time.
  - Spatial Locality: locality in space.
Three major categories of cache misses:
  - Compulsory misses: sad facts of life; example: cold start misses.
  - Conflict misses: increase cache size and/or associativity. Nightmare scenario: the ping pong effect!
  - Capacity misses: increase cache size.
Write policy:
  - Write through: needs a write buffer. Nightmare: write buffer saturation.
  - Write back: control can be complex.

Where to Get More Information
General reference: Chapter 8 of John Hennessy and David Patterson, Computer Architecture: A Quantitative Approach, Morgan Kaufmann Publishers Inc., 1990.
A landmark paper on caches: Alan Smith, "Cache Memories," Computing Surveys, September 1982.
A book on everything you need to know about caches: Steve Przybylski, Cache and Memory Hierarchy Design: A Performance-Directed Approach, Morgan Kaufmann Publishers Inc., 1990.