Classification of Steady-State Cache Misses and Techniques To Improve Cache Performance


Classification of Steady-State Cache Misses: The Three C's of Cache Misses:
- Compulsory Misses
- Capacity Misses
- Conflict Misses

Techniques To Improve Cache Performance:
- Reduce Miss Rate: reduce one or more of the three C's.
- Reduce Cache Miss Penalty
- Reduce Cache Hit Time

Types of Cache Misses: The Three C's
1. Compulsory: Occur on the first access to a block; the block must be brought into the cache. Also called cold-start misses or first-reference misses.
2. Capacity: Occur because the cache cannot contain all the blocks needed during program execution (the program's working set is much larger than the cache capacity), so blocks are discarded and later re-fetched.
3. Conflict: With set-associative or direct-mapped block placement, conflict misses occur when several blocks map to the same set or block frame. Also called collision misses or interference misses.
(Chapter 5.4-5.7)
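To make the classification concrete, here is a minimal sketch (my illustration, not from the lecture) of a direct-mapped cache simulator in C that labels each miss as compulsory (first reference to the block) or capacity/conflict. Fully separating capacity from conflict would additionally require replaying the trace against a fully-associative LRU cache of the same capacity; all sizes and the trace below are assumptions.

#include <stdio.h>
#include <stdint.h>

#define NUM_FRAMES 64        /* assumed: 64 block frames, direct mapped */
#define BLOCK_BITS 5         /* assumed: 32-byte blocks */
#define SEEN_MAX   4096      /* capacity of the "ever referenced" table */

static uint32_t tags[NUM_FRAMES];     /* tag stored in each frame */
static int      valid[NUM_FRAMES];
static uint32_t seen[SEEN_MAX];       /* blocks referenced at least once */
static int      seen_count = 0;

static int seen_before(uint32_t blk) {
    for (int i = 0; i < seen_count; i++)
        if (seen[i] == blk) return 1;
    if (seen_count < SEEN_MAX) seen[seen_count++] = blk;
    return 0;
}

/* Returns 0 = hit, 1 = compulsory miss, 2 = capacity/conflict miss. */
static int access_cache(uint32_t addr) {
    uint32_t blk   = addr >> BLOCK_BITS;
    uint32_t index = blk % NUM_FRAMES;
    uint32_t tag   = blk / NUM_FRAMES;
    int first = !seen_before(blk);
    if (valid[index] && tags[index] == tag) return 0;
    tags[index]  = tag;                /* fill the frame on a miss */
    valid[index] = 1;
    return first ? 1 : 2;
}

int main(void) {
    /* Illustrative trace: the 3rd and 5th accesses revisit evicted blocks. */
    uint32_t trace[] = { 0x0000, 0x0800, 0x0000, 0x0020, 0x0800 };
    const char *kind[] = { "hit", "compulsory miss", "capacity/conflict miss" };
    for (int i = 0; i < 5; i++)
        printf("0x%04x -> %s\n", trace[i], kind[access_cache(trace[i])]);
    return 0;
}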

[Figure: The 3 Cs of Cache, absolute steady-state miss rates (SPEC92). Total miss rate (0 to 0.14) vs. cache size (1 to 128 KB) for 1-way through 8-way associativity, partitioned into conflict, capacity, and compulsory components.]

[Figure: The 3 Cs of Cache, relative steady-state miss rates (SPEC92). Miss rate per type as a percentage of all misses (0% to 100%) vs. cache size (1 to 128 KB) for 1-way through 8-way associativity, partitioned into conflict, capacity, and compulsory components.]

Improving Cache Performance: How?
- Reduce Miss Rate
- Reduce Cache Miss Penalty
- Reduce Cache Hit Time

Improving Cache Performance

Miss Rate Reduction Techniques:
- Increased cache capacity
- Larger block size
- Higher associativity
- Victim caches
- Hardware prefetching of instructions and data
- Pseudo-associative caches
- Compiler-controlled prefetching
- Compiler optimizations

Cache Miss Penalty Reduction Techniques:
- Giving priority to read misses over writes
- Sub-block placement
- Early restart and critical word first
- Non-blocking caches
- Multiple cache levels (L2, L3, ...)

Cache Hit Time Reduction Techniques:
- Small and simple caches
- Avoiding address translation during cache indexing
- Pipelining writes for fast write hits

Miss Rate Reduction Techniques: Larger Block Size
A larger block size improves cache performance by taking advantage of spatial locality. For a fixed cache size, a larger block size means fewer cache block frames. Miss rate improves up to a limit; beyond it, the smaller number of block frames increases conflict misses and thus the overall miss rate.
[Figure: Miss rate (0% to 25%) vs. block size (16 to 256 bytes) for cache sizes 1K, 4K, 16K, 64K, and 256K, for SPEC92.]

Miss Rate Reduction Techniques: Higher Cache Associativity
A higher degree of associativity reduces miss rates by reducing conflict misses. However, once conflict misses are reduced or eliminated, increasing the degree of associativity will not result in further miss rate reduction and may increase the clock cycle time due to the additional complexity.

Example: miss rates (%) for various cache sizes and degrees of associativity (for SPEC2000; summarized from Figure 5.14, page 424):

Cache Size (KB)   1-way   2-way   4-way   8-way
       4           9.8     7.6     7.1     7.1
       8           6.8     4.9     4.4     4.4
      16           4.9     4.1     4.1     4.1
      32           4.2     3.8     3.7     3.7
      64           3.7     3.1     3.0     2.9
     128           2.1     1.9     1.9     1.9
     256           1.3     1.2     1.2     1.2
     512           0.8     0.7     0.6     0.6

In this case most conflict misses are eliminated by a 2-way or 4-way cache.

Miss Rate Reduction Techniques: Victim Caches
Data discarded from the cache is placed in a small added buffer, the victim cache. On a cache miss, check the victim cache for the data before going to main memory. Jouppi [1990]: a 4-entry victim cache removed 20% to 95% of conflicts for a 4 KB direct-mapped data cache. Used in Alpha and HP PA-RISC CPUs.
[Diagram: the CPU accesses the data cache (tag compare); on a miss the victim cache is checked before the lower-level memory, with a write buffer between the cache and lower-level memory.]
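Here is a hedged C sketch of the lookup path (my illustration; sizes, FIFO replacement, and tracking tags only without data movement are all assumptions): a direct-mapped cache backed by a small fully-associative victim cache that holds recently evicted lines.

#include <stdint.h>

#define FRAMES      64   /* assumed direct-mapped block frames */
#define VICTIM_WAYS 4    /* 4-entry victim cache, as in Jouppi [1990] */

typedef struct { uint32_t tag; int valid; } line_t;

static line_t cache[FRAMES];
static line_t victim[VICTIM_WAYS];     /* stores full block addresses */
static int victim_next = 0;            /* simple FIFO replacement (assumption) */

/* Returns 1 on a hit in the main cache or the victim cache, 0 on a miss. */
int lookup(uint32_t block_addr) {
    uint32_t index = block_addr % FRAMES;
    uint32_t tag   = block_addr / FRAMES;

    if (cache[index].valid && cache[index].tag == tag)
        return 1;                                   /* ordinary (fast) hit */

    for (int i = 0; i < VICTIM_WAYS; i++) {
        if (victim[i].valid && victim[i].tag == block_addr) {
            /* Victim hit: swap the victim entry with the conflicting line. */
            line_t evicted = cache[index];
            cache[index].tag = tag;
            cache[index].valid = 1;
            victim[i].tag = evicted.valid ? evicted.tag * FRAMES + index : 0;
            victim[i].valid = evicted.valid;
            return 1;
        }
    }

    /* Miss: the evicted line (if any) moves to the victim cache. */
    if (cache[index].valid) {
        victim[victim_next].tag   = cache[index].tag * FRAMES + index;
        victim[victim_next].valid = 1;
        victim_next = (victim_next + 1) % VICTIM_WAYS;
    }
    cache[index].tag = tag;
    cache[index].valid = 1;
    return 0;
}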

Miss Rate Reduction Techniques: Pseudo-Associative Cache
Attempts to combine the fast hit time of a direct-mapped cache with the lower conflict misses of a 2-way set-associative cache. The cache is divided into two halves: on a cache miss, check the other half of the cache to see if the data is there; if so, we have a pseudo-hit (slow hit). The easiest implementation inverts the most significant bit of the index field to find the other block in the pseudo set.
[Timeline: hit time < pseudo-hit time < miss penalty.]
Drawback: CPU pipelining is hard to implement effectively if an L1 cache hit takes 1 or 2 cycles, so the technique is better used for caches not tied directly to the CPU (e.g., the L2 cache). Used in the MIPS R10000 L2 cache; the UltraSPARC L2 is similar.
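The rehash step is simple enough to show directly. This is a minimal sketch (my illustration, assuming a 6-bit index field) of how the alternate location in the pseudo set is computed:

#include <stdint.h>

#define INDEX_BITS 6                         /* assumed: 64-set cache */

/* Primary set for a block address (block offset already stripped). */
static inline uint32_t primary_index(uint32_t block_addr) {
    return block_addr & ((1u << INDEX_BITS) - 1);
}

/* Alternate set, checked only on a primary miss: invert the most
 * significant bit of the index field. */
static inline uint32_t alternate_index(uint32_t index) {
    return index ^ (1u << (INDEX_BITS - 1));
}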

Miss Rate Reduction Techniques: Hardware Prefetching of Instructions and Data
Prefetch instructions and data before they are needed by the CPU, either into the cache or into an external buffer. Example: the Alpha AXP 21064 fetches two blocks on a miss: the requested block into the cache and the next consecutive block into an instruction stream buffer. The same concept applies to data accesses using a data stream buffer, and extends to multiple data stream buffers prefetching at different addresses (four streams improve the data hit rate by 43%). It has been shown that, in some cases, eight stream buffers that can handle data or instructions can capture 50-70% of all misses.
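The one-block-lookahead policy above can be sketched in a few lines of C (my illustration; the fetch_block interface, block numbering, and the one-entry buffer depth are assumptions): on a miss, the requested block is fetched on demand while the next sequential block is prefetched into a stream buffer that is checked first on later misses.

#include <stdint.h>

#define BLOCK_SIZE 32                        /* assumed block size in bytes */

extern void fetch_block(uint32_t blk, uint8_t dst[BLOCK_SIZE]); /* assumed memory interface */

static struct {
    uint32_t blk;
    int      valid;
    uint8_t  data[BLOCK_SIZE];
} stream_buf;                                /* one-entry stream buffer */

/* Called on a cache miss for block 'blk'; fills 'dst' with the block. */
void miss_handler(uint32_t blk, uint8_t dst[BLOCK_SIZE]) {
    if (stream_buf.valid && stream_buf.blk == blk) {
        for (int i = 0; i < BLOCK_SIZE; i++) /* prefetch hit: no memory access */
            dst[i] = stream_buf.data[i];
    } else {
        fetch_block(blk, dst);               /* demand fetch from memory */
    }
    fetch_block(blk + 1, stream_buf.data);   /* prefetch next sequential block */
    stream_buf.blk = blk + 1;
    stream_buf.valid = 1;
}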

Miss Rate Reduction Techniques: Compiler Optimizations
Compiler cache optimizations improve the access locality of the generated code and include:
- Reordering procedures in memory to reduce conflict misses.
- Merging arrays: improve spatial locality by using a single array of compound elements instead of two separate arrays.
- Loop interchange: change the nesting of loops to access data in the order it is stored in memory.
- Loop fusion: combine two or more independent loops that have the same looping structure and overlapping variables.
- Blocking: improve temporal locality by accessing blocks of data repeatedly instead of sweeping down whole columns or rows.

Miss Rate Reduction Techniques: Compiler-Based Cache Optimizations
Merging Arrays Example

/* Before: 2 sequential arrays */
int val[SIZE];
int key[SIZE];

/* After: 1 array of structures */
struct merge {
    int val;
    int key;
};
struct merge merged_array[SIZE];

Merging the two arrays:
- Reduces conflicts between val and key
- Improves spatial locality

Miss Rate Reduction Techniques: Compiler-Based Cache Optimizations
Loop Interchange Example

/* Before: inner-loop data access stride is 100 words */
for (k = 0; k < 100; k = k+1)
    for (j = 0; j < 100; j = j+1)
        for (i = 0; i < 5000; i = i+1)
            x[i][j] = 2 * x[i][j];

/* After: inner-loop data access is sequential (stride 1 word) */
for (k = 0; k < 100; k = k+1)
    for (i = 0; i < 5000; i = i+1)
        for (j = 0; j < 100; j = j+1)
            x[i][j] = 2 * x[i][j];

Sequential accesses, instead of striding through memory every 100 words in this case, improve spatial locality.

Miss Rate Reduction Techniques: Compiler-Based Cache Optimizations
Loop Fusion Example

/* Before */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1)
        a[i][j] = 1/b[i][j] * c[i][j];
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1)
        d[i][j] = a[i][j] + c[i][j];

/* After */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1) {
        a[i][j] = 1/b[i][j] * c[i][j];
        d[i][j] = a[i][j] + c[i][j];
    }

Before fusion there are two misses per access to a and c; after fusion there is one miss per access, with likely hits for a[i][j] and c[i][j] in the second statement. Improves spatial and temporal locality.

Miss Rate Reduction Techniques: Compiler-Based Cache Optimizations
Data Access Blocking Example: matrix multiplication, X = Y x Z, with all matrices N x N. The inner loop is a vector dot product forming one result element.

/* Before */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1) {
        r = 0;
        for (k = 0; k < N; k = k+1)
            r = r + y[i][k]*z[k][j];
        x[i][j] = r;
    }

The two inner loops:
- Read all N x N elements of z[ ]
- Read N elements of one row of y[ ] repeatedly
- Write N elements of one row of x[ ]

Capacity misses can be expressed as a function of N and the cache size: if the cache can hold all three N x N matrices (3 x N x N x 4 bytes), there are no capacity misses; otherwise misses occur. Idea: compute on B x B submatrices that fit in the cache.

Blocking Example (continued)

/* After */
for (jj = 0; jj < N; jj = jj+B)
    for (kk = 0; kk < N; kk = kk+B)
        for (i = 0; i < N; i = i+1)
            for (j = jj; j < min(jj+B, N); j = j+1) {
                r = 0;
                for (k = kk; k < min(kk+B, N); k = k+1)
                    r = r + y[i][k]*z[k][j];
                x[i][j] = x[i][j] + r;
            }

B is called the blocking factor. Capacity misses drop from 2N^3 + N^2 to 2N^3/B + N^2. Blocking may also affect conflict misses.

[Chart: performance improvement (1x to 3x) from the compiler-based cache optimizations (merged arrays, loop interchange, loop fusion, blocking) on vpenta (nasa7), gmty (nasa7), tomcatv, btrix (nasa7), mxm (nasa7), spice, cholesky (nasa7), and compress.]

Miss Penalty Reduction Techniques: Giving Priority to Read Misses over Writes
A write-through cache with write buffers risks RAW conflicts with main-memory reads on cache misses: the write buffer may hold the updated data needed by the read.
- One solution is simply to wait for the write buffer to empty, which increases the read miss penalty (by 50% on the old MIPS 1000).
- Better: check the write buffer contents before a read; if there are no conflicts, let the memory access continue.
For a write-back cache, on a read miss that replaces a dirty block:
- Normally: write the dirty block to memory, then do the read.
- Instead: copy the dirty block to a write buffer, do the read, and then do the write. The CPU stalls less (from 2M cycles down to M, where M is the memory access time) since it restarts soon after the read.
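The buffer check described above amounts to an associative search. Here is a hedged C sketch (structure, depth, and the mem_read interface are my assumptions): before servicing a read miss from memory, scan the write buffer, and on an address match forward the buffered data instead of waiting for the buffer to drain.

#include <stdint.h>

#define WB_ENTRIES 4                         /* assumed write-buffer depth */

static struct {
    uint32_t addr;
    uint32_t data;
    int      valid;
} write_buf[WB_ENTRIES];

extern uint32_t mem_read(uint32_t addr);     /* assumed memory interface */

/* Read-miss service: check the write buffer for a RAW conflict first. */
uint32_t read_miss(uint32_t addr) {
    for (int i = 0; i < WB_ENTRIES; i++)
        if (write_buf[i].valid && write_buf[i].addr == addr)
            return write_buf[i].data;        /* forward the pending write */
    return mem_read(addr);                   /* no conflict: access memory */
}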

Miss Penalty Reduction Techniques: Sub-Block Placement
Divide each cache block frame into a number of sub-blocks, and include a valid bit per sub-block to indicate its validity. Originally used to reduce tag storage (fewer block frames). There is no need to load a full block on a miss, just the needed sub-block.
[Diagram: a block frame with one tag, a valid bit per sub-block, and the sub-blocks.]
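A minimal C sketch of the frame layout (the sizes are assumptions): one tag is shared by all sub-blocks, and each sub-block carries its own valid bit, so a hit requires both a tag match and a valid target sub-block.

#include <stdint.h>

#define SUB_BLOCKS     4     /* assumed sub-blocks per frame */
#define SUB_BLOCK_SIZE 16    /* assumed bytes per sub-block */

struct block_frame {
    uint32_t tag;                              /* one tag per frame */
    uint8_t  valid[SUB_BLOCKS];                /* one valid bit per sub-block */
    uint8_t  data[SUB_BLOCKS][SUB_BLOCK_SIZE]; /* the sub-blocks */
};

/* Hit test: tag match plus a valid requested sub-block. */
static int sub_block_hit(const struct block_frame *f,
                         uint32_t tag, unsigned sub) {
    return f->tag == tag && f->valid[sub];
}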

Miss Penalty Reduction Techniques: Early Restart and Critical Word First
Don't wait for the full block to be loaded before restarting the CPU:
- Early restart: as soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution.
- Critical word first: request the missed word first from memory and send it to the CPU as soon as it arrives; the CPU continues execution while the rest of the words in the block are filled in. Also called wrapped fetch or requested word first.
Generally useful only for caches with large block sizes. Programs with a high degree of spatial locality tend to request a number of sequential words and may not benefit from early restart.

Miss Penalty Reduction Techniques: Non-Blocking Caches
A non-blocking (lockup-free) cache allows the data cache to continue supplying cache hits while a miss is being processed; this requires an out-of-order execution CPU.
- "Hit under miss" reduces the effective miss penalty by doing useful work during a miss instead of ignoring CPU requests.
- "Hit under multiple miss" or "miss under miss" may further lower the effective miss penalty by overlapping multiple misses.
- Significantly increases the complexity of the cache controller, since there can be multiple outstanding memory accesses.
- Requires multiple memory banks to serve multiple memory access requests.
Example: the Intel Pentium III allows up to 4 outstanding memory misses.

[Chart: value of hit under miss for SPEC. Average memory access time (0 to 2) under "hit under i misses" for i = 0 (base), 1, 2, and 64, on eqntott, espresso, xlisp, compress, mdljsp2, ear, fpppp, tomcatv, swm256, doduc, su2cor, wave5, mdljdp2, hydro2d, alvinn, nasa7, spice2g6, and ora.]

Cache Miss Penalty Reduction Techniques: Second-Level Cache (L2)
Add another cache level between the original cache and memory:
1. The first-level cache (L1) can be small enough to be placed on-chip and match the CPU clock rate.
2. The second-level cache (L2) is large enough to capture a large percentage of accesses.
When adding a second level of cache:
  Average memory access time = Hit time L1 + Miss rate L1 x Miss penalty L1
where:
  Miss penalty L1 = Hit time L2 + Miss rate L2 x Miss penalty L2
Local miss rate: the number of misses in a cache divided by the total number of accesses to that cache (i.e., Miss rate L2 above).
Global miss rate: the number of misses in a cache divided by the total number of accesses by the CPU (the global miss rate of the second-level cache is Miss rate L1 x Miss rate L2).
Example: given 1000 memory references, 40 misses occur in L1 and 20 in L2:
- The miss rate for L1 (local or global) = 40/1000 = 4%
- The local miss rate for L2 = 20/40 = 50% (L2 is accessed only on the 40 L1 misses)
- The global miss rate for L2 = 20/1000 = 2%
(Multiple cache levels were covered in more detail in the last lecture.)

L2 Performance Equations

AMAT = Hit Time L1 + Miss Rate L1 x Miss Penalty L1
Miss Penalty L1 = Hit Time L2 + Miss Rate L2 x Miss Penalty L2
AMAT = Hit Time L1 + Miss Rate L1 x (Hit Time L2 + Miss Rate L2 x Miss Penalty L2)

(Multiple cache levels were covered in more detail in the last lecture.)
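A small C helper (my illustration) evaluates the expanded two-level equation; the usage plugs in the miss rates from the example above, while the hit times and memory penalty are assumed values in cycles.

#include <stdio.h>

/* AMAT for a two-level hierarchy, per the expanded equation above. */
double amat2(double hit_l1, double mr_l1,
             double hit_l2, double mr_l2, double penalty_l2) {
    return hit_l1 + mr_l1 * (hit_l2 + mr_l2 * penalty_l2);
}

int main(void) {
    /* L1 miss rate 4% and local L2 miss rate 50% come from the example;
     * hit times (1 and 10 cycles) and memory penalty (100 cycles) are assumed. */
    printf("AMAT = %.2f cycles\n", amat2(1.0, 0.04, 10.0, 0.50, 100.0));
    /* 1 + 0.04 x (10 + 0.5 x 100) = 3.40 cycles */
    return 0;
}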

Cache Miss Penalty Reduction Techniques: Three Levels of Cache (L1, L2, L3)
[Diagram: CPU -> L1 cache (hit rate H1, hit time 1 cycle) -> L2 cache (hit rate H2, hit time T2 cycles) -> L3 cache (hit rate H3, hit time T3 cycles) -> main memory (access penalty M).]
(Multiple cache levels were covered in more detail in the last lecture.)

L3 Performance Equations

AMAT = Hit Time L1 + Miss Rate L1 x Miss Penalty L1
Miss Penalty L1 = Hit Time L2 + Miss Rate L2 x Miss Penalty L2
Miss Penalty L2 = Hit Time L3 + Miss Rate L3 x Miss Penalty L3
AMAT = Hit Time L1 + Miss Rate L1 x (Hit Time L2 + Miss Rate L2 x (Hit Time L3 + Miss Rate L3 x Miss Penalty L3))

(Multiple cache levels were covered in more detail in the last lecture.)
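The two-level helper above extends directly to three levels; the values in the worked comment (T2, T3, M, and the local miss rates) are again assumptions, not the lecture's numbers.

/* AMAT for a three-level hierarchy, per the nested equation above. */
double amat3(double hit_l1, double mr_l1,
             double hit_l2, double mr_l2,
             double hit_l3, double mr_l3, double penalty_l3) {
    return hit_l1 + mr_l1 * (hit_l2 + mr_l2 * (hit_l3 + mr_l3 * penalty_l3));
}

/* Example with assumed values T2 = 10, T3 = 30, M = 200 cycles, and local
 * miss rates of 4%, 50%, and 40%:
 *   amat3(1, 0.04, 10, 0.50, 30, 0.40, 200)
 *     = 1 + 0.04 x (10 + 0.5 x (30 + 0.4 x 200)) = 3.60 cycles */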

Hit Time Reduction Techniques: Pipelined Writes
Pipeline the tag check and the cache update as separate stages: the tag check of the current write overlaps with the cache update of the previous write. Only stores occupy this pipeline, and it is empty during a miss.
Example sequence:
  Store r2, (r1)    check r1's tag
  Add               --
  Sub               --
  Store r4, (r3)    M[r1] <- r2, and check r3's tag
The pending write sits in a delayed write buffer, which must be checked on reads: either complete the write first or read the data from the buffer.

Hit Time Reduction Techniques: Avoiding Address Translation
Send the virtual address to the cache: called a virtually addressed cache, or just virtual cache, as opposed to a physical cache.
- Every time the process is switched, the cache must logically be flushed; otherwise it will return false hits. The cost is the flush time plus the compulsory misses from the then-empty cache.
- Aliases (sometimes called synonyms) must be handled: two different virtual addresses that map to the same physical address.
- I/O must interact with the cache, so it also needs virtual addresses.
Solution to aliases: hardware guarantees that all aliases agree in the address bits covering the index field; with a direct-mapped cache they then map to a unique block frame. This is called page coloring.
Solution to cache flushes: add a process-identifier tag alongside the address tag, so a hit cannot occur for the wrong process.

Hit Time Reduction Techniques: Virtually Addressed Caches
[Diagram: three organizations. (1) Conventional: CPU -> TLB (VA to PA) -> physically addressed cache -> memory. (2) Virtually addressed cache: CPU -> cache accessed with VA tags, translating to PA only on a miss; suffers the synonym problem. (3) Overlapped: cache access proceeds in parallel with VA translation in the TLB, using PA tags and an L2 cache; requires the cache index to remain invariant across translation.]
VA = virtual address, PA = physical address.

Cache Optimization Summary

Technique                              MR   MP   HT   Complexity
Larger block size                      +               0
Higher associativity                   +               1
Victim caches                          +               2
Pseudo-associative caches              +               2
HW prefetching of instructions/data    +               2
Compiler-controlled prefetching        +               3
Compiler techniques to reduce misses   +               0
Priority to read misses over writes         +          1
Sub-block placement                         +    +     1
Early restart & critical word first         +          2
Non-blocking caches                         +          3
Second-level caches                         +          2
Small & simple caches                            +     0
Avoiding address translation                     +     2
Pipelining writes                                +     1

(MR = miss rate, MP = miss penalty, HT = hit time; + marks the factor each technique improves.)