CS 61C: Great Ideas in Computer Architecture (Machine Structures)
Set-Associative Caches
Instructors: Randy H. Katz, David A. Patterson
http://inst.eecs.berkeley.edu/~cs61c/fa

Recap: Components of a Computer
Processor (Control, Datapath); Memory (Cache, Main Memory); Devices (Input, Output, Secondary Memory (Disk))

Recap: Typical Memory Hierarchy
Take advantage of the principle of locality to present the user with as much memory as is available in the cheapest technology, at the speed offered by the fastest technology.
On-chip components: Control, Datapath, RegFile, ITLB, DTLB, Instruction Cache, Data Cache
Off-chip: Second-Level Cache (SRAM), Main Memory (DRAM), Secondary Memory (Disk)
Moving down the hierarchy, access time grows from a fraction of a cycle to many thousands of cycles, capacity grows from KBs toward TBs, and cost per byte falls from highest to lowest.

Recap: Cache Performance and Average Memory Access Time (AMAT)
CPU time = IC × CPI × CC = IC × (CPI_ideal + memory-stall cycles) × CC
Memory-stall cycles = read-stall cycles + write-stall cycles
Read-stall cycles = reads/program × read miss rate × read miss penalty
Write-stall cycles = (writes/program × write miss rate × write miss penalty) + write buffer stalls
AMAT is the average time to access memory, considering both hits and misses:
AMAT = time for a hit + miss rate × miss penalty
Improving Cache Performance
1. Reduce the time to hit in the cache
   E.g., smaller cache, direct-mapped cache, smaller blocks, special tricks for handling writes
2. Reduce the miss rate
   E.g., bigger cache, larger blocks, more flexible placement (increase associativity)
3. Reduce the miss penalty
   E.g., smaller blocks or critical-word-first in large blocks, special tricks for handling writes, faster/higher-bandwidth memories, multiple cache levels

Sources of Cache Misses: The 3Cs
Compulsory (cold start, or process migration; first reference): the first access to a block is impossible to avoid; small effect for long-running programs
  Solution: increase block size (increases miss penalty; very large blocks could increase miss rate)
Capacity: the cache cannot contain all blocks accessed by the program
  Solution: increase cache size (may increase access time)
Conflict (collision): multiple memory locations map to the same cache location
  Solution 1: increase cache size
  Solution 2: increase associativity (may increase access time)

Reducing Cache Misses
Allow more flexible block placement:
Direct-mapped $: a memory block maps to exactly one cache block
Fully associative $: a memory block can be mapped to any cache block
Compromise: divide the $ into sets, each consisting of n ways (n-way set associative); a memory block maps to a unique set, determined by the index field, and can be placed in any of the n ways of that set
Set selection: (block address) modulo (# sets in the cache)

Alternative Block Placement Schemes (memory block 12 in an 8-block cache)
DM placement: only one cache block where the memory block can go: (12 modulo 8) = 4
SA placement: four sets × 2 ways (8 cache blocks); the memory block goes in set (12 mod 4) = 0, in either way of that set
FA placement: the memory block can appear in any of the 8 cache blocks

[Figure: Late Breaking Results from Project #3, Part I — sgemm performance (MFlop/s); class data vs. peak, simple, and blocked (3 levels of blocking, loop unrolling). Thanks TA Andrew!]
[Figure: Project #3, Part I sgemm performance histogram — number of submissions vs. MFlop/s. Thanks TA Andrew!]

Administrivia
Posted optional extra Logisim at-home lab this week
Project 3: TLP+DLP+Cache Opt (due /3)
Project 4: Single-Cycle Processor in Logisim (by tonight)
  Part 1 due Saturday /7; face-to-face check-off in lab
EC: Fastest Project 3 (due /9)
Final Review: Mon Dec 6, 3 hrs, afternoon (TBD)
Final: Mon Dec 13, 8-11 AM (TBD)
  Like the midterm: T/F, multiple choice, short answers
  Covers the whole course: readings, lectures, projects, labs, homework
  Emphasizes the 2nd half of 61C, plus midterm mistakes

Fall — Lecture #3

Set-Associative Caches

Example: 4-Word Direct-Mapped $, Worst-Case Reference String
Consider the repeating string of word addresses 0, 4, 0, 4, 0, 4, 0, 4. Start with an empty cache; all blocks initially marked as not valid.
  0 miss: fetch Mem(0)                 4 miss: replace Mem(0) with Mem(4)
  0 miss: replace Mem(4) with Mem(0)   4 miss: replace Mem(0) with Mem(4)
  (and likewise for the remaining four references)
8 requests, 8 misses
Ping-pong effect due to conflict misses: two memory locations that map into the same cache block keep evicting each other.
Example: 2-Way Set-Associative $
(4 words = 2 sets × 2 ways per set)
Q: How do we find it? Use the next low-order memory address bit to determine which cache set (i.e., block address modulo the number of sets in the cache).
Q: Is it there? Compare all the cache tags in the set to the high-order 3 memory address bits to tell if the memory block is in the cache.
Main memory uses one-word blocks; the two low-order address bits define the byte in the word (32-bit words).

Example: 4-Word 2-Way SA $, Same Reference String
Same string of word addresses: 0, 4, 0, 4, 0, 4, 0, 4. Start with an empty cache; all blocks initially marked as not valid.
  0 miss: fetch Mem(0)    4 miss: fetch Mem(4) into the other way of the same set
  0 hit                   4 hit   (all remaining references hit)
8 requests, 2 misses
Solves the ping-pong effect in a direct-mapped cache due to conflict misses, since two memory locations that map into the same cache set can now co-exist!

Example: Eight-Block Cache with Different Organizations
The total size of the $ in blocks equals the number of sets × associativity. For a fixed $ size, increasing associativity decreases the number of sets while increasing the number of elements per set. With eight blocks, an 8-way set-associative $ is the same as a fully associative $.

Four-Way Set-Associative Cache
2^8 = 256 sets, each with four ways (each way holding one block). The address splits into byte offset, index (selects one of the 256 sets), and tag. The tag is compared against all four ways of the indexed set in parallel; on a hit, a 4-to-1 multiplexor selects the data from the matching way.

Range of Set-Associative Caches
For a fixed-size cache, each factor-of-two increase in associativity doubles the number of blocks per set (i.e., the number of ways) and halves the number of sets; it decreases the size of the index by 1 bit and increases the size of the tag by 1 bit.
Address fields: Tag | Index | Block offset | Byte offset
Range of Set-Associative Caches (continued)
For a fixed-size cache, each factor-of-two increase in associativity doubles the number of blocks per set (i.e., the number of ways) and halves the number of sets; it decreases the size of the index by 1 bit and increases the size of the tag by 1 bit.
Address fields: Tag (used for tag compare) | Index (selects the set) | Block offset (selects the word in the block) | Byte offset
Decreasing associativity: direct mapped (only one way); smaller tags, only a single comparator.
Increasing associativity: fully associative (only one set); the tag is all the bits except the block and byte offset.

Costs of Set-Associative Caches
When a miss occurs, which way's block is selected for replacement?
Least Recently Used (LRU): the one that has been unused the longest.
  Must track when each way's block was used relative to the other blocks in the set.
  For a 2-way SA $, one bit per set: set the bit when a block is referenced, and reset the other way's bit (i.e., mark it "last used").
An N-way set-associative cache costs:
  N comparators (delay and area)
  MUX delay (way selection) before data is available
  Data is available only after way selection and the Hit/Miss decision. In a DM $ the block is available before the Hit/Miss decision, so it is possible to just assume a hit and continue, recovering later if it was a miss; that is not possible with a SA $.

Cache Block Replacement Policies
Random replacement: hardware randomly selects a cache entry and throws it out.
Least Recently Used: hardware keeps track of access history and replaces the entry that has not been used for the longest time. For a 2-way set-associative cache, one bit per set suffices for LRU replacement.

Example of a Simple Pseudo-LRU Implementation
Assume 64 fully associative entries. A hardware replacement pointer points to one cache entry. Whenever an access is made to the entry the pointer points to, move the pointer to the next entry; otherwise, do not move the pointer. On a miss, the entry under the pointer is replaced.
  Replacement Pointer -> Entry 0, Entry 1, ..., Entry 63

Benefits of Set-Associative Caches
The choice of DM $ vs. SA $ depends on the cost of a miss versus the cost of implementation. The largest gains come from going from direct mapped to 2-way (20%+ reduction in miss rate).

3Cs Revisited
Three sources of misses (SPEC integer and floating-point benchmarks):
  Compulsory misses are a negligible fraction (too small to be visible on the graph).
  Capacity misses are a function of cache size.
  The conflict portion depends on both associativity and cache size.

Two Machines' Cache Parameters

                          Intel Nehalem                         AMD Barcelona
L1 organization & size    Split I$ and D$; 32KB each per        Split I$ and D$; 64KB each per
                          core; 64B blocks                      core; 64B blocks
L1 associativity          4-way (I), 8-way (D) set assoc.;      2-way set assoc.; LRU
                          ~LRU replacement                      replacement
L1 write policy           write-back, write-allocate            write-back, write-allocate
L2 organization & size    Unified; 256KB (0.25MB) per core;     Unified; 512KB (0.5MB) per
                          64B blocks                            core; 64B blocks
L2 associativity          8-way set assoc.; ~LRU                16-way set assoc.; ~LRU
L2 write policy           write-back, write-allocate            write-back, write-allocate
L3 organization & size    Unified; 8192KB (8MB) shared by       Unified; 2048KB (2MB) shared
                          cores; 64B blocks                     by cores; 64B blocks
L3 associativity          16-way set assoc.                     32-way set assoc.; evict block
                                                                shared by fewest cores
L3 write policy           write-back, write-allocate            write-back; write-allocate
Summary
The name of the game: reduce cache misses.
Two different memory blocks mapping to the same cache block can knock each other out as the program bounces from one memory location to the next.
One way to reduce conflict misses is set associativity: a memory block maps into more than one cache block.
N-way: N possible places in the cache to hold a given memory block.
An N-way cache of N × M blocks: N ways × M sets.