
CS 61C: Great Ideas in Computer Architecture (Machine Structures)
Set-Associative Caches
Instructors: Randy H. Katz, David A. Patterson
http://inst.eecs.berkeley.edu/~cs61c/fa10

Recap: Components of a Computer
Processor (Control, Datapath); Memory (Cache, Main Memory); Devices (Input, Output); Secondary Memory (Disk).

Recap: Typical Memory Hierarchy
Take advantage of the principle of locality to present the user with as much memory as is available in the cheapest technology, at the speed offered by the fastest technology.
- On-chip components: Control, Datapath, RegFile, ITLB, DTLB, Instruction Cache, Data Cache
- Second-level cache (SRAM)
- Main memory (DRAM)
- Secondary memory (disk)
Moving away from the processor, access time grows from fractions of a cycle to thousands of cycles, capacity grows from hundreds of bytes to terabytes, and cost per byte drops from highest to lowest.

Recap: Cache Performance and Average Memory Access Time (AMAT)
CPU Time = IC × CPI_stall × CC = IC × (CPI_ideal + Memory-stall cycles) × CC
Memory-stall cycles = Read-stall cycles + Write-stall cycles
Read-stall cycles = reads/program × read miss rate × read miss penalty
Write-stall cycles = (writes/program × write miss rate × write miss penalty) + write buffer stalls
AMAT is the average time to access memory considering both hits and misses:
AMAT = Time for a hit + Miss rate × Miss penalty
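To make the AMAT formula concrete, here is a minimal sketch in Python. The function name and the numbers are mine, chosen for illustration; only the formula itself comes from the slide.

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average Memory Access Time = time for a hit + miss rate * miss penalty."""
    return hit_time + miss_rate * miss_penalty

# Illustrative numbers: 1-cycle hit, 5% miss rate, 100-cycle miss penalty.
print(amat(1, 0.05, 100))  # 6.0 cycles per memory access on average
```

Note how a modest 5% miss rate dominates the average: the miss term contributes five of the six cycles.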

Improving Cache Performance
- Reduce the time to hit in the cache: e.g., a smaller cache, a direct-mapped cache, smaller blocks, special tricks for handling writes.
- Reduce the miss rate: e.g., a bigger cache, larger blocks, more flexible placement (increased associativity).
- Reduce the miss penalty: e.g., smaller blocks, or critical word first in large blocks; special tricks for handling writes; faster / higher-bandwidth memories; multiple cache levels.

Sources of Cache Misses: The 3Cs
- Compulsory (cold start or process migration; 1st reference): the first access to a block. Impossible to avoid; a small effect for long-running programs. Solution: increase block size (increases miss penalty; very large blocks could increase miss rate).
- Capacity: the cache cannot contain all blocks accessed by the program. Solution: increase cache size (may increase access time).
- Conflict (collision): multiple memory locations map to the same cache location. Solution 1: increase cache size. Solution 2: increase associativity (may increase access time).

Reducing Cache Misses
Allow more flexible block placement:
- Direct mapped: a memory block maps to exactly one cache block.
- Fully associative: a memory block may be mapped to any cache block.
- Compromise: divide the cache into sets, each consisting of n ways (n-way set associative). A memory block maps to a unique set, determined by its index field, and may be placed in any of the n ways of that set.
Set calculation: (block address) modulo (# sets in the cache)

Alternative Block Placement Schemes
- DM placement: memory block 12 in an 8-block cache has only one cache block where it can go: (12 modulo 8) = 4.
- SA placement: four sets × 2 ways (8 cache blocks); memory block 12 goes in set (12 mod 4) = 0, in either element of that set.
- FA placement: memory block 12 can appear in any of the 8 cache blocks.

[Chart: late-breaking results from Project #3, Part I (sgemm): class data vs. Peak, Simple, and Blocked performance in MFlop/s, with 3 levels of blocking and loop unrolling. Thanks, TA Andrew!]
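The three placement schemes above can be sketched in a few lines of Python; this mirrors the block-12-in-an-8-block-cache example. The function name is mine, not from the slides.

```python
def candidate_blocks(block_addr, num_blocks, ways):
    """Return the cache block indices where a memory block may be placed
    in a cache of num_blocks blocks organized into sets of `ways` ways."""
    num_sets = num_blocks // ways
    s = block_addr % num_sets           # index field: (block address) mod (# sets)
    return [s * ways + w for w in range(ways)]

# Memory block 12 in an 8-block cache:
print(candidate_blocks(12, 8, 1))  # direct mapped: [4], since 12 mod 8 = 4
print(candidate_blocks(12, 8, 2))  # 2-way: set 12 mod 4 = 0, blocks [0, 1]
print(candidate_blocks(12, 8, 8))  # fully associative: any of the 8 blocks
```

Direct mapped and fully associative fall out as the two extremes: 1 way per set and 1 set total, respectively.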

[Chart: Project #3, Part I performance histogram: number of sgemm submissions per MFlop/s bucket. Thanks, TA Andrew!]

Administrivia
- Posted an optional extra Logisim at-home lab this week.
- Project 3: TLP + DLP + cache optimization (due /3).
- Project 4: single-cycle processor in Logisim (by tonight); Part due Saturday /7; face-to-face in lab.
- EC: fastest Project 3 (due /9).
- Final review: Mon Dec 6, 3 hours, afternoon (TBD).
- Final: Mon Dec 13, 8-11 AM (TBD). Like the midterm: T/F, multiple choice, short answers. Covers the whole course: readings, lectures, projects, labs, homework, with emphasis on the 2nd half of 61C plus midterm mistakes.

Example: 4-Word Direct-Mapped Cache, Worst-Case Reference String
Consider the reference string 0 4 0 4 0 4 0 4, starting with an empty cache (all blocks initially marked invalid). Words 0 and 4 map to the same cache block, so each access evicts the other word:
0 miss, 4 miss, 0 miss, 4 miss, 0 miss, 4 miss, 0 miss, 4 miss
8 requests, 8 misses.
This ping-pong effect is due to conflict misses: two memory locations that map into the same cache block repeatedly knock each other out.
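The ping-pong behavior can be reproduced with a short simulation of a direct-mapped cache with one-word blocks. This is a sketch under the slide's assumptions (4 blocks, word addresses); the function name is mine.

```python
def dm_misses(refs, num_blocks=4):
    """Count misses in a direct-mapped cache of one-word blocks."""
    cache = [None] * num_blocks         # block address currently held in each slot
    misses = 0
    for addr in refs:
        idx = addr % num_blocks         # block address modulo number of blocks
        if cache[idx] != addr:
            misses += 1                 # miss: evict whatever occupied this slot
            cache[idx] = addr
    return misses

print(dm_misses([0, 4, 0, 4, 0, 4, 0, 4]))  # 8 requests, 8 misses
```

Words 0 and 4 both map to slot 0 (0 mod 4 = 4 mod 4 = 0), so every reference is a conflict miss.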

Example: 2-Way Set-Associative Cache
Cache layout: 4 words = 2 sets × 2 ways per set, with one-word blocks. Each entry holds a valid bit, a tag, and the data.
- Q1: How do we find it? The two low-order address bits define the byte in the word (32-bit words); the next low-order bit determines the cache set (i.e., the block address modulo the number of sets in the cache).
- Q2: Is it there? Compare all the cache tags in the selected set against the high-order 3 memory address bits to tell whether the memory block is in the cache.

Example: 4-Word 2-Way SA Cache, Same Reference String
Reference string 0 4 0 4 0 4 0 4, starting with an empty cache:
0 miss, 4 miss, 0 hit, 4 hit, 0 hit, 4 hit, 0 hit, 4 hit
8 requests, 2 misses.
This solves the ping-pong effect seen in the direct-mapped cache: two memory locations that map into the same cache set can now co-exist, one per way.

Example: Eight-Block Cache with Different Organizations
The total cache size in blocks equals (number of sets) × (associativity). For a fixed cache size, increasing associativity decreases the number of sets while increasing the number of elements per set. With eight blocks, an 8-way set-associative cache is the same as a fully associative cache.

Four-Way Set-Associative Cache
2^8 = 256 sets, each with four ways (each way holding one block). The address splits into tag, index (selects the set), and byte offset; the four ways are probed in parallel, and a 4-to-1 multiplexor selects the data from the way that hits.

Range of Set-Associative Caches
For a fixed-size cache, each factor-of-two increase in associativity doubles the number of blocks per set (i.e., the number of ways) and halves the number of sets; it decreases the size of the index by 1 bit and increases the size of the tag by 1 bit.
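The same reference string can be run through a small set-associative simulator with LRU replacement to confirm the 8-requests, 2-misses result. This is a sketch, not the lecture's notation; the function name is mine, and Python's OrderedDict stands in for the hardware's LRU tracking.

```python
from collections import OrderedDict

def sa_misses(refs, num_sets=2, ways=2):
    """Count misses in an n-way set-associative cache with LRU replacement."""
    sets = [OrderedDict() for _ in range(num_sets)]  # insertion order = LRU order
    misses = 0
    for addr in refs:
        s = sets[addr % num_sets]       # index bit selects the set
        if addr in s:
            s.move_to_end(addr)         # hit: mark as most recently used
        else:
            misses += 1
            if len(s) == ways:
                s.popitem(last=False)   # evict the least recently used way
            s[addr] = None
    return misses

print(sa_misses([0, 4, 0, 4, 0, 4, 0, 4]))  # 8 requests, 2 misses
```

Words 0 and 4 still map to the same set, but with two ways they occupy different ways, so only the two cold-start references miss.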

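The index/tag trade-off can be checked numerically: for a fixed cache size, each doubling of associativity removes one index bit and adds one tag bit. A sketch, assuming 32-bit addresses, one-word blocks, and a 2-bit byte offset; the function name is mine.

```python
import math

def field_widths(cache_blocks, ways, addr_bits=32, block_offset_bits=0,
                 byte_offset_bits=2):
    """Return (tag_bits, index_bits) for a set-associative cache."""
    num_sets = cache_blocks // ways
    index_bits = int(math.log2(num_sets)) if num_sets > 1 else 0
    tag_bits = addr_bits - index_bits - block_offset_bits - byte_offset_bits
    return tag_bits, index_bits

# An 8-block cache at increasing associativity:
for ways in (1, 2, 4, 8):
    tag, idx = field_widths(8, ways)
    print(f"{ways}-way: tag = {tag} bits, index = {idx} bits")
```

At 1 way (direct mapped) the index is widest and the tag smallest; at 8 ways (fully associative) the index disappears and the tag is all the bits except the offsets, matching the slide's two extremes.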
Range of Set-Associative Caches (continued)
In the address breakdown, the tag is used for the tag compare, the index selects the set, and the block offset selects the word in the block (the byte offset selects the byte in the word). A direct-mapped cache has only one way: smaller tags and only a single comparator. A fully associative cache has only one set: the tag is all the bits except the block and byte offsets. Increasing associativity moves bits from the index into the tag.

Costs of Set-Associative Caches
When a miss occurs, which way's block is selected for replacement? Least Recently Used (LRU): the block that has been unused the longest. The hardware must track when each way's block was used relative to the other blocks in the set. For a 2-way SA cache, one bit per set suffices: set it when one way's block is referenced, and reset it for the other way (marking that way "last used").
An N-way set-associative cache costs N comparators (delay and area), plus a multiplexor delay for way selection before the data is available. Data is available only after the way selection and Hit/Miss decision. In a direct-mapped cache the block is available before the Hit/Miss decision, so the processor can assume a hit, continue, and recover later if it was a miss; that is not possible with a set-associative cache.

Cache Block Replacement Policies
- Random replacement: hardware randomly selects a cache entry and throws it out.
- Least Recently Used: hardware keeps track of access history and replaces the entry that has not been used for the longest time. For a 2-way set-associative cache, one bit per set is enough.

Example of a Simple Pseudo-LRU Implementation
Assume 64 fully associative entries. A hardware replacement pointer points to one cache entry (Entry 0, Entry 1, ..., Entry 63). Whenever an access is made to the entry the pointer points to, move the pointer to the next entry; otherwise, do not move the pointer. On a miss, replace the entry under the pointer.

Benefits of Set-Associative Caches
The choice of a direct-mapped or set-associative cache depends on the cost of a miss versus the cost of implementation. The largest gains come from going from direct mapped to 2-way (a 20%+ reduction in miss rate).

3Cs Revisited
Three sources of misses (SPEC integer and floating-point benchmarks):
- Compulsory misses: a tiny fraction, not visible on the chart.
- Capacity misses: a function of cache size.
- Conflict misses: the conflict portion depends on associativity and cache size.

Two Machines' Cache Parameters
L1 organization & size. Nehalem: split I$ and D$, 32KB each per core, 64B blocks. Barcelona: split I$ and D$, 64KB each per core, 64B blocks.
L1 associativity. Nehalem: 4-way (I), 8-way (D) set associative, ~LRU replacement. Barcelona: 2-way set associative, LRU replacement.
L1 write policy. Nehalem: write-back, write-allocate. Barcelona: write-back, write-allocate.
L2 organization & size. Nehalem: unified, 256KB (0.25MB) per core, 64B blocks. Barcelona: unified, 512KB (0.5MB) per core, 64B blocks.
L2 associativity. Nehalem: 8-way set associative, ~LRU. Barcelona: 16-way set associative, ~LRU.
L2 write policy. Nehalem: write-back. Barcelona: write-back.
L3 organization & size. Nehalem: unified, 8192KB (8MB) shared by cores, 64B blocks. Barcelona: unified, 2048KB (2MB) shared by cores, 64B blocks.
L3 associativity. Nehalem: 16-way set associative. Barcelona: 32-way set associative; evicts the block shared by the fewest cores.
L3 write policy. Nehalem: write-back, write-allocate. Barcelona: write-back, write-allocate.
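The pseudo-LRU replacement pointer described above can be sketched as a toy model. The class and method names are mine; only the pointer rule (advance on an access to the pointed-to entry, otherwise stay) comes from the slide.

```python
class PseudoLRU:
    """Replacement pointer over a fully associative cache of n entries.

    An access to the entry the pointer indicates advances the pointer, so
    the pointer tends to rest on an entry that was not recently accessed."""

    def __init__(self, n=64):
        self.n = n
        self.ptr = 0

    def access(self, entry):
        if entry == self.ptr:                    # touched the pointed-to entry:
            self.ptr = (self.ptr + 1) % self.n   # move pointer to the next entry
        # otherwise: do not move the pointer

    def victim(self):
        return self.ptr                          # entry to replace on a miss

p = PseudoLRU(64)
p.access(0)        # pointer was at entry 0, so it advances to entry 1
p.access(5)        # pointer unchanged
print(p.victim())  # 1
```

This approximates LRU with a single pointer instead of per-entry history, which is why it is cheap enough for a 64-entry fully associative structure.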

Summary
The name of the game is to reduce cache misses. Two different memory blocks that map to the same cache block can knock each other out as the program bounces from one memory location to the next. One way to prevent this is set associativity: a memory block maps into more than one cache block. N-way means N possible places in the cache to hold a given memory block; an N-way cache of N×M blocks has N ways × M sets.