CS 61C: Great Ideas in Computer Architecture (Machine Structures) — More Cache: Set Associativity (11/6/12)

Instructors: Krste Asanovic, Randy H. Katz
http://inst.eecs.berkeley.edu/~cs61c/fa12

Review: Big Ideas of Instruction-Level Parallelism
- Pipelining, hazards, and stalls
- Forwarding, speculation to overcome hazards
- Multiple issue to increase performance: IPC instead of CPI
- Dynamic execution: superscalar in-order issue, branch prediction, register renaming, out-of-order execution, in-order commit; unroll loops in HW, hide cache misses

You Are Here! (Harness Parallelism & Achieve High Performance)
- Parallel requests, assigned to a computer (e.g., search "Katz") — the warehouse scale computer
- Parallel threads, assigned to a core (e.g., lookup, ads)
- Parallel instructions, >1 instruction @ one time (e.g., 5 pipelined instructions)
- Parallel data, >1 data item @ one time (e.g., add of 4 pairs of words: A0+B0, A1+B1, A2+B2, A3+B3)
- Hardware descriptions: all gates @ one time
- Programming languages
- [Figure: levels from smart phone and warehouse scale computer down through computer, core (instruction unit(s), functional unit(s)), cache memory, and logic gates; today's lecture: the cache/memory level]

Agenda
- Cache recap
- Administrivia
- Set-associative caches
- AMAT and multilevel cache review
- Nehalem memory hierarchy

Recap: Components of a Computer
- Processor: control, datapath
- Memory: cache, main memory
- Secondary memory (disk)
- Devices: input, output

Recap: Typical Memory Hierarchy
- Take advantage of the principle of locality to present the user with as much memory as is available in the cheapest technology, at the speed offered by the fastest technology
- On-chip components: control, datapath, RegFile, instruction cache, data cache; then second-level cache (SRAM), main memory (DRAM), secondary memory (disk)
- Speed (cycles): ½'s, 1's, 10's, 100's, 10,000's
- Size (bytes): 100's, 10K's, M's, G's, T's
- Cost: highest to lowest
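The principle of locality the hierarchy exploits can be seen directly in code. The following is a hedged illustration, not from the slides; the array size and loop orders are arbitrary choices. The row-major loop walks addresses sequentially and reuses each fetched cache block (spatial locality), while the column-major loop strides across blocks and misses far more often on a typical 64B-block cache:

```c
#include <stdio.h>

#define N 1024

/* Sum a matrix two ways. Row-major order has good spatial locality:
 * consecutive accesses fall in the same cache block. Column-major
 * order strides N*sizeof(int) bytes between accesses, touching a
 * different block on every access. */
int main(void) {
    static int a[N][N];
    long sum = 0;

    for (int i = 0; i < N; i++)        /* row-major: cache-friendly */
        for (int j = 0; j < N; j++)
            sum += a[i][j];

    for (int j = 0; j < N; j++)        /* column-major: cache-hostile */
        for (int i = 0; i < N; i++)
            sum += a[i][j];

    printf("%ld\n", sum);
    return 0;
}
```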

Recap: Cache Performance and Average Memory Access Time (AMAT)
- CPU time = IC × CPI × CC = IC × (CPI_ideal + Memory-stall cycles) × CC, where CPI_stall = CPI_ideal + Memory-stall cycles
- Memory-stall cycles = Read-stall cycles + Write-stall cycles
- Read-stall cycles = reads/program × read miss rate × read miss penalty
- Write-stall cycles = (writes/program × write miss rate × write miss penalty) + write buffer stalls
- AMAT is the average time to access memory considering both hits and misses: AMAT = Time for a hit + Miss rate × Miss penalty

Improving Cache Performance
1. Reduce the time to hit in the cache: e.g., smaller cache, direct-mapped cache, special tricks for handling writes
2. Reduce the miss rate: e.g., bigger cache, larger blocks, more flexible placement (increase associativity)
3. Reduce the miss penalty: e.g., smaller blocks or critical word first in large blocks, special tricks for handling writes, faster/higher-bandwidth memories, use multiple cache levels

Sources of Cache Misses: The 3Cs
- Compulsory (cold start or process migration, 1st reference): first access to a block; impossible to avoid; small effect for long-running programs. Solution: increase block size (increases miss penalty; very large blocks could increase miss rate)
- Capacity: cache cannot contain all blocks accessed by the program. Solution: increase cache size (may increase access time)
- Conflict (collision): multiple memory locations mapped to the same cache location. Solution 1: increase cache size. Solution 2: increase associativity (may increase access time)

Reducing Cache Misses
- Allow more flexible block placement
- Direct-mapped $: a memory block maps to exactly one cache block
- Fully associative $: allow a memory block to be mapped to any cache block
- Compromise: divide $ into sets, each of which consists of n ways (n-way set associative); a memory block maps to a unique set, determined by the index field, and is placed in any of the n ways of that set
- Calculation: (block address) modulo (# sets in the cache)

Alternative Block Placement Schemes
- DM placement: memory block 12 in an 8-block cache: only one cache block where the block can be found, (12 modulo 8) = 4
- SA placement: four sets × 2 ways (8 cache blocks); memory block 12 is in set (12 mod 4) = 0, in either element of the set
- FA placement: memory block 12 can appear in any of the 8 cache blocks
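As a small hedged sketch (not from the slides), the placement rule above is one line of C; the constants mirror the block-12 example:

```c
#include <stdio.h>

/* Where can memory block 12 live in an 8-block cache?
 * set = (block address) modulo (number of sets);
 * the block may occupy any of the ways within that set. */
static unsigned set_index(unsigned block_addr, unsigned num_sets) {
    return block_addr % num_sets;
}

int main(void) {
    unsigned block = 12, blocks = 8;

    /* direct mapped: 8 sets of 1 way each */
    printf("DM: set %u (the only choice)\n", set_index(block, blocks / 1));
    /* 2-way set associative: 4 sets of 2 ways each */
    printf("SA: set %u (either of 2 ways)\n", set_index(block, blocks / 2));
    /* fully associative: 1 set of 8 ways */
    printf("FA: set %u (any of 8 ways)\n",    set_index(block, blocks / 8));
    return 0;
}
```

This prints set 4 for direct mapped, set 0 for 2-way set associative, and set 0 (the only set) for fully associative, matching the slide's arithmetic.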

Example: 4-Word Direct-Mapped $, Worst-Case Reference String
- Consider the main memory word reference string 0 4 0 4 0 4 0 4; start with an empty cache, all blocks initially marked as not valid
- 0 miss, 4 miss, 0 miss, 4 miss, 0 miss, 4 miss, 0 miss, 4 miss: Mem(0) and Mem(4) keep evicting each other from the same cache block
- 8 requests, 8 misses
- Ping-pong effect due to conflict misses: two memory locations that map into the same cache block

Example: 2-Way Set-Associative $
- 4 words = 2 sets × 2 ways per set
- [Figure: cache with Way/Set/V/Tag/Data columns beside main memory; one-word blocks, with the two low-order address bits defining the byte in the word (32-bit words)]
- Q1: Is it there? Compare all the cache tags in the set to the high-order 3 memory address bits to tell if the memory block is in the cache
- Q2: How do we find it? Use the next low-order memory address bit to determine which cache set (i.e., modulo the number of sets in the cache)

Example: 4-Word 2-Way SA $, Same Reference String
- Consider the main memory word reference string 0 4 0 4 0 4 0 4; start with an empty cache, all blocks initially marked as not valid
- 0 miss, 4 miss, 0 hit, 4 hit: after the two compulsory misses, Mem(0) and Mem(4) co-exist in the two ways of the set
- 8 requests, 2 misses
- Solves the ping-pong effect in a direct-mapped cache due to conflict misses, since now two memory locations that map into the same cache set can co-exist!

Example: Eight-Block Cache with Different Organizations
- Total size of $ in blocks is equal to number of sets × associativity
- For fixed $ size, increasing associativity decreases the number of sets while increasing the number of elements per set
- With eight blocks, an 8-way set-associative $ is the same as a fully associative $

Four-Way Set-Associative Cache
- 2^8 = 256 sets, each with four ways (each with one block)
- [Figure: 32-bit address split into 22-bit tag, 8-bit index, and 2-bit byte offset; the index selects one of sets 0..255, four tag comparators check the ways in parallel, and a 4-to-1 mux selects the hit way's data]
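The two worked examples can be reproduced in a few lines of C. This is a hedged teaching toy assuming one-word blocks and whole block addresses stored as tags, not a real simulator:

```c
#include <stdio.h>

/* Reference string from the slides: word addresses 0,4,0,4,... */
static const int refs[8] = {0, 4, 0, 4, 0, 4, 0, 4};

int main(void) {
    /* Direct-mapped, 4 one-word blocks: address maps to (addr % 4). */
    int dm[4] = {-1, -1, -1, -1}, dm_miss = 0;
    for (int i = 0; i < 8; i++) {
        int set = refs[i] % 4;
        if (dm[set] != refs[i]) { dm[set] = refs[i]; dm_miss++; }
    }

    /* 2-way set associative, 2 sets (same 4 blocks), LRU within a set.
     * lru[set] holds the index of the least-recently-used way. */
    int sa[2][2] = {{-1, -1}, {-1, -1}}, lru[2] = {0, 0}, sa_miss = 0;
    for (int i = 0; i < 8; i++) {
        int set = refs[i] % 2;
        if (sa[set][0] == refs[i])      lru[set] = 1;   /* hit way 0 */
        else if (sa[set][1] == refs[i]) lru[set] = 0;   /* hit way 1 */
        else {                                          /* miss: fill LRU way */
            sa[set][lru[set]] = refs[i];
            lru[set] ^= 1;
            sa_miss++;
        }
    }

    printf("direct-mapped: %d misses, 2-way LRU: %d misses\n",
           dm_miss, sa_miss);   /* prints 8 and 2, as on the slides */
    return 0;
}
```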

Flashcard Quiz: For fixed capacity and fixed block size, how does increasing associativity affect AMAT?

Range of Set-Associative Caches
- For a fixed-size cache, each increase by a factor of two in associativity doubles the number of blocks per set (i.e., the number of ways) and halves the number of sets; it decreases the size of the index by 1 bit and increases the size of the tag by 1 bit
- Address fields: Tag (used for tag compare) | Index (selects the set) | Block offset (selects the word in the block) | Byte offset
- Decreasing associativity → direct mapped (only one way): smaller tags, only a single comparator
- Increasing associativity → fully associative (only one set): tag is all the bits except block and byte offset

Costs of Set-Associative Caches
- When a miss occurs, which way's block is selected for replacement?
- Least Recently Used (LRU): the one that has been unused the longest. Must track when each way's block was used relative to the other blocks in the set. For a 2-way SA $, one bit per set: set to 1 when a block is referenced, and reset the other way's bit (i.e., "last used")
- An N-way set-associative cache costs: N comparators (delay and area); MUX delay (set selection) before data is available; data available only after set selection (and the Hit/Miss decision). In a DM $ the block is available before the Hit/Miss decision; in a set-associative cache it is not possible to just assume a hit, continue, and recover later if it was a miss

Cache Block Replacement Policies
- Random replacement: hardware randomly selects a cache block and throws it out
- Least Recently Used: hardware keeps track of access history and replaces the entry that has not been used for the longest time; for a 2-way set-associative cache, one bit per set suffices for LRU replacement
- Example of a simple pseudo-LRU implementation: assume 64 fully associative entries, with a hardware replacement pointer pointing to one cache entry. Whenever an access is made to the entry the pointer points to, move the pointer to the next entry; otherwise, do not move the pointer

Benefits of Set-Associative Caches
- Choice of DM $ or SA $ depends on the cost of a miss versus the cost of implementation
- Largest gains are in going from direct mapped to 2-way (20%+ reduction in miss rate)
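A hedged sketch of the tag/index/offset arithmetic just described; the 16KB capacity, 64B block size, and 32-bit address width are example assumptions, not parameters from the lecture:

```c
#include <stdio.h>

/* Integer log2 for power-of-two inputs. */
static int log2i(unsigned x) { int n = 0; while (x >>= 1) n++; return n; }

/* For a cache of 'cap' bytes with 'blk'-byte blocks and 'ways' ways:
 *   sets   = cap / (blk * ways)
 *   offset = log2(blk) bits, index = log2(sets) bits, tag = the rest.
 * Doubling associativity halves the sets: the index shrinks by 1 bit
 * and the tag grows by 1 bit, exactly as the slide describes. */
static void split(unsigned cap, unsigned blk, unsigned ways, int addr_bits) {
    unsigned sets = cap / (blk * ways);
    int off = log2i(blk), idx = log2i(sets), tag = addr_bits - idx - off;
    printf("%2u-way: %4u sets -> tag %2d, index %2d, offset %d bits\n",
           ways, sets, tag, idx, off);
}

int main(void) {
    /* Example: 16KB cache, 64B blocks, 32-bit addresses. */
    for (unsigned w = 1; w <= 8; w *= 2)
        split(16 * 1024, 64, w, 32);
    return 0;
}
```

For this example cache the output walks from 256 sets (18-bit tag, 8-bit index) at 1-way down to 32 sets (21-bit tag, 5-bit index) at 8-way.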

Administrivia
- Final Exam: Monday, Dec 10, 11:30-2:30, 220/230/240 Hearst Gym

CS61C in the News: End of MIPS?
"ARM, Imagination divvy up MIPS," Junko Yoshida, EE Times, 11/6/12:
NEW YORK - MIPS Technologies, which had been on the block for almost a year, finally found buyers in a complicated deal involving Imagination Technologies and ARM. Imagination said Tuesday (Nov. 6) it has agreed to buy MIPS' operating business for $60 million. Under the terms of the deal, the UK graphics IP vendor will gain 160 engineers and 82 MIPS patents. The move is viewed as a way for Imagination to beef up its CPU core expertise while defending its graphics lead. It would also position Imagination to compete against ARM, which has been pursuing its integrated CPU-GPU solution strategy. Separately, ARM said it will lead a consortium buying the rights to the MIPS portfolio of 498 patents. The consortium, called Bridge Crossing LLC, will pay $350 million in cash to purchase the rights to the portfolio, of which ARM will contribute $167.5 million.

How to Calculate the 3Cs Using a Cache Simulator
1. Compulsory: set the cache size to infinity and fully associative, and count the number of misses
2. Capacity: change the cache size from infinity, usually in powers of 2, and count misses for each reduction in size: 16 MB, 8 MB, 4 MB, ..., 128 KB, 64 KB, 16 KB
3. Conflict: change from fully associative to n-way set associative while counting misses: fully associative, 16-way, 8-way, 4-way, 2-way, 1-way

3Cs Revisited
- Three sources of misses (SPEC2000 integer and floating-point benchmarks)
- Compulsory misses 0.006%: not visible
- Capacity misses: a function of cache size
- Conflict portion: depends on associativity and cache size

Improving Cache Performance
1. Reduce the time to hit in the cache: e.g., smaller cache, direct-mapped cache, special tricks for handling writes
2. Reduce the miss rate: e.g., bigger cache, larger blocks, more flexible placement (increase associativity)
3. Reduce the miss penalty: e.g., smaller blocks or critical word first in large blocks, special tricks for handling writes, faster/higher-bandwidth memories, use multiple cache levels

Reduce AMAT
- Use multiple levels of cache
- As technology advances, there is more room on the IC die for a larger L1$ or for additional levels of cache (e.g., L2$ and L3$)
- Normally the higher cache levels are unified, holding both instructions and data
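To see why multiple levels reduce AMAT, here is a hedged two-level calculation; every rate and latency is an invented teaching number, not a measurement from the lecture. The L1 miss penalty becomes the AMAT of the L2, using each level's local miss rate:

```c
#include <stdio.h>

/* Two-level AMAT: the L1 miss penalty is itself the AMAT of the L2.
 * Miss rates here are local (misses per access *to that level*). */
int main(void) {
    double l1_hit = 1,  l1_miss = 0.05;   /* illustrative values  */
    double l2_hit = 10, l2_miss = 0.25;   /* local L2 miss rate   */
    double dram   = 200;                  /* cycles to main memory */

    double l2_amat = l2_hit + l2_miss * dram;      /* 10 + 50 = 60 */
    double amat    = l1_hit + l1_miss * l2_amat;   /* 1  + 3  = 4  */
    double no_l2   = l1_hit + l1_miss * dram;      /* 1  + 10 = 11 */

    printf("AMAT with L2 = %.1f cycles, without L2 = %.1f cycles\n",
           amat, no_l2);
    return 0;
}
```

With these made-up numbers the L2 cuts AMAT from 11 cycles to 4, even though it only hits three quarters of the time.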

Design Considerations
- Different design considerations apply to the L1$ and the L2$
- L1$ focuses on fast access: minimize hit time to achieve a shorter clock cycle, e.g., a smaller $
- L2$, L3$ focus on low miss rate: reduce the penalty of long main memory access times, e.g., a larger $ with larger block sizes / higher levels of associativity
- The miss penalty of the L1$ is significantly reduced by the presence of the L2$, so the L1$ can be smaller/faster even with a higher miss rate
- For the L2$, fast hit time is less important than low miss rate; the L2$ hit time determines the L1$'s miss penalty
- L2$ local miss rate >> the global miss rate

Flashcard Quiz: In a machine with two levels of cache, what effect does increasing L1 capacity have on the L2 local miss rate?

Two Machines' Cache Parameters (Intel Nehalem vs. AMD Barcelona)
- L1 cache organization & size: Nehalem: split I$ and D$, 32KB each per core, 64B blocks | Barcelona: split I$ and D$, 64KB each per core, 64B blocks
- L1 associativity: Nehalem: 4-way (I), 8-way (D) set assoc., ~LRU replacement | Barcelona: 2-way set assoc., LRU replacement
- L1 write policy: Nehalem: write-back, write-allocate | Barcelona: write-back, write-allocate
- L2 cache organization & size: Nehalem: unified, 256KB (0.25MB) per core, 64B blocks | Barcelona: unified, 512KB (0.5MB) per core, 64B blocks
- L2 associativity: Nehalem: 8-way set assoc., ~LRU | Barcelona: 16-way set assoc., ~LRU
- L2 write policy: Nehalem: write-back | Barcelona: write-back
- L3 cache organization & size: Nehalem: unified, 8192KB (8MB) shared by cores, 64B blocks | Barcelona: unified, 2048KB (2MB) shared by cores, 64B blocks
- L3 associativity: Nehalem: 16-way set assoc. | Barcelona: 32-way set assoc., evicts the block shared by the fewest cores
- L3 write policy: Nehalem: write-back, write-allocate | Barcelona: write-back, write-allocate

Nehalem Memory Hierarchy Overview
- Private L1/L2 per core; local memory access latency ~60ns
- Each core: 32KB L1 I$, 32KB L1 D$, 256KB L2$; 4-8 cores share an 8MB L3$
- 3 DDR3 DRAM controllers; each DRAM channel is 64/72b wide, at up to 1.33GT/s
- QuickPath system interconnect between sockets; each direction is 20b @ 6.4GT/s
- L3 is fully inclusive of the higher levels (but L2 is not inclusive of L1)
- Other sockets' caches are kept coherent using QuickPath messages

Intel Nehalem Die Photo
- [Die photo: 4 cores, each with 32KB I$, 32KB D$, and a private L2$, sharing one 8MB L3$]

Cache Hierarchy Latencies
- L1 I & D: 32KB, 8-way, latency 4 cycles, 64B blocks
- L2: 256KB, 8-way, latency <12 cycles
- L3: 8MB, 16-way, latency 30-40 cycles
- DRAM: latency ~180-200 cycles
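The local-versus-global distinction can be made concrete with invented rates (a hedged sketch, not data from the slides): if 5% of all accesses miss in L1, and half of those also miss in L2, then the L2 local miss rate is 50% while its global miss rate is only 2.5%:

```c
#include <stdio.h>

int main(void) {
    double l1_global = 0.05;  /* L1 misses per CPU access (illustrative) */
    double l2_local  = 0.50;  /* L2 misses per L2 access (illustrative)  */

    /* Global L2 miss rate: fraction of *all* CPU accesses that must
     * go all the way to DRAM. */
    double l2_global = l1_global * l2_local;

    printf("L2 local miss rate  = %.1f%%\n", 100 * l2_local);
    printf("L2 global miss rate = %.1f%%\n", 100 * l2_global);
    return 0;
}
```

This is why a seemingly poor L2 local miss rate can still be fine: the L2 only ever sees the accesses the L1 already filtered.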

Core's Private Memory System
- Load queue: 48 entries; store queue: 32 entries; divided statically between the two threads
- Up to 16 outstanding misses in flight per core

All Sockets Can Access All Data
- Local memory access latency ~60ns; remote access ~100ns
- How do we ensure that data gets allocated to local DRAM? Linux doesn't allocate pages to physical memory after malloc until the first access to the page: be sure to touch what each CPU wants, nearby
- Such systems are called NUMA, for Non-Uniform Memory Access: some addresses are slower than others
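On Linux, the first-touch behavior described above can be exploited explicitly. Here is a hedged pthreads sketch; the thread count and array size are arbitrary, and pinning threads to cores (which a real NUMA-aware program would also do) is omitted:

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NTHREADS 4
#define N (1 << 24)

static double *data;

/* Each thread writes its own slice first. Under Linux's first-touch
 * policy the backing pages are allocated in DRAM local to the socket
 * running that thread, so later accesses by the same thread stay local. */
static void *first_touch(void *arg) {
    long id = (long)arg;
    long lo = id * (N / NTHREADS), hi = lo + N / NTHREADS;
    for (long i = lo; i < hi; i++)
        data[i] = 0.0;
    return NULL;
}

int main(void) {
    pthread_t t[NTHREADS];
    data = malloc(N * sizeof(double));   /* no physical pages yet */
    if (!data) return 1;

    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, first_touch, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);

    puts("pages placed near the threads that touched them");
    free(data);
    return 0;
}
```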

Cache Design Space
- Several interacting dimensions: cache size, block size, associativity, replacement policy, write-through vs. write-back, write allocation
- The optimal choice is a compromise: it depends on access characteristics (workload; use: I-cache, D-cache) and on technology/cost
- Simplicity often wins
- [Figure: design-space sketch over cache size, associativity, and block size, with "Good" and "Bad" regions plotted against Factor A and Factor B, from Less to More]

Summary: Name of the Game Is to Reduce Cache Misses
- Memory blocks mapping to the same cache block knock each other out as the program bounces from one memory location to the next
- One way to do it: set associativity. A memory block maps into more than 1 cache block; N-way: N possible places in the cache to hold a memory block. An N-way cache of 2^(N+M) blocks: 2^N ways × 2^M sets
- Multi-level caches: optimize the first level to be fast! Optimize the 2nd and 3rd levels to minimize the memory access penalty