Spring 2016 :: CSE 502 Computer Architecture. Caches. Nima Honarmand

Caches Nima Honarmand

Motivation. [Plot: processor vs. memory performance, 1985-2010; the processor-memory performance gap widens steadily.] We want memory to appear as fast as the CPU and as large as required by all of the running applications.

Storage Hierarchy. Make the common case fast: common means temporal and spatial locality; fast means a smaller, more expensive memory. [Pyramid: Registers, Caches (SRAM), Memory (DRAM), [SSD? (Flash)], Disk (magnetic media). Toward the top: faster, more bandwidth, controlled by hardware. Toward the bottom: larger, cheaper, bigger transfers, controlled by software (the OS).] What is S(tatic)RAM vs. D(ynamic)RAM?

Caches: an automatically managed hierarchy. Break memory into blocks (several bytes each) and transfer data to/from the cache in whole blocks, exploiting spatial locality. Keep recently accessed blocks in the cache, exploiting temporal locality. [Figure: Core ↔ $ ↔ Memory.]

Cache Terminology. Block (cache line): minimum unit that may be cached. Frame: cache storage location that holds one block. Hit: the block is found in the cache. Miss: the block is not found in the cache. Miss ratio: fraction of references that miss. Hit time: time to access the cache. Miss penalty: time to replace a block on a miss.

Cache Example. Address sequence from the core (assume 8-byte lines): 0x10000 miss, 0x10004 hit, 0x10120 miss, 0x10008 miss, 0x10124 hit, 0x10004 hit. Memory supplies the blocks at 0x10000, 0x10008, and 0x10120. Final miss ratio is 50%.
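
To make the bookkeeping concrete, here is a minimal sketch in Python (not part of the original slides) that replays this address sequence against an idealized cache with 8-byte lines and unlimited frames, reproducing the 50% miss ratio:

    # Sketch: replay the slide's address sequence against an idealized cache
    # (assumption: unlimited frames, 8-byte lines), confirming the 50% miss ratio.
    LINE_SIZE = 8
    accesses = [0x10000, 0x10004, 0x10120, 0x10008, 0x10124, 0x10004]

    cached_blocks = set()
    misses = 0
    for addr in accesses:
        block = addr // LINE_SIZE            # block (line) address
        if block not in cached_blocks:
            misses += 1                      # miss: fetch the block from memory
            cached_blocks.add(block)
    print("miss ratio = %.0f%%" % (100.0 * misses / len(accesses)))  # -> 50%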

Average Memory Access Time (1/2). Also called AMAT; a very powerful tool for estimating performance. If a cache hit takes 10 cycles (core to L1 and back) and a memory access takes 100 cycles (core to memory and back), then: at a 50% miss ratio, average access = 0.5 × 10 + 0.5 × 100 = 55 cycles; at a 10% miss ratio, average access = 0.9 × 10 + 0.1 × 100 = 19 cycles; at a 1% miss ratio, average access = 0.99 × 10 + 0.01 × 100 ≈ 11 cycles.

Average Memory Access Time (2/2). Generalizes nicely to hierarchies of any depth. If an L1 hit takes 5 cycles (core to L1 and back), an L2 hit takes 20 cycles (core to L2 and back), and a memory access takes 100 cycles (core to memory and back), then at a 20% miss ratio in L1 and a 40% miss ratio in L2: average access = 0.8 × 5 + 0.2 × (0.6 × 20 + 0.4 × 100) ≈ 14 cycles.
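
The arithmetic on these two slides is easy to script; the following is a small sketch (function names are my own, not from the course) that reproduces the numbers above:

    # Sketch: AMAT for one level and for a two-level hierarchy (times in cycles).
    # Note: following the slides, a miss is charged the full memory round-trip.
    def amat_1level(hit_time, miss_ratio, mem_time):
        return (1 - miss_ratio) * hit_time + miss_ratio * mem_time

    def amat_2level(l1_hit, l1_miss, l2_hit, l2_miss, mem_time):
        return (1 - l1_miss) * l1_hit + \
               l1_miss * ((1 - l2_miss) * l2_hit + l2_miss * mem_time)

    print(amat_1level(10, 0.50, 100))            # 55.0
    print(amat_1level(10, 0.10, 100))            # 19.0
    print(amat_1level(10, 0.01, 100))            # ~10.9, i.e. ~11
    print(amat_2level(5, 0.20, 20, 0.40, 100))   # ~14.4, i.e. ~14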

Memory Organization (1/3). L1 is split into separate I$ (instruction cache) and D$ (data cache); L2 and L3 are unified. [Figure: processor registers access the I-TLB/L1 I-Cache and D-TLB/L1 D-Cache, backed by the L2 cache, the L3 cache (LLC), and main memory (DRAM).]

Memory Organization (2/3). L1 and L2 are private per core; L3 is shared. [Figure: Core 0 and Core 1 each have their own registers, I-TLB, L1 I-Cache, D-TLB, L1 D-Cache, and L2 cache, all backed by a shared L3 cache (LLC) and main memory (DRAM).] A multi-core processor replicates the top of the hierarchy.

Memory Organization (3/3). Example: Intel Nehalem (3.3GHz, 4 cores, 2 threads per core) has, per core, a 32K L1-I, a 32K L1-D, and a 256K L2.

SRAM Overview. The 6T SRAM cell: two chained inverters (2 transistors per inverter) maintain a stable state, and 2 access gates connect the cell to the bitline pair (b and its complement). Writing to the cell involves over-powering the storage inverters.

8-bit SRAM Array. [Figure: a single wordline selects a row of eight cells; each cell connects to its own bitline pair.]

8×8-bit SRAM Array. [Figure: eight wordlines, one per row, and eight bitline pairs, one per column.]

Fully-Associative Cache. The address is split into tag[63:6] and block offset[5:0]. Blocks are kept in cache frames, each holding data, state (e.g., a valid bit), and an address tag. The incoming tag is compared against every frame's tag in parallel, making the tag array a Content Addressable Memory (CAM); a multiplexor selects the matching frame's data on a hit. What happens when the cache runs out of space?

The 3 C's of Cache Misses. Compulsory: the block has never been accessed before. Capacity: the block was accessed long ago and has already been replaced. Conflict: neither compulsory nor capacity (more on this later today). Coherence: a fourth C, discussed in the multi-core lecture.

Cache Size. Cache size is the data capacity (don't count tag and state bits). Bigger caches can exploit temporal locality better, but bigger is not always better. Too large a cache: smaller is faster and bigger is slower, so access time may hurt the critical path. Too small a cache: limited temporal locality, and useful data is constantly replaced. [Plot: hit rate rises with capacity and flattens once capacity reaches the working-set size.]

Block Size. The block is the data associated with one address tag; it is not necessarily the unit of transfer between levels of the hierarchy. Too small a block: doesn't exploit spatial locality well, and tag overhead becomes excessive. Too large a block: useless data is transferred, and there are too few total blocks, so useful data is frequently replaced. [Plot: hit rate vs. block size peaks at an intermediate block size.]

Direct-Mapped Cache. The address is split into tag[63:16], index[15:6], and block offset[5:0]; the middle bits are used as the index, so only one tag comparison is needed. [Figure: a decoder selects one frame (data, state, tag) by index; the stored tag is compared against the address tag to determine a hit, and a multiplexor selects the requested word.] Why take the index bits out of the middle?
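
To make the bit slicing concrete, here is a small sketch (assuming the split above: 64-byte blocks, so 6 offset bits, and 1024 sets, so 10 index bits) that extracts the three fields from an address:

    # Sketch: split an address for a direct-mapped cache with 64 B blocks
    # and 1024 sets, i.e. the tag[63:16] / index[15:6] / offset[5:0] split above.
    BLOCK_BITS = 6    # log2(64 B block)
    INDEX_BITS = 10   # log2(1024 sets)

    def split_address(addr):
        offset = addr & ((1 << BLOCK_BITS) - 1)
        index  = (addr >> BLOCK_BITS) & ((1 << INDEX_BITS) - 1)
        tag    = addr >> (BLOCK_BITS + INDEX_BITS)
        return tag, index, offset

    print(tuple(hex(f) for f in split_address(0x10008)))  # (tag, index, offset)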

Cache Conflicts. What if two blocks alias on a frame, i.e., they have the same index but different tags? Address sequence: 0xDEADBEEF (1101 1110 1010 1101 1011 1110 1110 1111), 0xFEEDBEEF (1111 1110 1110 1101 1011 1110 1110 1111), 0xDEADBEEF again. The second access to 0xDEADBEEF experiences a conflict miss: not compulsory (we have seen it before) and not capacity (lots of other indexes are available in the cache).
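
Using the same split as the previous sketch, a quick check showing that the two addresses on this slide land on the same index with different tags:

    # Sketch: 0xDEADBEEF and 0xFEEDBEEF share index (and offset) but differ in tag,
    # so in a direct-mapped cache they fight over the same frame.
    BLOCK_BITS, INDEX_BITS = 6, 10
    for a in (0xDEADBEEF, 0xFEEDBEEF):
        offset = a & ((1 << BLOCK_BITS) - 1)
        index  = (a >> BLOCK_BITS) & ((1 << INDEX_BITS) - 1)
        tag    = a >> (BLOCK_BITS + INDEX_BITS)
        print(hex(a), "tag=%x index=%x offset=%x" % (tag, index, offset))
    # Both print index=2fb offset=2f; the tags differ (dead vs. feed).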

Associativity (1/2). Where does block number 12 (binary 1100) go? Fully-associative: the block can go in any frame (all frames form one set). Set-associative: the block can go in any frame of exactly one set (frames are grouped into sets). Direct-mapped: the block goes in exactly one frame (one frame per set). [Figure: an 8-frame cache shown as 1 set of 8 frames, 4 sets of 2 frames, and 8 sets of 1 frame.]

Associativity (2/2). Larger associativity: lower miss rate (fewer conflicts), but higher power consumption. Smaller associativity: lower cost and faster hit time. [Plot: hit rate vs. associativity, holding cache and block size constant; ~5 is a typical figure for L1-D associativity.]

N-Way Set-Associative Cache. The address is split into tag[63:15], index[14:6], and block offset[5:0]. The index selects one set, and every way of that set is checked. [Figure: per-way decoders select the frame (data, state, tag) at the given index; each way's tag is compared and its data passes through a multiplexor, and the per-way results combine into the overall hit signal.] Note the additional bit(s) moved from the index to the tag.
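
A minimal lookup sketch (a software model, not how the hardware is built): the index picks one set, and the tag is compared against every way in that set, which the hardware does in parallel:

    # Sketch: lookup in an N-way set-associative cache modeled in software.
    # Parameters follow the slide: 64 B blocks (6 offset bits), 512 sets (9 index bits).
    BLOCK_BITS, INDEX_BITS, NUM_WAYS = 6, 9, 4

    sets = [[{"valid": False, "tag": 0, "data": None} for _ in range(NUM_WAYS)]
            for _ in range(1 << INDEX_BITS)]

    def lookup(addr):
        index = (addr >> BLOCK_BITS) & ((1 << INDEX_BITS) - 1)
        tag   = addr >> (BLOCK_BITS + INDEX_BITS)
        for way in sets[index]:              # hardware checks all ways in parallel
            if way["valid"] and way["tag"] == tag:
                return way["data"]           # hit
        return None                          # miss

    print(lookup(0x10000))                   # None: cold cache, everything misses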

Associative Block Replacement. Which block in a set should be replaced on a miss? Ideal replacement (Belady's algorithm): replace the block accessed farthest in the future; trick question: how do you implement it? Least Recently Used (LRU): optimized for temporal locality, but expensive for more than 2-way. Not Most Recently Used (NMRU): track the MRU block and pick randomly among the rest; same as LRU for 2-way sets. Random: nearly as good as LRU, sometimes better (when?). Pseudo-LRU: used in caches with high associativity; examples are Tree-PLRU and Bit-PLRU.
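
As an illustration of LRU bookkeeping for one set (my own sketch, not the course's implementation): every access moves the touched block to the MRU position, and the victim comes from the LRU end.

    # Sketch: LRU replacement for a single 4-way set, keyed by tag.
    from collections import OrderedDict

    class LRUSet:
        def __init__(self, ways=4):
            self.ways = ways
            self.blocks = OrderedDict()      # LRU first, MRU last

        def access(self, tag):
            if tag in self.blocks:
                self.blocks.move_to_end(tag)             # hit: promote to MRU
                return True
            if len(self.blocks) >= self.ways:
                self.blocks.popitem(last=False)          # miss: evict the LRU block
            self.blocks[tag] = None                      # install new block as MRU
            return False

    s = LRUSet()
    print([s.access(t) for t in (1, 2, 3, 4, 1, 5, 2)])
    # [False, False, False, False, True, False, False]: inserting 5 evicts 2 (the LRU),
    # so the later access to 2 misses again.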

Victim Cache (1/2). Associativity is expensive: performance overhead from the extra muxes, and power overhead from reading and checking more tags and data. Conflicts are also expensive: a performance penalty from the extra misses. Observation: conflicts don't occur in all sets.

Victim Cache (2/2). [Figure: an access sequence over blocks A B C D E and J K L M N (plus X Y Z and P Q R) run against a 4-way set-associative L1, with and without a fully-associative victim cache.] Without the victim cache, every access is a miss: ABCDE and JKLMN do not fit in a 4-way set-associative cache. The victim cache provides a fifth way, so long as only four sets overflow into it at the same time; it can even provide a 6th or 7th way. It provides extra associativity, but not for all sets.

Parallel vs. Serial Caches. Tag and data arrays are usually separate (the tag array is smaller and faster); state bits (valid bit, LRU bit(s), ...) are stored along with the tags. Parallel access to tag and data reduces latency (good for L1). Serial access, tag first and data only on a match, reduces power (good for L2 and beyond). [Figure: in the serial organization, the valid-and-tag-match result enables the data array.]

Physically-Indexed Caches. Assume 8KB pages and 512 cache sets: a 13-bit page offset and a 9-bit cache index. Core requests are VAs. The cache index is PA[14:6]; PA[12:6] == VA[12:6] because those bits lie within the page offset, but PA[14:13] must come from the TLB. The cache tag is PA[63:15]. The VA therefore passes through the D-TLB on the critical path. If the index falls completely within the page offset, the VA alone can be used for indexing. [Figure: virtual page[63:13] is translated by the D-TLB into the physical tag (the higher bits of the physical page number); page offset[12:0] supplies physical index[6:0]; the remaining index bits [8:7] come from the low bits of the physical page number.] Simple, but slow. Can we do better?

Virtually-Indexed Caches. Core requests are VAs. The cache index is VA[14:6]; the cache tag is PA[63:13]. Why not PA[63:15]? Why not tag with the VA? Because a VA does not uniquely determine the memory location, and we would need a cache flush on every context switch. [Figure: the address splits into index[14:6] and block offset[5:0]; the D-TLB translation of virtual page[63:13] into the physical tag proceeds in parallel with indexing using virtual index[8:0] from the VA.]

Virtually-Indexed Caches. Main problem: virtual aliases, i.e., different virtual addresses for the same physical location may map to different sets in the cache. Solution: ensure aliases don't exist by invalidating all of them when a miss happens. If the page offset is p bits, the block offset is b bits, and the index is m bits, an alias might exist in any of 2^(m-(p-b)) sets: the low p-b index bits come from the page offset and are the same for all aliases, while the upper m-(p-b) index bits come from the page number and can differ. Search all those sets and remove aliases (an alias has the same physical tag). Fast, but complicated. [Figure: the p-b index bits taken from the page offset are the same in VA1 and VA2; the m-(p-b) index bits taken from the page number can differ between VA1 and VA2.]
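
Plugging in this cache's parameters (8KB pages so p = 13, 64-byte blocks so b = 6, 512 sets so m = 9), a one-liner for the alias arithmetic:

    # Sketch: how many sets may hold a virtual alias of a given block.
    p, b, m = 13, 6, 9                  # page-offset, block-offset, and index bits
    alias_sets = 2 ** (m - (p - b))     # upper m-(p-b) index bits come from the page number
    print(alias_sets)                   # 4: a miss must search/invalidate 4 sets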

Multiple Accesses per Cycle. We need high-bandwidth access to caches: a core can make multiple access requests per cycle, and multiple cores can access the LLC at the same time. We must either delay some requests, design the SRAM with multiple ports (big and power-hungry), or split the SRAM into multiple banks (can cause delays on conflicts, but usually doesn't).

Multi-Ported SRAMs. [Figure: a two-ported SRAM cell with one wordline per port and a bitline pair per port.] Wordlines: 1 per port. Bitlines: 2 per port. Area grows as O(ports²).

Multi-Porting vs. Banking. [Figure: one SRAM array with 4 ports vs. four single-ported banks with column muxing and sense amps.] A 4-ported array is big (and slow) but guarantees concurrent access. Four banks with 1 port each keep each bank small (and fast), but conflicts (delays) are possible. How do we decide which bank an access goes to?

Bank Conflicts. Banks are address-interleaved: for a cache with block size b and N banks, bank = (address / b) % N. This looks more complicated than it is: the bank number is just the low-order bits of the index. [Figure: without banking the address splits into tag / index / offset; with banking it splits into tag / index / bank / offset.] Banking can provide high bandwidth, but only if accesses go to different banks; for 4 banks and 2 accesses, the chance of a conflict is 25%.
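
A small sketch (assumed parameters: 64-byte blocks, 4 banks) of the interleaving formula, plus a quick Monte Carlo check of the 25% conflict figure for two independent accesses:

    # Sketch: address-interleaved bank selection and the 2-access conflict rate.
    import random

    BLOCK_SIZE, NUM_BANKS = 64, 4

    def bank_of(addr):
        return (addr // BLOCK_SIZE) % NUM_BANKS   # low-order index bits pick the bank

    trials = 100000
    conflicts = sum(bank_of(random.randrange(1 << 20)) ==
                    bank_of(random.randrange(1 << 20)) for _ in range(trials))
    print("conflict rate ~ %.1f%%" % (100.0 * conflicts / trials))  # ~25% for 4 banks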

Write Policies. Writes are more interesting than reads: on a read, tag and data can be accessed in parallel, while a write needs two steps (check the tag, then update the data). Is access time as important for writes? Choices of write policies: On a write hit, update memory? Yes: write-through (uses more bandwidth). No: write-back (uses dirty bits to identify blocks that must be written back). On a write miss, allocate a cache block frame? Yes: write-allocate. No: no-write-allocate.
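
A toy sketch (hypothetical helpers, not a real controller) contrasting the two write-hit policies: write-through updates memory on every write, while write-back only sets a dirty bit and writes memory when the block is evicted:

    # Sketch: write-through vs. write-back handling of a write hit.
    def write_hit(block, data, memory, write_through):
        block["data"] = data
        if write_through:
            memory[block["addr"]] = data           # write-through: memory updated now
        else:
            block["dirty"] = True                  # write-back: defer until eviction

    def evict(block, memory):
        if block["dirty"]:
            memory[block["addr"]] = block["data"]  # write back dirty data
            block["dirty"] = False

    memory = {0x100: 0}
    blk = {"addr": 0x100, "data": 0, "dirty": False}
    write_hit(blk, 42, memory, write_through=False)
    print(memory[0x100])   # still 0: write-back has deferred the update
    evict(blk, memory)
    print(memory[0x100])   # 42 once the dirty block is evicted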

Inclusion. The core often accesses blocks not present in any cache: should the block be allocated in L3, L2, and L1? If yes, the caches are called inclusive; this wastes space and requires forced evictions (e.g., evicting a block from L2+ forces it out of L1). If blocks are allocated only in L1, the caches are called non-inclusive (why not exclusive?). Some processors combine both: L3 is inclusive of L1 and L2, while L2 is non-inclusive of L1 (acting like a large victim cache).