Cache Memories

Cache memories are small, fast SRAM-based memories managed automatically in hardware. They hold frequently accessed blocks of main memory. The CPU looks first for data in the caches (e.g., L1, L2, and L3), then in main memory.

Typical system structure: the CPU chip holds the register file, the ALU, the cache memories, and a bus interface; the bus interface connects over the system bus to an I/O bridge, which connects over the memory bus to main memory.

General Cache Organization (S, E, B)

A cache is an array of S = 2^s sets. Each set contains E = 2^e lines. Each line holds a valid bit, a tag, and a cache block of B = 2^b bytes (the data).

Cache size: C = S x E x B data bytes.
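As a concrete illustration (a sketch of my own; the values of S, E, and B are arbitrary choices, not taken from the slide), this C program derives the cache size and the address-bit widths from the three parameters:

```c
#include <stdio.h>

/* Hypothetical example parameters: S sets, E lines/set, B bytes/block. */
#define S 64   /* S = 2^s  ->  s = 6 */
#define E 8
#define B 64   /* B = 2^b  ->  b = 6 */

/* log2 of an exact power of two */
static int log2u(unsigned x) {
    int n = 0;
    while (x > 1) { x >>= 1; n++; }
    return n;
}

int main(void) {
    printf("cache size C = S x E x B = %d bytes\n", S * E * B);
    printf("set index bits    s = %d\n", log2u(S));
    printf("block offset bits b = %d\n", log2u(B));
    return 0;
}
```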

Cache Read

The address of the requested word is divided into three fields: t tag bits, s set index bits, and b block offset bits. To service a read, the cache:

1. Locates the set using the s set index bits.
2. Checks whether any of the E lines in that set has a tag matching the address's tag bits.
3. If a line matches and its valid bit is set: hit.
4. Locates the data within the B = 2^b-byte block, starting at the block offset.
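A minimal C sketch (my own illustration; the 6-bit field widths are assumptions, not slide values) of how the three fields are extracted from an address:

```c
#include <stdint.h>
#include <stdio.h>

/* Assumed field widths: s = 6 set-index bits, b = 6 block-offset bits
 * (e.g., 64 sets of 64-byte blocks). */
#define S_BITS 6
#define B_BITS 6

int main(void) {
    uint64_t addr = 0x7f3a2c54;  /* arbitrary example address */

    uint64_t offset = addr & ((1ULL << B_BITS) - 1);             /* low b bits  */
    uint64_t set    = (addr >> B_BITS) & ((1ULL << S_BITS) - 1); /* next s bits */
    uint64_t tag    = addr >> (B_BITS + S_BITS);                 /* remaining t */

    printf("addr 0x%llx -> tag 0x%llx, set %llu, offset %llu\n",
           (unsigned long long)addr, (unsigned long long)tag,
           (unsigned long long)set, (unsigned long long)offset);
    return 0;
}
```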

Example: Direct-Mapped Cache (E = 1)

Direct mapped means one line per set. Assume a cache block size of 8 bytes. Given the address of an int with fields t bits | 0...01 | 100:

1. Find the set: the set index bits (01) select one of the S = 2^s sets.
2. Check the line: if the valid bit is set and the stored tag matches the address's tag bits, the access is a hit.
3. Locate the data: on a hit, the block offset (100 = byte 4) gives the position of the int (4 bytes) within the 8-byte block.
4. No match: the old line is evicted and replaced by the newly fetched block.

Direct-Mapped Cache Simulation

Parameters: M = 16 byte addresses, B = 2 bytes/block, S = 4 sets, E = 1 block/set. Address fields: t = 1 tag bit, s = 2 set-index bits, b = 1 block-offset bit.

Address trace (reads, one byte per read):
  0  [0000₂]  miss
  1  [0001₂]  hit
  7  [0111₂]  miss
  8  [1000₂]  miss, evicts M[0-1] from set 0
  12 [1100₂]  miss

Final cache state:
  Set 0: v=1, tag=1, block = M[8-9]
  Set 1: v=0 (empty)
  Set 2: v=1, tag=1, block = M[12-13]
  Set 3: v=1, tag=0, block = M[6-7]
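This trace is small enough to replay in code. A minimal direct-mapped simulator (my own sketch, reproducing exactly the parameters and trace above):

```c
#include <stdio.h>

/* Tiny direct-mapped simulator: 4-bit addresses, B = 2 bytes/block,
 * S = 4 sets, E = 1 line/set. */
#define S 4
#define B 2

struct line { int valid; unsigned tag; };

int main(void) {
    struct line cache[S] = {{0, 0}};
    unsigned trace[] = {0, 1, 7, 8, 12};

    for (int i = 0; i < 5; i++) {
        unsigned addr = trace[i];
        unsigned set = (addr / B) % S;   /* middle s = 2 bits */
        unsigned tag = addr / (B * S);   /* top t = 1 bit     */

        if (cache[set].valid && cache[set].tag == tag) {
            printf("%2u: hit\n", addr);
        } else {
            printf("%2u: miss%s\n", addr,
                   cache[set].valid ? " (evict)" : "");
            cache[set].valid = 1;        /* install fetched block */
            cache[set].tag   = tag;
        }
    }
    return 0;
}
```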

E-way Set-Associative Cache (Here: E = 2)

E = 2 means two lines per set. Assume a cache block size of 8 bytes. Given the address of a short int with fields t bits | 0...01 | 100:

1. Find the set: the set index bits select one set, just as in the direct-mapped case.
2. Compare both lines: check each line's valid bit and compare its tag with the address's tag bits; valid + match = hit.
3. Locate the data: on a hit, the block offset gives the position of the short int (2 bytes) within the block.
4. No match: one line in the set is selected for eviction and replacement. Replacement policies include random and least recently used (LRU); a lookup sketch with LRU follows.
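A minimal sketch (not from the slides) of a set-associative lookup with LRU replacement, using a per-line timestamp as the recency record:

```c
#include <stdint.h>

#define E 2  /* lines per set, as in the example above */

struct line {
    int      valid;
    uint64_t tag;
    uint64_t last_used;  /* timestamp for LRU */
};

/* Look up `tag` in one set of E lines; returns 1 on hit, 0 on miss. */
static int access_set(struct line set[E], uint64_t tag, uint64_t now) {
    /* Check every line in the set (compare both tags when E = 2). */
    for (int i = 0; i < E; i++) {
        if (set[i].valid && set[i].tag == tag) {
            set[i].last_used = now;   /* hit: refresh recency */
            return 1;
        }
    }
    /* Miss: replace the least recently used line (invalid lines,
     * zero-initialized, naturally sort as oldest). */
    int victim = 0;
    for (int i = 1; i < E; i++)
        if (set[i].last_used < set[victim].last_used)
            victim = i;
    set[victim].valid = 1;
    set[victim].tag = tag;
    set[victim].last_used = now;
    return 0;
}
```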

2-Way Set-Associative Cache Simulation

Parameters: M = 16 byte addresses, B = 2 bytes/block, S = 2 sets, E = 2 blocks/set. Address fields: t = 2 tag bits, s = 1 set-index bit, b = 1 block-offset bit.

Address trace (reads, one byte per read):
  0  [0000₂]  miss
  1  [0001₂]  hit
  7  [0111₂]  miss
  8  [1000₂]  miss
  14 [1110₂]  miss

Final cache state:
  Set 0: v=1, tag=00, block = M[0-1]; v=1, tag=10, block = M[8-9]
  Set 1: v=1, tag=01, block = M[6-7]; v=1, tag=11, block = M[14-15]

Note that, unlike in the direct-mapped simulation, the read of address 8 does not evict M[0-1]: both blocks map to set 0, but the set now holds two lines.

What About Writes?

Multiple copies of the data exist: L1, L2, main memory, disk.

What to do on a write-hit?
- Write-through: write immediately to memory.
- Write-back: defer the write to memory until the line is replaced. Requires a dirty bit recording whether the line differs from memory.

What to do on a write-miss?
- Write-allocate: load the block into the cache, then update the line in the cache. Good if more writes to the location follow.
- No-write-allocate: write immediately to memory, bypassing the cache.

Typical combinations: write-through + no-write-allocate, or write-back + write-allocate (sketched below).
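A sketch (my own illustration; mem_read_block and mem_write_block are hypothetical stubs standing in for the next level of the hierarchy) of how the write-back + write-allocate combination behaves on a write:

```c
#include <stdint.h>
#include <string.h>

#define B 64  /* assumed block size in bytes */

struct line {
    int      valid;
    int      dirty;      /* set when the line differs from memory */
    uint64_t tag;
    uint8_t  data[B];
};

/* Hypothetical stubs for the next memory level. */
static void mem_read_block(uint64_t tag, uint64_t set, uint8_t *dst) {
    (void)tag; (void)set; memset(dst, 0, B);
}
static void mem_write_block(uint64_t tag, uint64_t set, const uint8_t *src) {
    (void)tag; (void)set; (void)src;
}

/* Write-back + write-allocate: a write miss first loads the block into
 * the cache; the deferred memory write happens only when a dirty line
 * is evicted. */
void write_byte(struct line *ln, uint64_t tag, uint64_t set,
                unsigned offset, uint8_t value) {
    if (!(ln->valid && ln->tag == tag)) {             /* write miss     */
        if (ln->valid && ln->dirty)
            mem_write_block(ln->tag, set, ln->data);  /* flush victim   */
        mem_read_block(tag, set, ln->data);           /* write-allocate */
        ln->valid = 1;
        ln->dirty = 0;
        ln->tag   = tag;
    }
    ln->data[offset] = value;   /* update the line in the cache */
    ln->dirty = 1;              /* defer the write to memory    */
}
```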

Intel Core i7 Cache Hierarchy

Each core in the processor package has its own registers, L1 d-cache, L1 i-cache, and unified L2 cache; a unified L3 cache is shared by all cores and sits between them and main memory.

- L1 i-cache and d-cache: 32 KB each (i-cache 4-way, d-cache 8-way), access: 4 cycles
- L2 unified cache: 256 KB, 8-way, access: 11 cycles
- L3 unified cache: 8 MB, 16-way, access: 30-40 cycles
- Block size: 64 bytes for all caches

Cache Performance Metrics

Miss rate: fraction of memory references not found in the cache (misses / accesses) = 1 - hit rate. Typical numbers: 3-10% for L1; can be quite small (e.g., < 1%) for L2, depending on size, etc.

Hit time: time to deliver a line in the cache to the processor, including the time to determine whether the line is in the cache. Typical numbers: 1-2 clock cycles for L1, 5-20 clock cycles for L2.

Miss penalty: additional time required because of a miss; typically 50-200 cycles for main memory (trend: increasing!).
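These metrics combine into the standard average memory access time (AMAT) formula, which the arithmetic on the next slide uses implicitly:

```latex
\mathrm{AMAT} = \text{hit time} + \text{miss rate} \times \text{miss penalty}
```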

Let's Think About Those Numbers

There is a huge difference between a hit and a miss: it could be 100x if there is just an L1 and main memory.

Would you believe 99% hits is twice as good as 97%? Consider a cache hit time of 1 cycle and a miss penalty of 100 cycles. Average access time:
- 97% hits: 1 cycle + 0.03 * 100 cycles = 4 cycles
- 99% hits: 1 cycle + 0.01 * 100 cycles = 2 cycles

This is why miss rate is used instead of hit rate.
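A quick check of that arithmetic in C (my own illustration):

```c
#include <stdio.h>

/* AMAT = hit_time + miss_rate * miss_penalty */
static double amat(double hit_time, double miss_rate, double miss_penalty) {
    return hit_time + miss_rate * miss_penalty;
}

int main(void) {
    printf("97%% hits: %.0f cycles\n", amat(1.0, 0.03, 100.0));  /* 4 */
    printf("99%% hits: %.0f cycles\n", amat(1.0, 0.01, 100.0));  /* 2 */
    return 0;
}
```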

Writing Cache-Friendly Code

- Make the common case go fast: focus on the inner loops of the core functions.
- Minimize the misses in the inner loops: repeated references to variables are good (temporal locality), and stride-1 reference patterns are good (spatial locality). See the traversal sketch below.

Key idea: our qualitative notion of locality is quantified through our understanding of cache memories.
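A standard illustration (mine, not on the slide): both functions below sum the same array, but the first visits memory with stride 1 while the second strides by N elements, so it uses only one element of each loaded block before moving on:

```c
#define N 1024

/* Stride-1 (cache-friendly): visits a[i][j] in memory order, so every
 * byte of each loaded block is used before the block is evicted. */
long sum_rowmajor(int a[N][N]) {
    long sum = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}

/* Stride-N (cache-unfriendly): jumps N * sizeof(int) bytes per access,
 * touching one int of each loaded block and missing far more often
 * once a column's worth of blocks no longer fits in the cache. */
long sum_colmajor(int a[N][N]) {
    long sum = 0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += a[i][j];
    return sum;
}
```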

Concluding Observations

The programmer can optimize for cache performance:
- How data structures are organized.
- How data are accessed: nested loop structure; blocking is a general technique (sketched below).

All systems favor cache-friendly code:
- Getting absolute optimum performance is very platform-specific (cache sizes, line sizes, associativities, etc.).
- You can get most of the advantage with generic code: keep the working set reasonably small (temporal locality) and use small strides (spatial locality).
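As a sketch of blocking (my own illustration; the tile size BS is an assumption to be tuned per platform), here is a blocked matrix multiply in which each (BS x BS) tile of a, b, and c is reused many times while it resides in the cache:

```c
#define N  512
#define BS 32  /* assumed tile size; tune to the target cache */

/* Blocked matrix multiply: c += a * b, with N divisible by BS.
 * The three outer loops walk over tiles; the three inner loops
 * multiply one tile pair while its data stays cache-resident. */
void matmul_blocked(double a[N][N], double b[N][N], double c[N][N]) {
    for (int ii = 0; ii < N; ii += BS)
        for (int jj = 0; jj < N; jj += BS)
            for (int kk = 0; kk < N; kk += BS)
                for (int i = ii; i < ii + BS; i++)
                    for (int k = kk; k < kk + BS; k++) {
                        double aik = a[i][k];
                        for (int j = jj; j < jj + BS; j++)
                            c[i][j] += aik * b[k][j];
                    }
}
```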