Memory Hierarchy: Cache Memory Organization and Access

Memory Hierarchy
Computer Systems Organization (Spring 2017), CSCI-UA 201, Section 3
Cache Memory Organization and Access
Instructor: Joanna Klukowska
Slides adapted from Randal E. Bryant and David R. O'Hallaron (CMU) and Mohamed Zahran (NYU)

Example Memory Hierarchy
Smaller, faster, and costlier (per byte) storage devices sit at the top; larger, slower, and cheaper (per byte) storage devices sit below:
L0: Registers. CPU registers hold words retrieved from the L1 cache.
L1: L1 cache (SRAM). Holds cache lines retrieved from the L2 cache.
L2: L2 cache (SRAM). Holds cache lines retrieved from the L3 cache.
L3: L3 cache (SRAM). Holds cache lines retrieved from main memory.
L4: Main memory (DRAM). Holds disk blocks retrieved from local disks.
L5: Local secondary storage (local disks). Holds files retrieved from disks on remote servers.
L6: Remote secondary storage (e.g., Web servers).

General Cache Concept
[Figure: a small, fast cache holding a few blocks of a larger, slower memory that is partitioned into blocks 0-15; block-sized chunks are copied between the two levels.]
Data is copied between levels in block-sized transfer units. The smaller, faster, more expensive memory caches a subset of the blocks; the larger, slower, cheaper memory is viewed as partitioned into blocks.

Cache Memories
Cache memories are small, fast SRAM-based memories managed automatically in hardware. They hold frequently accessed blocks of main memory, and the CPU looks first for data in the cache. Typical system structure: the CPU chip contains the register file, ALU, and cache memory; a bus interface connects it through the system bus and I/O bridge to the memory bus and main memory.

General Cache Organization (S, E, B)
A cache is an array of S = 2^s sets. Each set contains E = 2^e lines. Each line holds a valid bit, a tag, and a block of B = 2^b data bytes. Cache size: C = S x E x B data bytes.

Cache Read
The address of a word is divided into t tag bits, s set-index bits, and b block-offset bits. To read:
1. Locate the set using the set-index bits.
2. Check whether any line in the set has a matching tag.
3. If yes, and the line is valid: hit. Locate the data starting at the block offset.

Example: Direct-Mapped Cache (E = 1)
Direct mapped means one line per set. The s set-index bits select the set ("find set"), and the data begins at the block offset within that line.
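To make the tag/set-index/offset split used in these lookups concrete, here is a minimal C sketch (written for these notes, not from the slides; the s and b values are illustrative) that decomposes an address:

#include <stdint.h>
#include <stdio.h>

/* Illustrative parameters: S = 2^s sets, B = 2^b bytes per block. */
enum { S_BITS = 6, B_BITS = 6 };   /* 64 sets, 64-byte blocks */

int main(void) {
    uint64_t addr = 0x7ffe12345678ULL;   /* an arbitrary example address */

    uint64_t offset = addr & ((1ULL << B_BITS) - 1);             /* low b bits       */
    uint64_t set    = (addr >> B_BITS) & ((1ULL << S_BITS) - 1); /* next s bits      */
    uint64_t tag    = addr >> (S_BITS + B_BITS);                 /* remaining t bits */

    printf("tag=0x%llx set=%llu offset=%llu\n",
           (unsigned long long)tag, (unsigned long long)set,
           (unsigned long long)offset);
    return 0;
}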

Example: Direct-Mapped Cache (E = 1), continued
valid? + match: assume yes = hit. The block offset then locates the data: the int (4 bytes) is found at that offset within the block. If the tag doesn't match (a miss): the old line is evicted and replaced by the block fetched from the next level.

Example: Direct-Mapped Cache Simulation
Parameters: t=1, s=2, b=1. M = 16 bytes (4-bit addresses), B = 2 bytes/block, S = 4 sets, E = 1 block/set.
Address trace (reads, one byte per read):
0 [0000₂]  miss
1 [0001₂]  hit
7 [0111₂]  miss
8 [1000₂]  miss
0 [0000₂]  miss
Final cache state:
Set 0: v=1, Tag=0, Block=M[0-1]  (M[8-9] was loaded by the read of 8, then evicted by the final read of 0)
Set 1: empty
Set 2: empty
Set 3: v=1, Tag=0, Block=M[6-7]

E-way Set Associative Cache (Here: E = 2)
E = 2 means two lines per set. The set-index bits of the address (here, of a short int) select the set ("find set").
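The direct-mapped simulation above can be replayed with a few lines of C (a sketch written for these notes, not from the slides):

#include <stdio.h>

/* Direct-mapped cache: S = 4 sets, E = 1 line/set, B = 2 bytes/block,
 * 4-bit addresses split as t=1, s=2, b=1. */
struct line { int valid; int tag; };

int main(void) {
    struct line cache[4] = {{0}};
    int trace[] = {0, 1, 7, 8, 0};     /* the address trace from the slide */

    for (int i = 0; i < 5; i++) {
        int addr = trace[i];
        int set = (addr >> 1) & 0x3;   /* bits 2..1: set index */
        int tag = addr >> 3;           /* bit 3: tag */
        if (cache[set].valid && cache[set].tag == tag) {
            printf("addr %d: hit\n", addr);
        } else {
            printf("addr %d: miss\n", addr);
            cache[set].valid = 1;      /* load (or replace) the block */
            cache[set].tag = tag;
        }
    }
    return 0;
}

Running it prints miss, hit, miss, miss, miss: exactly the trace above, including the conflict between addresses 0 and 8, which map to the same set.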

E-way Set Associative Cache (Here: E = 2), continued
With two lines per set, the hardware compares the tag against both lines in parallel. valid? + match: yes = hit; the block offset then locates the data (the short int, 2 bytes, is found at that offset). No match: one line in the set is selected for eviction and replacement. Replacement policies include random and least recently used (LRU).

Example: 2-Way Set Associative Cache Simulation
Parameters: t=2, s=1, b=1. M = 16 bytes (4-bit addresses), B = 2 bytes/block, S = 2 sets, E = 2 blocks/set.
Address trace (reads, one byte per read):
0 [0000₂]  miss
1 [0001₂]  hit
7 [0111₂]  miss
8 [1000₂]  miss
0 [0000₂]  hit
Final cache state:
Set 0: line 0: v=1, Tag=00, Block=M[0-1];  line 1: v=1, Tag=10, Block=M[8-9]
Set 1: line 0: v=1, Tag=01, Block=M[6-7];  line 1: v=0 (empty)
Unlike the direct-mapped cache, the final read of 0 hits: M[0-1] and M[8-9] map to the same set but can coexist in its two lines.

What about writes?
Multiple copies of the data exist: L1, L2, L3, main memory, disk.
What to do on a write-hit?
- Write-through: write immediately to memory.
- Write-back: defer the write to memory until the line is replaced. Needs a dirty bit (is the line different from memory or not?).
What to do on a write-miss?
- Write-allocate: load the block into the cache, update the line in the cache. Good if more writes to the location follow.
- No-write-allocate: write straight to memory; do not load into the cache.
Typical combinations: write-through + no-write-allocate, or write-back + write-allocate.
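The same trace against the 2-way set-associative organization can also be sketched in C (again written for these notes; the LRU bookkeeping is simplified for two lines per set):

#include <stdio.h>

/* 2-way set-associative cache: S = 2 sets, E = 2 lines/set, B = 2 bytes/block,
 * 4-bit addresses split as t=2, s=1, b=1. LRU replacement. */
struct line { int valid; int tag; int age; /* higher = less recently used */ };

int main(void) {
    struct line cache[2][2] = {{{0}}};
    int trace[] = {0, 1, 7, 8, 0};

    for (int i = 0; i < 5; i++) {
        int addr = trace[i];
        int set = (addr >> 1) & 0x1;   /* bit 1: set index */
        int tag = addr >> 2;           /* bits 3..2: tag */
        int e = -1;
        for (int j = 0; j < 2; j++)    /* compare both lines in the set */
            if (cache[set][j].valid && cache[set][j].tag == tag)
                e = j;
        if (e < 0) {                   /* miss: replace the older line */
            e = (cache[set][0].age >= cache[set][1].age) ? 0 : 1;
            cache[set][e].valid = 1;
            cache[set][e].tag = tag;
            printf("addr %d: miss\n", addr);
        } else {
            printf("addr %d: hit\n", addr);
        }
        cache[set][e].age = 0;         /* mark most recently used */
        cache[set][1 - e].age++;
    }
    return 0;
}

This prints miss, hit, miss, miss, hit: the final read of address 0 now hits, matching the simulation above.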

Intel Core i7 Cache Hierarchy Example
Each core in the processor package has its own registers, L1 d-cache, L1 i-cache, and L2 unified cache; an L3 unified cache is shared by all cores and backed by main memory.
- L1 i-cache and d-cache: 32 KB, 8-way, access: 4 cycles
- L2 unified cache: 256 KB, 8-way, access: 10 cycles
- L3 unified cache: 8 MB, 16-way, access: 40-75 cycles
- Block size: 64 bytes for all caches

Cache Performance Metrics
- Miss rate: fraction of memory references not found in the cache (misses / accesses) = 1 - hit rate. Typical numbers: 3-10% for L1; can be very small (e.g., < 1%) for L2, depending on size, etc.
- Hit time: time to deliver a line in the cache to the processor, including the time to determine whether the line is in the cache. Typical numbers: 4 clock cycles for L1, 10 clock cycles for L2.
- Miss penalty: additional time required because of a miss. Typically 50-200 cycles for main memory. Ouch!

Let's think about those numbers
There is a huge difference between a hit and a miss: it could be 100x, considering just L1 and main memory. Would you believe that 99% hits is twice as good as 97%? Consider a cache hit time of 1 cycle and a miss penalty of 100 cycles. Average access time:
- 97% hits: 0.97 * 1 cycle + 0.03 * 100 cycles ≈ 1 cycle + 3 cycles = 4 cycles
- 99% hits: 0.99 * 1 cycle + 0.01 * 100 cycles ≈ 1 cycle + 1 cycle = 2 cycles
This is why miss rate is used instead of hit rate.

Writing Cache Friendly Code
Make the common case go fast: focus on the inner loops of the core functions. Minimize the misses in the inner loops:
- Repeated references to variables are good (temporal locality) because there is a good chance they are stored in registers.
- Stride-1 reference patterns are good (spatial locality) because subsequent references to elements in the same block will hit in the cache (one cache miss followed by many cache hits).
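To see the stride-1 effect directly, here is a small self-contained benchmark sketch (written for these notes; N and the measured times are machine-dependent) that sums the same matrix row-by-row and then column-by-column:

#include <stdio.h>
#include <time.h>

#define N 4096
static double a[N][N];   /* ~128 MB, zero-initialized */

int main(void) {
    double sum;
    clock_t start;

    /* Row-major (stride-1) traversal: good spatial locality. */
    sum = 0.0;
    start = clock();
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];
    printf("row-major:    %.2f s (sum=%g)\n",
           (double)(clock() - start) / CLOCKS_PER_SEC, sum);

    /* Column-major (stride-N) traversal: poor spatial locality. */
    sum = 0.0;
    start = clock();
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += a[i][j];
    printf("column-major: %.2f s (sum=%g)\n",
           (double)(clock() - start) / CLOCKS_PER_SEC, sum);
    return 0;
}

On typical hardware the column-major loop is several times slower, even though both loops perform exactly the same additions.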

Matrix Multiplication Example
Description: multiply N x N matrices, where matrix elements are doubles (8 bytes). O(N^3) total operations; N reads per source element; N values summed per destination (but the sum may be held in a register). The variable sum is held in a register:

/* ijk */
for (i = 0; i < n; i++) {
    for (j = 0; j < n; j++) {
        sum = 0.0;
        for (k = 0; k < n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

Miss Rate Analysis for Matrix Multiply
Assume:
- Block size = 32 B (big enough for four doubles).
- Matrix dimension N is very large: approximate 1/N as 0.0.
- The cache is not even big enough to hold multiple rows.
Analysis method: look at the access pattern of the inner loop.
[Figure: C = A x B; row i of A combines with column j of B to produce element (i, j) of C.]

Layout of C Arrays in Memory (review)
C arrays are allocated in row-major order: each row occupies contiguous memory locations.
Stepping through columns in one row:
    for (i = 0; i < N; i++)
        sum += a[0][i];
accesses successive elements. If the block size B > sizeof(a_ij) bytes, we exploit spatial locality: miss rate = sizeof(a_ij) / B.
Stepping through rows in one column:
    for (i = 0; i < n; i++)
        sum += a[i][0];
accesses distant elements: no spatial locality! Miss rate = 1 (i.e., 100%).
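Putting these two observations together for the ijk inner loop (sum += a[i][k] * b[k][j]) under the stated assumptions (B = 32 bytes, sizeof(double) = 8) gives the per-iteration miss counts; this worked calculation is reconstructed here for clarity:

    a[i][k]: row-wise, stride-1      -> 8/32 = 0.25 misses per iteration
    b[k][j]: column-wise, stride-N   -> 1.0 misses per iteration
    c[i][j]: fixed, held in register -> 0.0 misses per iteration
    total: roughly 1.25 misses per inner-loop iteration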

Matrix Multiplication (ijk)

/* ijk */
for (i = 0; i < n; i++) {
    for (j = 0; j < n; j++) {
        sum = 0.0;
        for (k = 0; k < n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

Inner loop access patterns: A is accessed row-wise (row i, entries (i, *)); B is accessed column-wise (column j, entries (*, j)); C is fixed at (i, j). We can reorganize the loops in several different ways. Which organization gives the best cache performance?

ijk (& jik): as above.

kij (& ikj):
for (k = 0; k < n; k++) {
    for (i = 0; i < n; i++) {
        r = a[i][k];
        for (j = 0; j < n; j++)
            c[i][j] += r * b[k][j];
    }
}

jki (& kji):
for (j = 0; j < n; j++) {
    for (k = 0; k < n; k++) {
        r = b[k][j];
        for (i = 0; i < n; i++)
            c[i][j] += a[i][k] * r;
    }
}

Measuring Cache Misses
valgrind with cachegrind, a cache simulator, reports data-cache read misses (D1mr) and write misses (D1mw) per function.

Simulated L1 cache = 1024 B, 16-way associative, 32 B cache line:
    D1mr           D1mw  function
    2,977,062,519  0     ijk
    4,004,000,016  0     jki
    1,253,500,008  0     jik
    750,750,003    0     kji
    252,004,004    12    ikj
    252,004,004    12    kij

Simulated L1 cache = 1024 B, 4-way associative, 32 B cache line:
    D1mr           D1mw  function
    3,133,750,020  0     ijk
    5,005,000,020  0     jki
    5,005,000,020  0     kji
    2,820,375,018  0     jik
    189,003,003    9     ikj
    189,003,003    9     kij

Core i7 Matrix Multiply Performance
[Figure: cycles per inner-loop iteration vs. array size on a Core i7, with curves for the jki/kji, ijk/jik, and kij/ikj loop orderings.]
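As a usage sketch (the program name matmul is hypothetical; the flags are standard cachegrind options), the 16-way configuration above corresponds to a run such as:

    valgrind --tool=cachegrind --D1=1024,16,32 ./matmul
    cg_annotate cachegrind.out.<pid>

cg_annotate then reports D1mr and D1mw per function, as in the tables above.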

Learn about your machine's cache
- lshw (list hardware information): sudo lshw -C memory
- lscpu (display information about the CPU architecture): lscpu
- dmidecode: sudo dmidecode -t cache
Note: some of these may not work well in a virtual machine environment.

Cache Summary
Cache memories can have a significant performance impact, and you can write your programs to exploit this!
- Focus on the inner loops, where the bulk of the computation and memory accesses occur.
- Try to maximize spatial locality by reading data objects sequentially, with stride 1.
- Try to maximize temporal locality by using a data object as often as possible once it's read from memory.