EEC 170 Computer Architecture, Fall 2005: Cache Introduction Review


EEC 170 Computer Architecture, Fall 2005: Cache Introduction Review

Review: The Memory Hierarchy

Take advantage of the principle of locality to present the user with as much memory as is available in the cheapest technology, at the speed offered by the fastest technology. Moving down the hierarchy, the distance from the processor increases in access time, as does the (relative) size of the memory at each level:

- Processor <-> L1$: 4-8 bytes (word)
- L1$ <-> L2$: 8-32 bytes (block)
- L2$ <-> Main Memory: 1 to 4 blocks
- Main Memory <-> Secondary Memory: 1,024+ bytes (disk sector = page)

The hierarchy is inclusive: what is in L1$ is a subset of what is in L2$, which is a subset of what is in main memory, which is a subset of what is in secondary memory.

Courtesy of Prof. Mary Jane Irwin (Penn State University)

Typical Reference Patterns

[Figure: memory address vs. time for typical programs. Instruction fetches step through loop iterations and linear sequences; stack accesses and data accesses cluster around nearby addresses.]

The Memory Hierarchy: Why Does it Work?

- Temporal Locality (Locality in Time): keep most recently accessed data items closer to the processor.
- Spatial Locality (Locality in Space): move blocks consisting of contiguous words to the upper levels. (A short C sketch at the end of this review section makes both kinds of locality concrete.)

[Figure: processor exchanging block Blk X with the upper level and block Blk Y with the lower level.]

The Memory Hierarchy: Terminology

- Hit: the data appears in some block in the upper level (Blk X).
  - Hit Rate: the fraction of memory accesses found in the upper level.
  - Hit Time: time to access the upper level, consisting of RAM access time + time to determine hit/miss.
- Miss: the data needs to be retrieved from a block in the lower level (Blk Y).
  - Miss Rate = 1 - (Hit Rate)
  - Miss Penalty: time to replace a block in the upper level + time to deliver the block to the processor.
- Hit Time << Miss Penalty

How is the Hierarchy Managed?

- registers <-> memory: by the compiler (programmer?)
- cache <-> main memory: by the cache controller hardware
- main memory <-> disks: by the operating system (virtual memory) and by the programmer (files); virtual-to-physical address mapping assisted by the hardware (TLB)
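Not from the slides: a minimal C sketch of the two kinds of locality (the array size N is an arbitrary illustrative choice). The row-major traversal exhibits spatial locality; the loop variables exhibit temporal locality:

    #include <stdio.h>

    #define N 512                /* illustrative array dimension */

    static int a[N][N];

    int main(void) {
        long sum = 0;
        /* Spatial locality: C stores a[][] row-major, so a[i][j] and
           a[i][j+1] are adjacent in memory; each cache block fetched
           on a miss supplies the next several iterations. */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                sum += a[i][j];
        /* Temporal locality: i, j, and sum are reused on every
           iteration, so they stay in registers or the closest level. */
        printf("%ld\n", sum);
        return 0;
    }

Swapping the two loops (traversing column by column) touches the same data but strides across rows, wasting most of each fetched block and typically raising the miss rate noticeably.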

Two questions to answer (in hardware):

- Q1: How do we know if a data item is in the cache?
- Q2: If it is, how do we find it?

Direct mapped: for each item of data at the lower level, there is exactly one location in the cache where it might be, so lots of items at the lower level share locations in the upper level.

Address mapping: first consider the case where a block is one word.

Caching: A Simple First Example

Consider a four-block direct-mapped cache (indices 00, 01, 10, 11) in front of a sixteen-block main memory. The two low-order bits of each byte address select the byte in the 32-bit word, so the memory blocks are 0000xx, 0001xx, 0010xx, ..., 1111xx.

- Q1: Is it there? Compare the cache tag to the high-order 2 memory address bits to tell if the memory block is in the cache.
- Q2: How do we find it? Use the next 2 low-order memory address bits, the index, to determine which cache block (i.e., the block address modulo the number of blocks in the cache).

Direct Mapped Cache Example

For the word-address reference string 0, 1, 2, 3, 4, 3, 4, 15 on the four-block cache above:

- 0 miss, 1 miss, 2 miss, 3 miss: the first four references are compulsory misses that fill the cache with Mem(0) through Mem(3).
- 4 miss: address 4 maps to index 00, evicting Mem(0); the cache now holds Mem(4), Mem(1), Mem(2), Mem(3).
- 3 hit, 4 hit.
- 15 miss: address 15 maps to index 11, evicting Mem(3).

MIPS Direct Mapped Cache Example

One word/block, cache size = 1K words. The 32-bit byte address divides into:

- Byte offset: bits 1-0 (byte within the word)
- Index: bits 11-2 (10 bits, selecting one of the 1024 cache entries, 0 through 1023)
- Tag: bits 31-12 (20 bits, compared against the stored tag; a valid bit qualifies the comparison, and each data field is 32 bits)

What kind of locality are we taking advantage of?
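A small C sketch of the address split just described (the field widths match the slide's 1K-word, one-word-per-block cache; the variable names are mine):

    #include <stdint.h>
    #include <stdio.h>

    #define OFFSET_BITS 2    /* byte within a 32-bit word     */
    #define INDEX_BITS  10   /* 2^10 = 1024 one-word blocks   */

    int main(void) {
        uint32_t addr = 0x12345678;   /* any 32-bit byte address */

        uint32_t offset = addr & ((1u << OFFSET_BITS) - 1);
        uint32_t index  = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
        uint32_t tag    = addr >> (OFFSET_BITS + INDEX_BITS);

        /* The 20-bit tag is compared against the tag stored at entry
           'index' (qualified by its valid bit) to answer Q1. */
        printf("tag=0x%05x index=%u offset=%u\n",
               (unsigned)tag, (unsigned)index, (unsigned)offset);
        return 0;
    }

As for the locality question: with one word per block, this cache exploits temporal locality only; exploiting spatial locality requires multiword blocks, covered below.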

Handling Cache Hits

- Read hits (I$ and D$): this is what we want!
- Write hits (D$ only), two policies:
  - Allow cache and memory to be inconsistent (write-back): write the data only into the cache block, and write the cache contents back to the next level in the memory hierarchy when that cache block is evicted. This needs a dirty bit for each data cache block to tell whether the block must be written back to memory when it is evicted.
  - Require the cache and memory to be consistent (write-through): always write the data into both the cache block and the next level in the memory hierarchy, so no dirty bit is needed. Writes then run at the speed of the next level in the memory hierarchy, which is slow! Or use a write buffer, a buffer that holds the data waiting to be written to memory, so the processor only has to stall if the write buffer is full.

Write Buffer for Write-Through Caching

A write buffer sits between the cache and main memory (DRAM). The processor writes data into the cache and the write buffer; the memory controller writes the contents of the write buffer to memory. The write buffer is just a FIFO, with a typical depth of 4 entries. This works fine if

  Store frequency (w.r.t. time) << 1 / DRAM write cycle

The memory system designer's nightmare is

  Store frequency (w.r.t. time) ~ 1 / DRAM write cycle

which causes write buffer saturation. One solution is to use a write-back cache; another is to use an L2 cache (next lecture).

Review: Why Pipeline? For Throughput!

[Figure: five instructions, Inst 0 through Inst 4, flowing through the pipeline stages over time in clock cycles.]

To avoid a structural hazard we need two caches on-chip: one for instructions (I$) and one for data (D$). To keep the pipeline running at its maximum rate, both I$ and D$ need to satisfy a request from the datapath every cycle. What happens when they can't do that?

Another Reference String Mapping

Consider the word-address reference string 0, 4, 0, 4, 0, 4, 0, 4 on the four-block direct-mapped cache: 0 miss, 4 miss (evicting Mem(0)), 0 miss (evicting Mem(4)), 4 miss, and so on. Every reference misses. This ping-pong effect is due to conflict misses: two memory locations that map into the same cache block. (The short simulation sketch after the next list reproduces it.)

Sources of Cache Misses

- Compulsory (cold start or process migration, first reference): the first access to a block; a cold fact of life, and not a whole lot you can do about it. If you are going to run billions of instructions, compulsory misses are insignificant.
- Conflict (collision): multiple memory locations mapped to the same cache location. Solution 1: increase cache size. Solution 2: increase associativity.
- Capacity: the cache cannot contain all blocks accessed by the program. Solution: increase cache size.
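The ping-pong behavior is easy to reproduce. This minimal direct-mapped simulation is mine, not from the slides; it models the four-block, one-word-per-block cache on the reference string above:

    #include <stdbool.h>
    #include <stdio.h>

    #define NBLOCKS 4   /* direct mapped, one word per block */

    int main(void) {
        int  tag[NBLOCKS];
        bool valid[NBLOCKS] = { false };
        int  refs[] = { 0, 4, 0, 4, 0, 4, 0, 4 };  /* word addresses */
        int  misses = 0;

        for (int i = 0; i < 8; i++) {
            int idx = refs[i] % NBLOCKS;  /* index = addr mod #blocks */
            int t   = refs[i] / NBLOCKS;  /* remaining high-order bits */
            if (valid[idx] && tag[idx] == t) {
                printf("%d: hit\n", refs[i]);
            } else {
                printf("%d: miss\n", refs[i]);
                valid[idx] = true;        /* evict and refill */
                tag[idx]   = t;
                misses++;
            }
        }
        printf("misses: %d of 8\n", misses);  /* all 8 references miss */
        return 0;
    }

Addresses 0 and 4 both map to index 0 with different tags, so each reference evicts the other's block: every access is a conflict miss, exactly the ping-pong effect.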

Handling Cache Misses

- Read misses (I$ and D$): stall the entire pipeline, fetch the block from the next level in the memory hierarchy, install it in the cache, then restart the pipeline and reissue the read request.
- Write misses (D$ only), three options:
  1. Stall the pipeline, fetch the block from the next level in the memory hierarchy, install it in the cache (which may involve evicting a dirty block if it is a write-back cache), then restart the pipeline and reissue the write request.
  2. Write allocate (normally used in write-back caches): just write the word into the cache, updating both the tag and data; no need to check for a cache hit, no need to stall.
  3. No-write allocate (normally used in write-through caches with a write buffer): skip the cache write and just write the word to the write buffer (and eventually to the next memory level); no need to stall if the write buffer isn't full. The cache block must be invalidated, since it would become inconsistent (on a hit).

Multiword Block Direct Mapped Cache

Four words/block, cache size = 1K words. The 32-bit byte address divides into:

- Byte offset: bits 1-0
- Block offset: bits 3-2 (2 bits, selecting the word within the block)
- Index: bits 11-4 (8 bits, selecting one of the 256 cache entries, 0 through 255)
- Tag: bits 31-12 (20 bits; each data word is 32 bits)

What kind of locality are we taking advantage of?

Taking Advantage of Spatial Locality

Let the cache block hold more than one word. For the word-address reference string 0, 1, 2, 3, 4, 3, 4, 15 on the same-size cache, now organized as two blocks of two words each:

- 0 miss (fetching the block Mem(1)/Mem(0)), 1 hit, 2 miss (fetching Mem(3)/Mem(2)), 3 hit
- 4 miss (Mem(5)/Mem(4) replaces Mem(1)/Mem(0)), 3 hit, 4 hit, 15 miss (Mem(15)/Mem(14) replaces Mem(3)/Mem(2))

Four of the eight references now hit, versus two with one-word blocks.

Miss Rate vs. Block Size vs. Cache Size

[Figure: miss rate (%) vs. block size (8 to 256 bytes) for cache sizes of 8 KB, 16 KB, 64 KB, and 256 KB.]

Miss rate goes up if the block size becomes a significant fraction of the cache size, because the number of blocks that can be held in the same-size cache is smaller (increasing capacity misses).

Block Size Tradeoff

Larger block sizes take advantage of spatial locality, but:

- If the block size is too big relative to the cache size, the miss rate will go up: there are too few cache blocks.
- A larger block size means a larger miss penalty: it takes longer to fill up the block.

[Figure: miss rate first falls with block size (exploiting spatial locality) and then rises (fewer blocks compromises temporal locality); miss penalty grows with block size; their combination gives the average access time a minimum at an intermediate block size.]

In general,

  Average Access Time = Hit Time + Miss Rate x Miss Penalty

so an increased miss penalty and an increased miss rate both hurt.
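A worked instance of the average access time formula, with made-up but representative cycle counts (the slides give no specific numbers):

    #include <stdio.h>

    int main(void) {
        double hit_time     = 1.0;    /* cycles, assumed           */
        double miss_rate    = 0.05;   /* 5% of accesses miss, assumed */
        double miss_penalty = 100.0;  /* cycles, assumed           */

        /* Average Access Time = Hit Time + Miss Rate x Miss Penalty */
        double amat = hit_time + miss_rate * miss_penalty;
        printf("AMAT = %.1f cycles\n", amat);  /* 1 + 0.05*100 = 6.0 */
        return 0;
    }

Doubling the block size might cut the miss rate but raise the miss penalty; the formula makes that tradeoff quantitative.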

Multiword Block Considerations

- Read misses (I$ and D$): processed the same as for single-word blocks; a miss returns the entire block from memory. The miss penalty grows as the block size grows: latency to the first word in the block + transfer time for the remaining words (a short arithmetic sketch at the end of these notes illustrates the effect). Mitigations:
  - Early restart: resume execution as soon as the requested word of the block is returned.
  - Requested word first: the requested word is transferred from memory to the cache first.
  - A nonblocking cache allows the processor to access the cache while the cache is handling an earlier miss.
- Write misses (D$): can't use write allocate, or we would end up with a garbled block in the cache (e.g., for 4-word blocks: a new tag, one word of data from the new block, and three words of data from the old block), so we must fetch the block from memory first and pay the stall time.

Summary

- The Principle of Locality: a program is likely to access a relatively small portion of the address space at any instant of time.
  - Temporal Locality: locality in time
  - Spatial Locality: locality in space
- Three major categories of cache misses:
  - Compulsory misses: sad facts of life; example: cold start misses.
  - Conflict misses: increase cache size and/or associativity. Nightmare scenario: the ping-pong effect!
  - Capacity misses: increase cache size.
- Cache design space: total size, block size, associativity (replacement policy); write-hit policy (write-through, write-back); write-miss policy (write allocate, write buffers).
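To make the early-restart discussion concrete, a small arithmetic sketch with made-up timing parameters (the slides give none):

    #include <stdio.h>

    #define FIRST_WORD_CYCLES 30  /* latency to first word, assumed  */
    #define PER_WORD_CYCLES    2  /* per additional word, assumed    */
    #define WORDS_PER_BLOCK    4

    int main(void) {
        /* Full block fill: wait for every word before restarting. */
        int full  = FIRST_WORD_CYCLES
                  + PER_WORD_CYCLES * (WORDS_PER_BLOCK - 1);

        /* Requested word first + early restart: the processor can
           resume after the initial latency; the rest of the block
           streams into the cache behind it. */
        int early = FIRST_WORD_CYCLES;

        printf("stall with full fill: %d cycles\n", full);       /* 36 */
        printf("stall with early restart: %d cycles\n", early);  /* 30 */
        return 0;
    }

The gap widens as blocks grow, which is part of why larger blocks carry a larger miss penalty.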