Lecture 17: Introduction to Memory Hierarchies
CSE 30321
Suggested reading: HP Chapter 5.1-5.2

Course roadmap: processor components; multicore processors and programming; processor comparison; writing more efficient code; the right HW for the right application; HLL code translation.

Fundamental lesson(s)
- In a pipelined datapath, we assume that a memory reference takes 1 clock cycle (CC). But an off-chip memory access requires 100s of CCs.
- How can a memory reference truly be accomplished in 1 CC?

Why it's important
- When you load data from disk or store data in memory, where does it go? How does the processor find it?
- Memory hierarchies prevent programs from having absolutely abysmal performance.
- In Lab 4, you'll see how a detailed understanding of the memory organization can help you write code that is at least 2x more efficient.

Question?" How much of a chip is memory?" 10%" 25%" 50%" 75%" 85%" Memory and Pipelining" In our 5 stage pipe, we ve constantly been assuming that we can access our operand from memory in 1 clock cycle " We d like this to be the case, and its possible...but its also complicated" We ll discuss how this happens in the next several lectures" We ll talk about " Memory Technology" Memory Hierarchy" Caches" Memory" Virtual Memory" 5" 6" If I say Memory what do you think of?" Is there a problem with DRAM?" Memory Comes in Many Flavors" SRAM (Static Random Access Memory)" DRAM (Dynamic Random Access Memory)" ROM, Flash, etc." Disks, Tapes, etc." Difference in speed, price and size " Fast is small and/or expensive" Large is slow and/or inexpensive" The search is on for a universal memory " What s a universal memory " Fast and non-volatile." May be MRAM, PCRAM, etc. etc." Let s start with DRAM." Its generally the largest piece of RAM." Performance" Processor-DRAM Memory Gap (latency)" 1000" 100" 10" 1" Why is this a problem?" 1980" 1981" 1982" 1983" 1984" 1985" 1986" 1987" 1988" Moore s Law " 1989" 1990" 1991" 1992" 1993" 1994" 1995" 1996" 1997" 1998" 1999" 2000" Time" µproc" 60%/yr." (2X/1.5yr)" DRAM" 9%/yr." (2X/10yrs)" 7" 8" CPU" Processor-Memory" Performance Gap: grows 50% / year" DRAM"

More on DRAM" DRAM access on order of 100 ns..." What if clock rate for our 5 stage pipeline was 1 GHZ?" Would the memory and fetch stages stall for 100 cycles?" Actually, yes... if only DRAM" Caches come to the rescue" As transistors have gotten smaller, its allowed us to put more (faster) memory closer to the processing logic" The idea is to mask the latency of having to go off to main memory" Caches and the principle of locality " The principle of locality " says that most programs don t access all code or data uniformly" i.e. in a loop, small subset of instructions might be executed over and over again " " and a block of memory addresses might be accessed sequentially " This has led to memory hierarchies " Some important things to note:" Fast memory is expensive in cost/size" Levels of memory usually smaller/faster than previous" Levels of memory usually subset one another" All the stuff in a higher level is in some level below it" Part A" 9" 10" Common memory hierarchy:" Terminology Summary" CPU Registers" 100s Bytes" <10s ns" Registers" Upper Level" faster" Hit: data appears in block in upper level (i.e. block X in cache) " Hit Rate: fraction of memory access found in upper level" Hit Time: time to access upper level which consists of" Cache" K Bytes" 10-100 ns" 1-0.1 cents/bit" Cache" RAM access time + Time to determine hit/miss" Miss: data needs to be retrieved from a block in the lower level " Main Memory" M Bytes" 200ns- 500ns" $.0001-.00001 cents /bit" Memory" (i.e. block Y in memory)" Miss Rate = 1 - (Hit Rate)" Miss Penalty: Extra time to replace a block in the upper level + " Disk" G Bytes, 10 ms (10,000,000 ns)" 10-5 - 10-6 cents/bit" Disk" Time to deliver the block the processor" Hit Time << Miss Penalty (500 instructions on some microprocessors)" Tape" infinite" sec-min" 10" -8" Tape" Larger" Lower Level" To Processor" From Processor" Upper Level" Memory" Blk X" Lower Level" Memory" Blk Y" 11" 12"

Average Memory Access Time
- AMAT = (Hit time) + (1 - h) x (Miss penalty)
- Hit time: the basic time of every access.
- Hit rate (h): the fraction of accesses that hit.
- Miss penalty: the extra time to fetch a block from the lower level, including the time to replace the block in the CPU.

Cache Basics
- Cache entries are usually referred to as "blocks".
- A block is the minimum amount of information that can be in the cache.
- Line size is typically a power of two, usually 16 to 128 bytes.
(Part B, C)

Some initial questions to consider
- Where can a block be placed in an upper level of the memory hierarchy (i.e., a cache)?
- How is a block found in an upper level of the memory hierarchy?
- Which cache block should be replaced on a cache miss if the entire cache is full and we want to bring in new data?
- What happens if you want to write to a memory location? Do you just write to the cache? Do you write somewhere else?

Where can a block be placed in a cache?
There are 3 schemes for block placement in a cache:
- Direct mapped cache: a block (the data to be stored) can go to only 1 place in the cache. Usually: (block address) MOD (# of blocks in the cache).
- Fully associative cache: a block can be placed anywhere in the cache.
- Set associative cache: a set is a group of blocks in the cache. A block is mapped onto a set, and then the block can be placed anywhere within that set. Usually: (block address) MOD (# of sets in the cache). If each set holds n blocks, we call it n-way set associative.
(Part D)
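As a quick worked example of the AMAT formula (the numbers are illustrative, not from the slides): with a hit time of 1 CC, a hit rate of h = 0.95, and a miss penalty of 100 CCs, AMAT = 1 + (1 - 0.95) x 100 = 6 CCs. Even a 5% miss rate makes the average access 6x slower than a hit, which is why miss rates matter so much.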

Where can a block be placed in a cache?
[Figure: an 8-block cache (blocks 0-7) and a memory with blocks 0, 1, 2, ... Where can memory block 12 go?]
- Fully associative: block 12 can go anywhere.
- Direct mapped: block 12 can go only into cache block 4 (12 mod 8).
- Set associative (4 sets, 0-3): block 12 can go anywhere in set 0 (12 mod 4).

Associativity
- If you have associativity > 1, you have to have a replacement policy: FIFO, LRU, random, ...
- "Full" (or "full-map") associativity means you check every tag in parallel, and a memory block can go into any cache block.
- Virtual memory is effectively fully associative. (But don't worry about virtual memory yet.)

How is a block found in the cache?
- Caches have an address tag on each block frame that gives the block address.
- The tag of every cache block that might hold the entry is examined against the CPU address (in parallel! why?).
- Each entry usually has a valid bit that tells us whether the cache data is useful or garbage. If the bit is not set, there can't be a match.
- How does the address provided by the CPU relate to an entry in the cache? The address is divided into the block address and the block offset, and the block address is further divided into a tag field and an index field:

  | Tag | Index | Block offset |

- The block offset field selects the desired data within the block.
- The index field selects a specific set; the tag field is compared against the stored tags for a hit.
- Could we compare more of the address than the tag? It's not necessary:
  - Checking the index would be redundant; it was already used to select the set to be checked (e.g., an address stored in set 0 must have 0 in its index field).
  - The offset is not necessary in the comparison: the entire block is either present or not, so all block offsets would match.
(Part E, F)
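Here is a minimal C sketch of the address split just described. The block size and set count are illustrative (16-byte blocks, 4 sets); because both are powers of two, the MOD and divide operations reduce to simple bit selection in hardware.

  #include <stdio.h>

  #define BLOCK_BYTES 16u   /* line size: a power of two */
  #define NUM_SETS     4u   /* number of sets: a power of two */

  int main(void) {
      unsigned addr   = 0xC4u;              /* example CPU address (decimal 196) */
      unsigned offset = addr % BLOCK_BYTES; /* byte within the block: 196 mod 16 = 4 */
      unsigned block  = addr / BLOCK_BYTES; /* block address: 196 / 16 = 12 */
      unsigned index  = block % NUM_SETS;   /* set to check: 12 mod 4 = 0 */
      unsigned tag    = block / NUM_SETS;   /* compared against stored tags: 3 */

      printf("tag=%u index=%u offset=%u\n", tag, index, offset);

      /* Direct mapped is the special case where NUM_SETS equals the number
         of blocks (block 12 -> 12 mod 8 = 4 in an 8-block cache); fully
         associative is the special case NUM_SETS == 1. */
      return 0;
  }

Note that block address 12 here lands in set 0, matching the placement figure above.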

Which block is replaced on a cache miss?
- If we look something up in the cache and the entry is not there, we generally want to get the data from memory and put it in the cache, because the principle of locality says we'll probably use it again.
- Direct mapped caches have only 1 choice of which block to replace; fully associative or set associative caches offer more choices.
- Usually one of 2 strategies (a code sketch of the second follows at the end of this section):
  - Random: pick any possible block and replace it.
  - LRU, which stands for "Least Recently Used": why not throw out the block that hasn't been used for the longest time? Usually approximated, and not much better than random (e.g., a 5.18% vs. 5.69% miss rate for a 16 KB 2-way set associative cache).

What happens on a write?
- FYI: most accesses to a cache are reads:
  - All instruction fetches are reads.
  - Most instructions don't write to memory. For MIPS, only about 7% of memory traffic consists of writes, which translates to about 25% of cache data traffic.
- Make the common case fast! Optimize the cache for reads!
- This is actually pretty easy to do: we can read the block while comparing/reading the tag. The block read begins as soon as the address is available; on a hit, the data is passed right on to the CPU.

What happens on a write? (continued)
- Generically, there are 2 kinds of write policies:
  - Write through: information is written to the block in the cache and to the block in lower-level memory.
  - Write back: information is written only to the cache; it is written back to lower-level memory when the cache block is replaced.
- The dirty bit: each cache entry usually has a bit that records whether a write has occurred in that block or not. It helps reduce the frequency of writes to lower-level memory upon block replacement.

Write back versus write through
- Write back is advantageous because:
  - Writes occur at the speed of the cache and don't incur the delay of lower-level memory.
  - Multiple writes to a cache block result in only 1 lower-level memory access.
- Write through is advantageous because:
  - Lower levels of memory always have the most recent copy of the data.
- If the CPU has to wait for a write, we have a write stall. One way around this is a write buffer: ideally, the CPU shouldn't have to stall during a write; instead, the data is written to the buffer, which sends it on to the lower levels of the memory hierarchy.
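As promised above, here is a minimal C sketch of LRU victim selection within one set of an n-way set associative cache. The struct layout and the timestamp bookkeeping are illustrative only; real hardware typically uses cheaper approximations, which is one reason measured LRU is only slightly better than random.

  /* One way (block frame) within a set. */
  typedef struct {
      unsigned tag;        /* tag of the cached block */
      int      valid;      /* is the data useful, or garbage? */
      unsigned last_used;  /* pseudo-timestamp of the most recent access */
  } Way;

  /* Return the index of the way to evict: prefer an invalid way,
     otherwise take the least recently used valid way. */
  int lru_victim(const Way set[], int num_ways) {
      int victim = 0;
      for (int w = 0; w < num_ways; w++) {
          if (!set[w].valid)
              return w;                  /* free frame: no eviction needed */
          if (set[w].last_used < set[victim].last_used)
              victim = w;                /* older access time: better victim */
      }
      return victim;
  }

A direct mapped cache never calls anything like this: with one candidate frame per block, there is no choice to make.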

What happens on a write?" What if we want to write and block we want to write to isn t in cache?" There are 2 common policies:" Write allocate:" The block is loaded on a write miss" The idea behind this is that subsequent writes will be captured by the cache (ideal for a write back cache)" No-write allocate:" Block modified in lower-level and not loaded into cache" Usually used for write-through caches " (subsequent writes still have to go to memory)" Memory access equations" Using what we defined on previous slide, we can say:" Memory stall clock cycles = " Reads x Read miss rate x Read miss penalty + " " Writes x Write miss rate x Write miss penalty" Often, reads and writes are combined/averaged:" Memory stall cycles = " Memory access x Miss rate x Miss penalty (approximation)" Also possible to factor in instruction count to get a complete formula:" 25" 26" Reducing cache misses" Obviously, we want data accesses to result in cache hits, not misses this will optimize performance" Start by looking at ways to increase % of hits." but first look at 3 kinds of misses!" Compulsory misses:" Very 1st access to cache block will not be a hit the data s not there yet!" Capacity misses:" Cache is only so big. Won t be able to store every block accessed in a program must swap out!" Conflict misses:" Result from set-associative or direct mapped caches" Blocks discarded/retrieved if too many map to a location" 27"