
Lecture 18: Introduction to Memory Hierarchies
CSE 30321

Suggested readings: H&P Chapter 5.1-5.2 (over the next 2 lectures).

Goal: Describe the fundamental components required in a single core of a modern microprocessor, as well as how they interact with each other, with main memory, and with external storage media. (Related course themes: processor components; multicore processors and programming; writing more efficient code; the right HW for the right application; HLL code translation.)

Question (processor comparison): How much of a chip is memory? 10% vs. 25% vs. 50% vs. 75% vs. 85%?

Memory and pipelining
- In our 5-stage pipe, we've constantly been assuming that we can access our operands from memory in 1 clock cycle.
- We'd like this to be the case, and it's possible... but it's also complicated. We'll discuss how this happens over the next several lectures.
- We'll talk about: memory technology, the memory hierarchy, caches, main memory, and virtual memory.

If I say "memory," what do you think of?
- Memory comes in many flavors: SRAM (static random access memory), DRAM (dynamic random access memory), ROM, Flash, etc., and disks, tapes, etc.
- They differ in speed, price, and size: fast is small and/or expensive; large is slow and/or inexpensive.
- The search is on for a "universal memory": one that is both fast and non-volatile. It may be MRAM, PCRAM, etc.
- Let's start with DRAM; it's generally the largest piece of RAM.

Is there a problem with DRAM?
[Figure: the processor-DRAM memory gap (latency). Performance (1/latency) vs. time, 1980-2000, log scale. CPU performance improves ~60%/yr (2x per 1.5 years, Moore's Law); DRAM improves ~9%/yr (2x per 10 years). The processor-memory performance gap grows ~50% per year.] Why is this a problem?

More on DRAM
- A DRAM access is on the order of 100 ns. What if the clock rate of our 5-stage pipeline were 1 GHz? Would the fetch and memory stages stall for ~100 cycles? Actually, yes... if we had only DRAM.
- Caches come to the rescue: as transistors have gotten smaller, it has become possible to put more (faster) memory closer to the processing logic. The idea is to mask the latency of having to go off to main memory.

(Part A)
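Not from the slides, but a quick sanity check of that arithmetic in C: at 1 GHz a cycle is 1 ns, so a 100 ns DRAM access costs on the order of 100 cycles.

    #include <stdio.h>

    int main(void) {
        double dram_latency_ns = 100.0;  /* DRAM access time from the slide */
        double clock_ghz = 1.0;          /* 1 GHz -> 1 ns per clock cycle */
        /* cycles lost per access = latency (ns) / cycle time (ns) */
        double stall_cycles = dram_latency_ns * clock_ghz;
        printf("~%.0f stall cycles per DRAM access\n", stall_cycles);
        return 0;
    }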

Caches and the principle of locality
- The principle of locality says that most programs don't access all code or data uniformly; e.g., in a loop, a small subset of instructions might be executed over and over again, and a block of memory addresses might be accessed sequentially. (A short C sketch at the end of this slide group illustrates this.)
- This observation has led to memory hierarchies.
- Some important things to note:
  - Fast memory is expensive per bit, so each level of memory is usually smaller and faster than the level below it.
  - Levels of memory usually subset one another: all the data in a higher level is also in some level below it.

A common memory hierarchy (upper levels are faster; lower levels are larger):
- CPU registers: 100s of bytes, <10s of ns
- Cache: KBytes, 10-100 ns, 1-0.1 cents/bit
- Main memory: MBytes, 200-500 ns, 0.0001-0.00001 cents/bit
- Disk: GBytes, ~10 ms (10,000,000 ns), 10^-5 to 10^-6 cents/bit
- Tape: "infinite," seconds to minutes, 10^-8 cents/bit

Terminology summary (remember: caches are generally the memory between registers and DRAM; we'll talk about caches more on the chalkboard):
- Hit: the data appears in a block in the upper level (i.e., block X in the cache).
  - Hit rate: the fraction of memory accesses found in the upper level.
  - Hit time: the time to access the upper level, which consists of the RAM access time plus the time to determine hit/miss.
- Miss: the data must be retrieved from a block in the lower level (i.e., block Y in memory).
  - Miss rate = 1 - (hit rate).
  - Miss penalty: the extra time to replace a block in the upper level, plus the time to deliver the block to the processor.
- Hit time << miss penalty (a miss can cost ~500 instructions' worth of time on the Alpha 21264).

(Part B)
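As a concrete illustration of locality (not from the slides): the C sketch below sums a 2-D array twice. The row-major loop touches consecutive addresses, so each fetched cache block is fully used; the column-major loop jumps a whole row between accesses, so spatial locality is poor. The array size is chosen arbitrarily.

    #include <stdio.h>

    #define N 1024
    static double a[N][N];

    /* Good spatial locality: consecutive accesses hit consecutive addresses. */
    static double sum_row_major(void) {
        double s = 0.0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += a[i][j];
        return s;
    }

    /* Poor spatial locality: consecutive accesses are N*sizeof(double) apart. */
    static double sum_col_major(void) {
        double s = 0.0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += a[i][j];
        return s;
    }

    int main(void) {
        /* Same result either way; only the memory access pattern differs. */
        printf("%f %f\n", sum_row_major(), sum_col_major());
        return 0;
    }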

Average memory access time
- AMAT = Hit time + (1 - h) x Miss penalty
- Hit time: the basic time of every access.
- Hit rate (h): the fraction of accesses that hit.
- Miss penalty: the extra time to fetch a block from the lower level, including the time to install it in the upper level.

A brief description of a cache:
- The cache is the next level of memory up from the register file; all values in the register file should also be in the cache.
- Cache entries are usually referred to as blocks; a block is the minimum amount of information that can be in the cache.
- If we look for an item in the cache and find it, we have a cache hit; if not, a cache miss.
- The cache miss rate is the fraction of accesses not found in the cache; the miss penalty is the number of extra clock cycles required because of the miss.

(Part C)

Cache basics
- A cache is a fast (but small) memory close to the processor.
- When data is referenced: if it's in the cache, use the cache instead of memory; if not, bring it into the cache (actually, bring in the entire block containing it), possibly kicking something else out to make room.
- Important decisions:
  - Placement: where in the cache can a block go?
  - Identification: how do we find a block in the cache?
  - Replacement: what do we kick out to make room?
  - Write policy: what do we do about writes?
- A cache consists of block-sized lines; the line size is typically a power of two, typically 16 to 128 bytes.
- Example: suppose the block size is 128 bytes. The lowest seven bits of an address then determine the offset within the block, so a read of address A = 0x7fffa3f4 falls in the block with base address 0x7fffa380.
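A minimal C sketch of the two calculations above. The block-offset part reproduces the 128-byte example; the AMAT part uses a made-up hit rate and miss penalty purely for illustration.

    #include <stdio.h>
    #include <stdint.h>

    #define BLOCK_SIZE 128u                /* bytes; a power of two */
    #define OFFSET_MASK (BLOCK_SIZE - 1u)  /* low 7 bits = offset in block */

    int main(void) {
        uint32_t a = 0x7fffa3f4;
        printf("base = 0x%x, offset = 0x%x\n",
               a & ~OFFSET_MASK,            /* 0x7fffa380 */
               a &  OFFSET_MASK);           /* 0x74 */

        /* AMAT = hit time + (1 - h) x miss penalty; numbers assumed:
           1-cycle hit, h = 0.95, 100-cycle penalty -> 6 cycles average. */
        double hit_time = 1.0, h = 0.95, miss_penalty = 100.0;
        printf("AMAT = %.1f cycles\n", hit_time + (1.0 - h) * miss_penalty);
        return 0;
    }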

Some initial questions to consider
- Where can a block be placed in an upper level of the memory hierarchy (i.e., a cache)?
- How is a block found in an upper level of the memory hierarchy?
- Which cache block should be replaced on a cache miss if the entire cache is full and we want to bring in new data?
- What happens if you want to write back to a memory location? Do you just write to the cache, or do you write somewhere else?

Where can a block be placed in a cache? There are 3 schemes for block placement (see the sketch after this slide group):
- Direct mapped cache: a block can go to only 1 place in the cache, usually (block address) MOD (# of blocks in the cache).
- Fully associative cache: a block can be placed anywhere in the cache.
- Set associative cache: a set is a group of blocks in the cache. A block is first mapped onto a set, usually (block address) MOD (# of sets in the cache), and can then be placed anywhere within that set. With n blocks per set, we call the cache n-way set associative.

[Figure: placing memory block 12 in an 8-block cache. Fully associative: block 12 can go anywhere. Direct mapped: block 12 can go only into cache block 4 (12 mod 8). 2-way set associative (sets 0-3): block 12 can go anywhere in set 0 (12 mod 4).]

(Part D)

Associativity
- With associativity > 1, you have to have a replacement policy: FIFO, LRU, or random.
- "Full" (or full-map) associativity means you check every tag in parallel, and a memory block can go into any cache block.
- Virtual memory is effectively fully associative (but don't worry about virtual memory yet).
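A minimal C sketch of the MOD placement rules, using the block-12 example from the figure:

    #include <stdio.h>

    int main(void) {
        unsigned block = 12;
        unsigned cache_blocks = 8;    /* 8-block cache */
        unsigned sets = 4;            /* 2-way: 8 blocks / 2 per set = 4 sets */

        printf("direct mapped: cache block %u\n", block % cache_blocks); /* 4 */
        printf("2-way set associative: set %u\n", block % sets);         /* 0 */
        /* fully associative: block 12 may go into any of the 8 blocks */
        return 0;
    }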

How is a block found in the cache?
- Caches have an address tag on each block frame that gives the block address.
- The tag of every cache block that might hold the entry is examined against the CPU address (in parallel... why?).
- Each entry usually has a valid bit that tells us whether the cached data is useful or garbage; if the bit is not set, there can't be a match.
- How does the address provided by the CPU relate to an entry in the cache? The address is divided between a block address and a block offset, and the block address is further divided between a tag field and an index field. (The lookup sketch after this slide group puts these pieces together.)

Address breakdown: [ block address: tag | index ] [ block offset ]
- The block offset field selects the desired data within the block.
- The index field selects a specific set.
- The tag field is compared against the stored tags for a hit.
- Could we compare more of the address than the tag? It's not necessary: checking the index would be redundant, since the index was already used to select the set being checked (e.g., an address stored in set 0 must have 0 in its index field), and the offset is unnecessary in the comparison, since either the entire block is present or it isn't, and all block offsets within it would match.

(Part E-F)

Which block should be replaced on a cache miss?
- If we look something up in the cache and the entry isn't there, we generally want to fetch the data from memory and put it in the cache, because the principle of locality says we'll probably use it again.
- Direct mapped caches have 1 choice of which block to replace; fully associative and set associative caches offer more choices.
- There are usually 2 strategies:
  - Random: pick any possible block and replace it.
  - LRU (Least Recently Used): why not throw out the block that has gone unused for the longest time? LRU is usually approximated, and isn't much better than random, e.g., a 5.18% vs. 5.69% miss rate for a 16 KB 2-way set associative cache.

What happens on a write?
- FYI, most accesses to a cache are reads: instruction fetches are reads, and most instructions don't write to memory. For MIPS, only about 7% of memory traffic consists of writes, which translates to about 25% of cache data traffic.
- Make the common case fast: optimize the cache for reads! This is actually pretty easy to do: the block can be read while the tag is read and compared, so the block read begins as soon as the address is available, and on a hit the data is passed right on to the CPU.
- Writes take longer. Any idea why?
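Putting the valid bit and the tag/index/offset split together: a hypothetical direct-mapped lookup in C. The geometry (64 lines of 128 bytes, 32-bit addresses) is assumed for illustration, not taken from the slides.

    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    #define BLOCK_BITS 7u   /* 128-byte blocks -> 7 offset bits */
    #define INDEX_BITS 6u   /* 64 lines -> 6 index bits */

    struct line {
        bool     valid;                  /* is this line's content meaningful? */
        uint32_t tag;                    /* upper address bits of cached block */
        uint8_t  data[1u << BLOCK_BITS];
    };

    static struct line cache[1u << INDEX_BITS];

    /* Returns true on a hit and copies the byte out; on a miss the caller
       would fetch the whole block from the next level and fill the line. */
    static bool lookup(uint32_t addr, uint8_t *out) {
        uint32_t offset = addr & ((1u << BLOCK_BITS) - 1u);
        uint32_t index  = (addr >> BLOCK_BITS) & ((1u << INDEX_BITS) - 1u);
        uint32_t tag    = addr >> (BLOCK_BITS + INDEX_BITS);

        struct line *l = &cache[index];       /* index selects the line */
        if (l->valid && l->tag == tag) {      /* valid bit, then tag compare */
            *out = l->data[offset];           /* offset selects data in block */
            return true;
        }
        return false;
    }

    int main(void) {
        uint8_t b;
        printf("hit? %d\n", lookup(0x7fffa3f4, &b));  /* 0: cache starts cold */
        return 0;
    }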

What happens on a write?
- Generically, there are 2 kinds of write policies:
  - Write through: information is written both to the block in the cache and to the block in lower-level memory.
  - Write back: information is written only to the cache; it is written back to lower-level memory when the cache block is replaced.
- The dirty bit: each cache entry usually has a bit that records whether the block has been written. It helps reduce the frequency of writes to lower-level memory upon block replacement.

Write back versus write through:
- Write back is advantageous because writes occur at the speed of the cache and don't incur the delay of lower-level memory, and multiple writes to a cache block result in only 1 lower-level memory access.
- Write through is advantageous because the lower levels of memory always have the most recent copy of the data.
- If the CPU has to wait for a write, we have a write stall. One way around this is a write buffer: ideally the CPU shouldn't have to stall during a write; instead, the data is written to the buffer, which forwards it to the lower levels of the memory hierarchy.

What if we want to write, and the block we want to write to isn't in the cache? There are 2 common policies:
- Write allocate: the block is loaded into the cache on a write miss, the idea being that subsequent writes will be captured by the cache (ideal for a write-back cache).
- No-write allocate: the block is modified in lower-level memory and not loaded into the cache. This is usually used with write-through caches (subsequent writes still have to go to memory).

Memory access equations: using what we defined on the previous slides, we can say:
- Memory stall clock cycles = Reads x Read miss rate x Read miss penalty + Writes x Write miss rate x Write miss penalty
- Often, reads and writes are combined/averaged: Memory stall cycles = Memory accesses x Miss rate x Miss penalty (an approximation).
- It's also possible to factor in the instruction count to get a complete formula.
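A minimal C sketch of the stall-cycle equations. The miss rates and penalty are assumed; the read/write mix echoes the ~7%-writes MIPS figure from the earlier slide.

    #include <stdio.h>

    int main(void) {
        double reads = 0.93, writes = 0.07;      /* fraction of memory accesses */
        double read_mr = 0.04, write_mr = 0.06;  /* miss rates (assumed) */
        double penalty = 100.0;                  /* miss penalty in cycles */

        double exact = reads * read_mr * penalty
                     + writes * write_mr * penalty;
        /* The averaged form folds both miss rates into one overall rate. */
        double avg_mr = reads * read_mr + writes * write_mr;
        double approx = 1.0 * avg_mr * penalty;  /* accesses x miss rate x penalty */

        printf("stall cycles per access: %.2f (exact), %.2f (averaged)\n",
               exact, approx);
        return 0;
    }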

Reducing cache misses
- Obviously, we want data accesses to result in cache hits, not misses; this will optimize performance.
- Start by looking at ways to increase the percentage of hits... but first, look at the 3 kinds of misses:
  - Compulsory misses: the very first access to a cache block cannot be a hit; the data isn't there yet!
  - Capacity misses: the cache is only so big, so it won't be able to hold every block a program accesses; some must be swapped out.
  - Conflict misses: these arise in direct mapped and set associative caches, where blocks are discarded (and later re-fetched) when too many map to the same location.
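To see a conflict miss concretely, here is a toy direct-mapped simulation (same assumed geometry as the lookup sketch above). Two addresses exactly one cache-size apart share an index but have different tags, so they evict each other on every access.

    #include <stdio.h>
    #include <stdint.h>

    #define BLOCK_BITS 7u   /* 128-byte blocks */
    #define INDEX_BITS 6u   /* 64 lines -> an 8 KB direct-mapped cache */

    static uint32_t tags[1u << INDEX_BITS];
    static int      valid[1u << INDEX_BITS];

    /* Returns 1 on a miss (filling the line), 0 on a hit. */
    static int cache_access(uint32_t addr) {
        uint32_t index = (addr >> BLOCK_BITS) & ((1u << INDEX_BITS) - 1u);
        uint32_t tag   = addr >> (BLOCK_BITS + INDEX_BITS);
        if (valid[index] && tags[index] == tag)
            return 0;
        valid[index] = 1;      /* direct mapped: only one place to put it */
        tags[index]  = tag;
        return 1;
    }

    int main(void) {
        uint32_t A = 0x10000;
        uint32_t B = A + (1u << (BLOCK_BITS + INDEX_BITS)); /* same index, new tag */
        int misses = 0;
        for (int i = 0; i < 10; i++) {
            misses += cache_access(A);   /* each access evicts the other block */
            misses += cache_access(B);
        }
        printf("%d misses out of 20 accesses\n", misses);   /* all 20 miss */
        return 0;
    }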