COSC 6385 Computer Architecture - Memory Hierarchies (I)

Edgar Gabriel, Spring 2018

Some slides are based on a lecture by David Culler, University of California, Berkeley: http://www.eecs.berkeley.edu/~culler/courses/cs252-s05

Levels of the Memory Hierarchy (capacity, access time, cost, staging/transfer unit, and who manages it):

- Registers: 100s of bytes, <10s ns; instruction operands (1-8 bytes), managed by the program/compiler
- Cache: KBytes, 10-100 ns, 1-0.1 cents/bit; blocks (8-128 bytes), managed by the cache controller
- Main memory: MBytes, 200-500 ns, .0001-.00001 cents/bit; pages (512 bytes - 4 KB), managed by the OS
- Disk: GBytes, 10 ms (10,000,000 ns), 10^-5 - 10^-6 cents/bit; files (MBytes), managed by the user/operator
- Tape: effectively infinite capacity, access time of seconds to minutes, 10^-8 cents/bit

Upper levels are faster; lower levels are larger.

The Principle of Locality

- Programs access a relatively small portion of the address space at any instant of time.
- Two different types of locality:
  - Temporal locality (locality in time): if an item is referenced, it will tend to be referenced again soon (e.g., loops, reuse)
  - Spatial locality (locality in space): if an item is referenced, items whose addresses are close by tend to be referenced soon (e.g., straight-line code, array accesses)
- For the last 15 years, hardware has relied on locality for speed: locality is a property of programs that is exploited in machine design (see the sketch after this slide).

Memory Hierarchy Terminology

- Hit: the data appears in some block in the upper level (example: block X)
  - Hit rate: the fraction of memory accesses found in the upper level
  - Hit time: time to access the upper level, which consists of the RAM access time plus the time to determine hit/miss
- Miss: the data needs to be retrieved from a block in the lower level (block Y)
  - Miss rate = 1 - (hit rate)
  - Miss penalty: time to replace a block in the upper level plus the time to deliver the block to the processor
- Hit time << miss penalty (500 instructions on the 21264!)

(The slide's figure shows blocks X and Y moving between the processor, the upper-level memory, and the lower-level memory.)
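
A minimal C sketch (not from the slides) showing both kinds of locality in a single loop nest:

    #include <stdio.h>

    #define N 1024

    int main(void) {
        static int a[N][N];   /* zero-initialized 4 MB array */
        long sum = 0;

        /* Row-major traversal: consecutive values of j touch consecutive
         * addresses (spatial locality); sum is reused on every iteration
         * (temporal locality). */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                sum += a[i][j];

        /* Swapping the loops to access a[j][i] would stride N*sizeof(int)
         * bytes between references and defeat spatial locality. */
        printf("%ld\n", sum);
        return 0;
    }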

Cache Measures

- Hit rate: fraction of accesses found in that level; usually so high that we talk about the miss rate instead
- Average memory access time = Hit time + Miss rate x Miss penalty (ns or clocks)
- Miss penalty: time to replace a block from the lower level, including the time to deliver it to the CPU
  - access time: time to get to the lower level = f(latency to lower level)
  - transfer time: time to transfer the block = f(bandwidth between upper and lower levels)

Simplest Cache: Direct Mapped

- Example: 4-byte direct mapped cache (cache indices 0-3) in front of a 16-byte memory (addresses 0x0-0xF)
- Cache location 0 can be occupied by data from memory locations 0, 4, 8, ... - in general, any memory location whose 2 LSBs of the address are 0s
- Address<1:0> => cache index
- Which one should we place in the cache? How can we tell which one is in the cache?

1 KB Direct Mapped Cache, 32 B Blocks

- For a 2^n byte cache:
  - the uppermost (32 - n) bits are always the cache tag
  - the lowest m bits are the byte select (block size = 2^m)
- Address breakdown for this example (decomposed in the sketch after this slide):
  - bits 31-10: cache tag (e.g., 0x50), stored as part of the cache state
  - bits 9-5: cache index (e.g., 0x01)
  - bits 4-0: byte select (e.g., 0x00)
- (The slide's figure shows the valid bits, tags, and the data array: 32 entries of 32 bytes each, holding bytes 0-1023.)

Two-way Set Associative Cache

- N-way set associative: N entries for each cache index - N direct mapped caches operating in parallel
- Example: two-way set associative cache
  - the cache index selects a set from the cache
  - the two tags in the set are compared in parallel
  - data is selected based on the tag comparison result
- (Figure: two cache blocks per set, two tag comparators feeding an OR gate for Hit, and a mux selecting the data of the hitting way.)
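
A C sketch of the address decomposition for the 1 KB / 32-byte-block example; the helper names are illustrative, not from the slides:

    #include <stdint.h>
    #include <stdio.h>

    #define BYTE_SELECT_BITS 5          /* block size 32 = 2^5 bytes     */
    #define INDEX_BITS       5          /* 1 KB / 32 B = 32 = 2^5 blocks */

    static uint32_t byte_select(uint32_t a) { return a & 0x1F; }
    static uint32_t cache_index(uint32_t a) { return (a >> BYTE_SELECT_BITS) & 0x1F; }
    static uint32_t cache_tag(uint32_t a)   { return a >> (BYTE_SELECT_BITS + INDEX_BITS); }

    int main(void) {
        uint32_t addr = 0x00014020; /* decomposes into the slide's example values */
        printf("tag=0x%x index=0x%x byte_select=0x%x\n",
               cache_tag(addr), cache_index(addr), byte_select(addr));
        /* prints: tag=0x50 index=0x1 byte_select=0x0 */
        return 0;
    }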

Disadvantage of Set Associative Cache

- N-way set associative cache vs. direct mapped cache:
  - N comparators vs. 1
  - extra MUX delay for the data
  - data comes AFTER the hit/miss decision
- (Figure: the same two-way lookup datapath as on the previous slide.)

4 Questions for Memory Hierarchy

- Q1: Where can a block be placed in the upper level? (block placement)
- Q2: How is a block found if it is in the upper level? (block identification)
- Q3: Which block should be replaced on a miss? (block replacement)
- Q4: What happens on a write? (write strategy)

Q1: Where can a block be placed in the upper level?

- Example: block 12 placed in an 8-block cache that is fully associative, direct mapped, or 2-way set associative (sketched after this slide)
- Set associative mapping: set = block number modulo number of sets
  - direct mapped: block 12 can go only into frame (12 mod 8) = 4
  - 2-way set associative: block 12 can go into either way of set (12 mod 4) = 0
  - fully associative: block 12 can go into any frame
- (Figure: the 8-block cache under the three organizations, above a memory of blocks 0-31.)

Q2: How is a block found if it is in the upper level?

- Tag on each block; no need to check the index or block offset
- Increasing associativity shrinks the index and expands the tag
- Address layout: Block Address (Tag | Index), followed by the Block Offset
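
A tiny sketch of the placement rule for the slide's example (block 12, 8-block cache):

    #include <stdio.h>

    int main(void) {
        int block = 12, frames = 8;

        /* set = block number mod number of sets */
        printf("direct mapped:    frame %d\n", block % frames);        /* 4 */
        printf("2-way set assoc.: set %d\n",   block % (frames / 2));  /* 0 */
        printf("fully assoc.:     set %d (any frame)\n", block % 1);   /* 0 */
        return 0;
    }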

Q3: Which block should be replaced on a miss?

- Easy for direct mapped caches (there is no choice)
- Set associative or fully associative: random or LRU (least recently used); a replacement sketch follows after this slide
- Data cache miss rates, LRU vs. random:

  Size      2-way LRU   2-way Ran   4-way LRU   4-way Ran   8-way LRU   8-way Ran
  16 KB     5.2%        5.7%        4.7%        5.3%        4.4%        5.0%
  64 KB     1.9%        2.0%        1.5%        1.7%        1.4%        1.5%
  256 KB    1.15%       1.17%       1.13%       1.13%       1.12%       1.12%

Q4: What happens on a write?

- Write through: the information is written to both the block in the cache and the block in the lower-level memory.
- Write back: the information is written only to the block in the cache; the modified cache block is written to main memory only when it is replaced. (Is the block clean or dirty?)
- Pros and cons of each:
  - WT: read misses cannot result in writes
  - WB: no repeated writes to the same location
- WT is always combined with write buffers so that the processor doesn't wait for the lower-level memory
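
A minimal LRU sketch (an illustration under stated assumptions, not from the slides): each way of a set carries a timestamp from a global access counter, and the victim is the way touched longest ago.

    #include <stdint.h>

    #define WAYS 4

    typedef struct {
        int      valid[WAYS];
        uint32_t tag[WAYS];
        uint64_t last_used[WAYS];   /* stamped from a global access counter */
    } cache_set_t;

    /* Pick the replacement victim: an empty way if one exists, otherwise
     * the way with the oldest (smallest) timestamp. Random replacement
     * would simply return rand() % WAYS instead. */
    static int lru_victim(const cache_set_t *set) {
        for (int w = 0; w < WAYS; w++)
            if (!set->valid[w])
                return w;
        int victim = 0;
        for (int w = 1; w < WAYS; w++)
            if (set->last_used[w] < set->last_used[victim])
                victim = w;
        return victim;
    }

    int main(void) {
        cache_set_t s = { .valid = {1, 1, 1, 1}, .last_used = {7, 3, 9, 5} };
        return lru_victim(&s);      /* way 1 was used least recently */
    }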

Write Buffer for Write Through

- A write buffer is needed between the cache and memory: Processor -> Cache -> Write Buffer -> DRAM
  - the processor writes data into the cache and the write buffer
  - the memory controller writes the contents of the buffer to memory
- The write buffer is just a FIFO; typical number of entries: 4
- Works fine if: store frequency (w.r.t. time) << 1 / DRAM write cycle
- Memory system designer's nightmare: store frequency (w.r.t. time) -> 1 / DRAM write cycle, i.e., write buffer saturation

Cache Performance

- Avg. memory access time = Hit time + Miss rate x Miss penalty, with
  - Hit time: time to access a data item that is available in the cache
  - Miss rate: ratio of the number of memory accesses leading to a cache miss to the total number of memory accesses
  - Miss penalty: time/cycles required for making a data item available in the cache
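
A direct translation of the formula above into C; the example numbers are assumptions for illustration:

    #include <stdio.h>

    /* hit_time and miss_penalty in cycles (or ns), miss_rate as a fraction */
    static double amat(double hit_time, double miss_rate, double miss_penalty) {
        return hit_time + miss_rate * miss_penalty;
    }

    int main(void) {
        /* e.g., 1-cycle hit, 5% miss rate, 100-cycle miss penalty */
        printf("AMAT = %.2f cycles\n", amat(1.0, 0.05, 100.0)); /* 6.00 */
        return 0;
    }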

Split vs. unified cache

- Assume two machines:
  - Machine 1: 16 KB instruction cache + 16 KB data cache
  - Machine 2: 32 KB unified cache
- Assume for both machines:
  - 36% of instructions are memory references/data transfers
  - 74% of memory references are instruction references
  - Misses per 1000 instructions:
      16 KB instruction cache: 3.82
      16 KB data cache: 40.9
      32 KB unified cache: 43.3
  - Hit time: 1 clock cycle for machine 1; 1 additional clock cycle for machine 2 for data accesses (structural hazard)
  - Miss penalty: 100 clock cycles

Split vs. unified cache (II)

Questions:
1. Which architecture has a lower miss rate?
2. What is the average memory access time for both machines?

The miss rate per instruction can be calculated as:

  Miss rate = (Misses per 1000 instructions / 1000) / (Memory accesses per instruction)

Split vs. unified cache (III)

Machine 1: every instruction fetch requires exactly one memory access to load the instruction into the CPU:

  Miss rate (16 KB instruction) = (3.82/1000) / 1.0 = 0.00382 ≈ 0.004

Since 36% of the instructions are data transfers (LOAD or STORE):

  Miss rate (16 KB data) = (40.9/1000) / 0.36 = 0.114

Overall miss rate, since 74% of memory accesses are instruction references:

  Miss rate (split cache) = (0.74 x 0.004) + (0.26 x 0.114) = 0.0324

Split vs. unified cache (IV)

Machine 2: the unified cache needs to account for both the instruction fetch and the data access:

  Miss rate (32 KB unified) = (43.3/1000) / (1 + 0.36) = 0.0318

Answer to question 1: the 2nd architecture (the unified cache) has the lower miss rate.

Split vs. unified cache (V)

Average memory access time (AMAT):

  AMAT = %instructions x (Hit time + Instruction miss rate x Miss penalty)
       + %data x (Hit time + Data miss rate x Miss penalty)

Machine 1 (the result 4.24 uses the unrounded miss rates 0.00382 and 0.1136):

  AMAT_1 = 0.74 x (1 + 0.004 x 100) + 0.26 x (1 + 0.114 x 100) = 4.24

Machine 2:

  AMAT_2 = 0.74 x (1 + 0.0318 x 100) + 0.26 x (1 + 1 + 0.0318 x 100) = 4.44

Answer to question 2: the 1st machine (the split cache) has the lower average memory access time. (The sketch after the next slide reproduces the full calculation.)

Six basic cache optimizations

- Reducing the cache miss rate: larger block size, larger cache size, higher associativity
- Reducing the miss penalty: multilevel caches, giving priority to read misses over writes
- Reducing the cache hit time: avoid address translation when indexing the cache
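
A C sketch reproducing the split vs. unified worked example above; all input numbers are taken from the slides:

    #include <stdio.h>

    int main(void) {
        double penalty = 100.0;                      /* miss penalty, cycles */
        double instr_frac = 0.74, data_frac = 0.26;  /* reference mix        */

        /* miss rate = (misses per 1000 instr. / 1000) / (accesses per instr.) */
        double mr_instr   = (3.82 / 1000.0) / 1.0;   /* 16 KB I-cache */
        double mr_data    = (40.9 / 1000.0) / 0.36;  /* 16 KB D-cache */
        double mr_unified = (43.3 / 1000.0) / 1.36;  /* 32 KB unified */

        double amat_split = instr_frac * (1 + mr_instr * penalty)
                          + data_frac  * (1 + mr_data  * penalty);
        /* unified cache: data accesses pay one extra hit cycle */
        double amat_unified = instr_frac * (1 + mr_unified * penalty)
                            + data_frac  * (1 + 1 + mr_unified * penalty);

        printf("split:   miss rate %.4f, AMAT %.2f\n",
               instr_frac * mr_instr + data_frac * mr_data, amat_split);
        printf("unified: miss rate %.4f, AMAT %.2f\n", mr_unified, amat_unified);
        return 0;   /* prints 0.0324 / 4.24 and 0.0318 / 4.44 */
    }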

Larger block size

- Reduces compulsory misses
- For a constant cache size, larger blocks reduce the number of blocks in the cache, which increases conflict misses

Larger caches

- Reduce capacity misses
- Might increase the hit time (e.g., if implemented as off-chip caches)
- Cost limitations

Higher Associativity

- Reduces the miss rate
- Increases the hit time

Multilevel caches (I)

- Dilemma: should the cache be fast or should it be large?
- Compromise: multi-level caches
  - 1st level: small, but at the speed of the CPU
  - 2nd level: larger, but slower
- Avg. memory access time = Hit time_L1 + Miss rate_L1 x Miss penalty_L1, with
  Miss penalty_L1 = Hit time_L2 + Miss rate_L2 x Miss penalty_L2

Multilevel caches (II)

- Local miss rate: ratio of the number of misses in a cache to the total number of accesses to that cache
- Global miss rate: ratio of the number of misses in a cache to the total number of memory accesses generated by the CPU
  - 1st level cache: global miss rate = local miss rate
  - 2nd level cache: global miss rate = Miss rate_L1 x Miss rate_L2
- Design decisions for the 2nd level cache:
  1. Direct mapped or n-way set associative?
  2. Size of the 2nd level cache?

Multilevel caches (II, cont.)

Assumptions in order to decide question 1:
- Hit time of the L2 cache: direct mapped cache 10 clock cycles; 2-way set associative cache 10.1 clock cycles
- Local miss rate of the L2 cache: direct mapped 25%; 2-way set associative 20%
- Miss penalty of the L2 cache: 200 clock cycles

  Miss penalty (direct mapped L2) = 10 + 0.25 x 200 = 60 clock cycles
  Miss penalty (2-way assoc. L2)  = 10.1 + 0.2 x 200 = 50.1 clock cycles

If the L2 cache is synchronized with the L1 cache, hit time = 11 clock cycles:

  Miss penalty (2-way assoc. L2) = 11 + 0.2 x 200 = 51 clock cycles
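
A C sketch reproducing the L2 design comparison above (effective L1 miss penalty = L2 hit time + L2 local miss rate x L2 miss penalty):

    #include <stdio.h>

    static double l1_miss_penalty(double l2_hit, double l2_local_miss_rate,
                                  double l2_penalty) {
        return l2_hit + l2_local_miss_rate * l2_penalty;
    }

    int main(void) {
        printf("direct mapped L2:     %.1f cycles\n",
               l1_miss_penalty(10.0, 0.25, 200.0));  /* 60.0 */
        printf("2-way assoc. L2:      %.1f cycles\n",
               l1_miss_penalty(10.1, 0.20, 200.0));  /* 50.1 */
        printf("2-way, synced to L1:  %.1f cycles\n",
               l1_miss_penalty(11.0, 0.20, 200.0));  /* 51.0 */
        return 0;
    }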

Multilevel caches (III)

- Multilevel inclusion: the 2nd level cache includes all data items that are in the 1st level cache
  - applied if the size of the 2nd level cache >> the size of the 1st level cache
- Multilevel exclusion: data in the L1 cache is never in the L2 cache
  - applied if the 2nd level cache is only slightly bigger than the 1st level cache
  - a cache miss in L1 often leads to a swap of an L1 block with an L2 block

Giving priority to read misses over writes

- Write-through caches use a write buffer to speed up write operations
- The write buffer might contain a value required by a subsequent load operation
- Two possibilities for ensuring consistency:
  - a read resulting in a cache miss has to wait until the write buffer is empty, or
  - check the contents of the write buffer and take the data item from the write buffer if it is available (sketched below)
- A similar technique is used in case of a cache-line replacement for n-way set associative caches
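
An illustrative sketch of the second option (structure and names are assumptions, not from the slides): on a read miss, the FIFO write buffer is scanned and matching data is forwarded instead of waiting for the buffer to drain.

    #include <stdint.h>

    #define WB_ENTRIES 4    /* typical write buffer depth */

    typedef struct { int valid; uint32_t addr; uint32_t data; } wb_entry_t;
    static wb_entry_t write_buffer[WB_ENTRIES];  /* index 0 = oldest entry */

    /* Returns 1 and fills *data if the missed address is still buffered. */
    int wb_forward(uint32_t addr, uint32_t *data) {
        /* scan newest to oldest so the most recent store wins */
        for (int i = WB_ENTRIES - 1; i >= 0; i--) {
            if (write_buffer[i].valid && write_buffer[i].addr == addr) {
                *data = write_buffer[i].data;
                return 1;
            }
        }
        return 0;   /* not found: fetch from the lower-level memory */
    }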

Avoiding address translation

- Translation of the virtual address to a physical address is required to access memory/caches
- Using virtual addresses in the caches would avoid address translation
- Problems:
  - two processes might use the same virtual address without meaning the same physical address -> add a process identifier tag (PID) to the cache address tag
  - page protection; the cache has to be flushed on every process switch
- Separate indexing the cache from comparing the address: use the virtual address for indexing and the physical address for the tag comparison
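
A sketch of the last point, a virtually indexed, physically tagged lookup; the field widths, the identity-mapped TLB stub, and the names are assumptions for illustration:

    #include <stdint.h>

    #define INDEX_BITS  6
    #define OFFSET_BITS 6   /* index + offset fit within the 12-bit page offset */

    /* Stub standing in for the TLB; a real one would return the translated
     * physical address (identity-mapped here for illustration). */
    static uint64_t tlb_translate(uint64_t vaddr) { return vaddr; }

    typedef struct { int valid; uint64_t ptag; } line_t;
    static line_t cache[1u << INDEX_BITS];

    /* The set index comes from virtual-address bits that lie within the
     * page offset and are therefore unchanged by translation, so set
     * selection can proceed in parallel with the TLB lookup; the hit
     * check still compares the physically derived tag. */
    int vipt_hit(uint64_t vaddr) {
        uint64_t set  = (vaddr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
        uint64_t ptag = tlb_translate(vaddr) >> (INDEX_BITS + OFFSET_BITS);
        return cache[set].valid && cache[set].ptag == ptag;
    }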