The Memory Hierarchy. Cache, Main Memory, and Virtual Memory (Part 2)


The Memory Hierarchy: Cache, Main Memory, and Virtual Memory (Part 2). Lecture for CPSC 5155. Edward Bosworth, Ph.D., Computer Science Department, Columbus State University.

Cache Line Replacement
The cache is always smaller than main memory (otherwise, why have a cache?). For this reason, a memory block being placed into the cache must often replace a block that is already there. This process is called cache replacement, and the method used to choose which block to evict is the cache replacement policy.

Replacement Policy
- Direct mapped: no choice.
- Set associative:
  - Prefer a non-valid entry, if there is one.
  - Otherwise, choose among the entries in the set.
- Least-recently used (LRU): choose the entry unused for the longest time. Simple for 2-way, manageable for 4-way, too hard beyond that.
- Random: gives approximately the same performance as LRU for high associativity. (A victim-selection sketch follows.)
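A minimal sketch of this victim choice in Python (the CacheLine class and choose_victim function are illustrative names, not from the lecture): prefer an invalid way, otherwise evict the least-recently-used one.

```python
# Sketch of victim selection within one set of a set-associative cache.
# CacheLine and choose_victim are illustrative names, not from the lecture.
class CacheLine:
    def __init__(self):
        self.valid = False
        self.tag = None
        self.last_used = 0      # time of the most recent access to this line

def choose_victim(cache_set):
    """Return the index of the way to replace within this set."""
    # Prefer a non-valid entry, if there is one.
    for i, line in enumerate(cache_set):
        if not line.valid:
            return i
    # Otherwise evict the least-recently-used line.
    return min(range(len(cache_set)), key=lambda i: cache_set[i].last_used)

# 2-way example: both ways valid, way 0 touched longest ago.
ways = [CacheLine(), CacheLine()]
ways[0].valid, ways[0].last_used = True, 5
ways[1].valid, ways[1].last_used = True, 9
print(choose_victim(ways))      # 0
```

Tracking exact LRU order this way is cheap for 2-way and 4-way sets; hardware typically approximates it for higher associativity.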

The Dirty Bit and Replacement
Consider a cache line. If the valid bit V = 0, no data has ever been placed in that line, so it is an ideal place to put a new block. (This choice does not arise in a direct-mapped cache, where each block can go in only one line.) In some cache organizations, the dirty bit is used to select the line to replace when all lines have V = 1: if a line has D = 0 (is not dirty), its contents need not be written back to main memory to avoid data loss, so it is cheaper to evict.

Writing to a Cache
Suppose the CPU writes to memory. The data is sent to the cache, and what happens next depends on whether the target memory block is present there. If the block is present, there is a write hit: the block is updated and its dirty bit is set (D = 1). If the block is not present, a line is chosen for replacement, the target block is read into the cache, and the write then proceeds with D = 1. (This describes a write-back cache with write allocation; the alternatives are covered on the next slides.)

Write Policy
- Write-through: update both the upper and lower levels.
  - Simplifies replacement, but may require a write buffer.
- Write-back: update the upper level only.
  - Update the lower level when the block is replaced.
  - Needs to keep more state.
- Virtual memory: only write-back is feasible, given disk write latency.
(A small sketch contrasting the two policies follows.)
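To make the contrast concrete, here is a toy sketch (hypothetical sizes and names; write allocation is assumed on store misses) of a tiny direct-mapped cache counting how many writes reach the lower level under each policy.

```python
# Toy direct-mapped cache contrasting write-through and write-back traffic.
# Sizes, names, and the store stream are illustrative only.
NUM_BLOCKS = 4
BLOCK_BYTES = 16

def lower_level_writes(addresses, policy):
    """Count writes that reach the next level for a stream of store addresses."""
    tags  = [None] * NUM_BLOCKS
    dirty = [False] * NUM_BLOCKS
    writes = 0
    for addr in addresses:
        block = addr // BLOCK_BYTES
        index = block % NUM_BLOCKS
        tag   = block // NUM_BLOCKS
        if tags[index] != tag:                      # write miss: allocate the block
            if policy == "write-back" and dirty[index]:
                writes += 1                         # write the dirty victim back
            tags[index], dirty[index] = tag, False
        if policy == "write-through":
            writes += 1                             # every store updates the lower level
        else:
            dirty[index] = True                     # write-back: just mark the line dirty
    return writes

stores = [0, 4, 8, 12, 0, 4, 64, 0]                 # repeated stores to a few blocks
print(lower_level_writes(stores, "write-through"))  # 8
print(lower_level_writes(stores, "write-back"))     # 2 (two dirty victims evicted)
```

With write-through every store reaches the lower level, while write-back only writes when a dirty victim is evicted; in both cases a write buffer can hide the latency from the CPU.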

Cache Misses
- On a cache hit, the CPU proceeds normally.
- On a cache miss:
  - Stall the CPU pipeline.
  - Fetch the block from the next level of the hierarchy.
  - Instruction cache miss: restart the instruction fetch.
  - Data cache miss: complete the data access.

Sources of Misses
- Compulsory misses (aka cold-start misses): the first access to a block.
- Capacity misses: due to finite cache size; a replaced block is later accessed again.
- Conflict misses (aka collision misses): in a non-fully-associative cache, due to competition for entries in a set. These would not occur in a fully associative cache of the same total size.

Write-Through
- On a data-write hit, we could just update the block in the cache, but then the cache and memory would be inconsistent.
- Write-through: also update memory.
- But this makes writes take longer. For example, if the base CPI = 1, 10% of instructions are stores, and a write to memory takes 100 cycles:
  - Effective CPI = 1 + 0.1 × 100 = 11
- Solution: a write buffer.
  - Holds data waiting to be written to memory.
  - The CPU continues immediately; it stalls on a write only if the write buffer is already full.
(The CPI arithmetic is worked out below.)
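The arithmetic above, spelled out with the example's numbers:

```python
# Effective CPI for write-through with no write buffer, using the numbers above.
base_cpi   = 1.0
store_frac = 0.10      # 10% of instructions are stores
write_time = 100       # cycles for a write to reach memory

print(round(base_cpi + store_frac * write_time, 1))   # 11.0
```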

Write-Back
- Alternative: on a data-write hit, just update the block in the cache.
- Keep track of whether each block is dirty.
- When a dirty block is replaced:
  - Write it back to memory.
  - A write buffer can be used so the replacing block can be read first.

Write Allocation
What should happen on a write miss?
- Alternatives for write-through:
  - Allocate on miss: fetch the block.
  - Write around: don't fetch the block. (Programs often write a whole block before reading it, e.g., on initialization.)
- For write-back: usually fetch the block.

Cache Design Trade-offs

Design change            Effect on miss rate            Negative performance effect
Increase cache size      Decreases capacity misses      May increase access time
Increase associativity   Decreases conflict misses      May increase access time
Increase block size      Decreases compulsory misses    Increases miss penalty; for very large
                                                        block sizes, may increase miss rate
                                                        due to pollution

Block Size Considerations
- Larger blocks should reduce the miss rate, due to spatial locality.
- But in a fixed-size cache:
  - Larger blocks mean fewer of them; more competition means an increased miss rate.
  - Larger blocks also mean pollution.
- Larger miss penalty:
  - Can override the benefit of the reduced miss rate.
  - Early restart and critical-word-first can help.

Multilevel Caches
- The primary cache is attached to the CPU: small, but fast.
- The level-2 cache services misses from the primary cache: larger, slower, but still faster than main memory.
- Main memory services L2 cache misses.
- Some high-end systems include an L3 cache.

Multilevel Cache Example
Given:
- CPU base CPI = 1, clock rate = 4 GHz
- Miss rate per instruction = 2%
- Main memory access time = 100 ns
With just the primary cache:
- Miss penalty = 100 ns / 0.25 ns = 400 cycles
- Effective CPI = 1 + 0.02 × 400 = 9

Example (cont.)
Now add an L2 cache:
- Access time = 5 ns
- Global miss rate to main memory = 0.5%
Primary miss with L2 hit: penalty = 5 ns / 0.25 ns = 20 cycles.
Primary miss with L2 miss: extra penalty = 400 cycles.
CPI = 1 + 0.02 × 20 + 0.005 × 400 = 3.4
Performance ratio = 9 / 3.4 = 2.6 (worked out in the sketch below).
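A quick check of the two-level example's numbers, taken directly from the two slides above:

```python
# Reproducing the two-level cache example.
clock_ns         = 0.25     # 4 GHz clock -> 0.25 ns per cycle
base_cpi         = 1.0
l1_miss_rate     = 0.02     # primary misses per instruction
l2_access_ns     = 5
mem_access_ns    = 100
global_miss_rate = 0.005    # both L1 and L2 miss

l2_penalty  = l2_access_ns / clock_ns        # 20 cycles
mem_penalty = mem_access_ns / clock_ns       # 400 cycles

cpi_l1_only = base_cpi + l1_miss_rate * mem_penalty
cpi_with_l2 = base_cpi + l1_miss_rate * l2_penalty + global_miss_rate * mem_penalty

print(round(cpi_l1_only, 1))                 # 9.0
print(round(cpi_with_l2, 1))                 # 3.4
print(round(cpi_l1_only / cpi_with_l2, 1))   # 2.6
```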

Multilevel Cache Considerations
- Primary cache: focus on minimal hit time.
- L2 cache: focus on a low miss rate to avoid main memory accesses; hit time has less overall impact.
- Results:
  - The L1 cache is usually smaller than a single-level cache would be.
  - The L1 block size is smaller than the L2 block size.

Interactions with Advanced CPUs
- Out-of-order CPUs can execute instructions during a cache miss:
  - A pending store stays in the load/store unit.
  - Dependent instructions wait in reservation stations; independent instructions continue.
- The effect of a miss depends on the program's data flow:
  - Much harder to analyse; use system simulation.

Main Memory Supporting Caches
- DRAMs are used for main memory:
  - Fixed width (e.g., 1 word), connected by a fixed-width clocked bus.
  - The bus clock is typically slower than the CPU clock.
- Example cache block read:
  - 1 bus cycle for the address transfer
  - 15 bus cycles per DRAM access
  - 1 bus cycle per data transfer
- For a 4-word block and a 1-word-wide DRAM:
  - Miss penalty = 1 + 4 × 15 + 4 × 1 = 65 bus cycles
  - Bandwidth = 16 bytes / 65 cycles ≈ 0.25 B/cycle

Increasing Memory Bandwidth
- 4-word-wide memory:
  - Miss penalty = 1 + 15 + 1 = 17 bus cycles
  - Bandwidth = 16 bytes / 17 cycles ≈ 0.94 B/cycle
- 4-bank interleaved memory:
  - Miss penalty = 1 + 15 + 4 × 1 = 20 bus cycles
  - Bandwidth = 16 bytes / 20 cycles = 0.8 B/cycle
(All three organizations are compared in the sketch below.)
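The three organizations from the last two slides, computed side by side with the cycle costs given above:

```python
# Miss penalty and bandwidth for a 4-word (16-byte) block under the three
# memory organizations above: 1 bus cycle per address, 15 per DRAM access,
# 1 per word transferred on the bus.
ADDR, DRAM, XFER = 1, 15, 1
WORDS, BYTES = 4, 16

organizations = {
    "1-word-wide DRAM":   ADDR + WORDS * DRAM + WORDS * XFER,  # words fetched one at a time
    "4-word-wide memory": ADDR + DRAM + XFER,                  # whole block in one access
    "4-bank interleaved": ADDR + DRAM + WORDS * XFER,          # banks accessed in parallel
}
for name, cycles in organizations.items():
    print(name, cycles, "cycles,", round(BYTES / cycles, 2), "B/cycle")
# 1-word-wide DRAM   65 cycles, 0.25 B/cycle
# 4-word-wide memory 17 cycles, 0.94 B/cycle
# 4-bank interleaved 20 cycles, 0.8 B/cycle
```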

Advanced DRAM Organization
- Bits in a DRAM are organized as a rectangular array; a DRAM access reads an entire row.
- Burst mode: supply successive words from a row with reduced latency.
- Double data rate (DDR) DRAM: transfer on both the rising and falling clock edges.
- Quad data rate (QDR) DRAM: separate DDR inputs and outputs.

DRAM Generations

Year   Capacity   $/GB
1980   64 Kbit    $1,500,000
1983   256 Kbit   $500,000
1985   1 Mbit     $200,000
1989   4 Mbit     $50,000
1992   16 Mbit    $15,000
1996   64 Mbit    $10,000
1998   128 Mbit   $4,000
2000   256 Mbit   $1,000
2004   512 Mbit   $250
2007   1 Gbit     $50

(The accompanying chart plots the DRAM access times Trac and Tcac, in ns, over these same years.)

Measuring Cache Performance (Section 5.3: Measuring and Improving Cache Performance)
Components of CPU time:
- Program execution cycles (includes cache hit time)
- Memory stall cycles (mainly from cache misses)
With simplifying assumptions:

  Memory stall cycles = (Memory accesses / Program) × Miss rate × Miss penalty
                      = (Instructions / Program) × (Misses / Instruction) × Miss penalty

Cache Performance Example
Given:
- I-cache miss rate = 2%
- D-cache miss rate = 4%
- Miss penalty = 100 cycles
- Base CPI (ideal cache) = 2
- Loads and stores are 36% of instructions
Miss cycles per instruction:
- I-cache: 0.02 × 100 = 2
- D-cache: 0.36 × 0.04 × 100 = 1.44
Actual CPI = 2 + 2 + 1.44 = 5.44, so the ideal-cache CPU is 5.44 / 2 = 2.72 times faster (see the sketch below).
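The same bookkeeping in a few lines of Python, using the values from the example:

```python
# Miss-cycle bookkeeping for the example above.
icache_miss_rate = 0.02
dcache_miss_rate = 0.04
miss_penalty     = 100
base_cpi         = 2.0
loadstore_frac   = 0.36

i_stall = icache_miss_rate * miss_penalty                   # 2.0 cycles/instruction
d_stall = loadstore_frac * dcache_miss_rate * miss_penalty  # 1.44 cycles/instruction
actual_cpi = base_cpi + i_stall + d_stall

print(round(actual_cpi, 2))             # 5.44
print(round(actual_cpi / base_cpi, 2))  # 2.72x slower than with an ideal cache
```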

Average Access Time
- Hit time is also important for performance.
- Average memory access time (AMAT): AMAT = Hit time + Miss rate × Miss penalty
- Example: a CPU with a 1 ns clock, hit time = 1 cycle, miss penalty = 20 cycles, and I-cache miss rate = 5%:
  - AMAT = 1 + 0.05 × 20 = 2 ns, i.e., 2 cycles (computed below)
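And the AMAT calculation itself:

```python
# AMAT = hit time + miss rate x miss penalty, with the example's numbers.
hit_time  = 1        # cycles
miss_rate = 0.05
penalty   = 20       # cycles
clock_ns  = 1.0

amat_cycles = round(hit_time + miss_rate * penalty, 2)
print(amat_cycles, "cycles =", amat_cycles * clock_ns, "ns")   # 2.0 cycles = 2.0 ns
```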

Performance Summary
- As CPU performance increases, the miss penalty becomes more significant.
- Decreasing the base CPI means a greater proportion of time is spent on memory stalls.
- Increasing the clock rate means memory stalls account for more CPU cycles.
- Cache behavior cannot be neglected when evaluating system performance.

Cache Control (Section 5.7: Using a Finite State Machine to Control a Simple Cache)
Example cache characteristics:
- Direct-mapped, write-back, write allocate
- Block size: 4 words (16 bytes)
- Cache size: 16 KB (1024 blocks)
- 32-bit byte addresses
- Valid bit and dirty bit per block
- Blocking cache: the CPU waits until the access is complete
Address fields (see the sketch below):
- Tag: bits 31-14 (18 bits)
- Index: bits 13-4 (10 bits)
- Offset: bits 3-0 (4 bits)
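A small sketch of how this example cache splits a 32-bit byte address (the split_address helper and the sample address are made up for illustration):

```python
# Splitting a 32-bit byte address for the cache above:
# 4-bit block offset, 10-bit index, 18-bit tag.
OFFSET_BITS = 4      # 16-byte blocks
INDEX_BITS  = 10     # 1024 blocks

def split_address(addr):
    offset = addr & ((1 << OFFSET_BITS) - 1)
    index  = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag    = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

tag, index, offset = split_address(0x12345678)   # arbitrary sample address
print(hex(tag), hex(index), hex(offset))         # 0x48d1 0x167 0x8
```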

Interface Signals
The CPU-to-cache and cache-to-memory interfaces carry the same signals, but with different data widths:
- CPU <-> Cache: Read/Write, Valid, Address (32 bits), Write Data (32 bits), Read Data (32 bits), Ready
- Cache <-> Memory: Read/Write, Valid, Address (32 bits), Write Data (128 bits), Read Data (128 bits), Ready
Accesses take multiple cycles.

Interactions with Software
- Misses depend on memory access patterns:
  - Algorithm behavior
  - Compiler optimization for memory access

Multilevel On-Chip Caches (Section 5.10: Real Stuff: The AMD Opteron X4 and Intel Nehalem)
Intel Nehalem 4-core processor.
Per core: 32 KB L1 I-cache, 32 KB L1 D-cache, 256 KB L2 cache.

3-Level Cache Organization

L1 caches (per core):
- Intel Nehalem: L1 I-cache: 32 KB, 64-byte blocks, 4-way, approx. LRU replacement, hit time n/a. L1 D-cache: 32 KB, 64-byte blocks, 8-way, approx. LRU replacement, write-back/allocate, hit time n/a.
- AMD Opteron X4: L1 I-cache: 32 KB, 64-byte blocks, 2-way, LRU replacement, hit time 3 cycles. L1 D-cache: 32 KB, 64-byte blocks, 2-way, LRU replacement, write-back/allocate, hit time 9 cycles.

L2 unified cache (per core):
- Intel Nehalem: 256 KB, 64-byte blocks, 8-way, approx. LRU replacement, write-back/allocate, hit time n/a.
- AMD Opteron X4: 512 KB, 64-byte blocks, 16-way, approx. LRU replacement, write-back/allocate, hit time n/a.

L3 unified cache (shared):
- Intel Nehalem: 8 MB, 64-byte blocks, 16-way, replacement n/a, write-back/allocate, hit time n/a.
- AMD Opteron X4: 2 MB, 64-byte blocks, 32-way, replace the block shared by the fewest cores, write-back/allocate, hit time 32 cycles.

n/a: data not available

Virtual Memory and Cache Memory
Any modern computer supports both virtual memory and cache memory. Consider the following example, based on results from previous lectures:
- Byte-addressable memory.
- A 32-bit logical address, giving a logical address space of 2^32 bytes.
- 2^24 bytes of physical memory, requiring 24 bits to address.
- Virtual memory implemented using page sizes of 2^12 = 4096 bytes.
- Cache memory implemented as a fully associative cache with a cache line size of 16 bytes.

The Two Address Spaces
The 32-bit logical address is divided as follows:
- Page Number: bits 31-12 (20 bits)
- Offset in Page: bits 11-0 (12 bits, since a page holds 2^12 bytes)
The 24-bit physical address, as seen by the fully associative cache, is divided as follows:
- Memory Tag: bits 23-4 (20 bits)
- Offset: bits 3-0 (4 bits, since a cache line holds 16 bytes)
(The sketch below splits a sample address of each kind.)
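A sketch that splits a sample address of each kind using the field widths above (the helper names and the sample addresses are made up for illustration):

```python
# Splitting the two kinds of address in the example system:
# a 32-bit logical address with 4 KB pages, and a 24-bit physical address
# presented to a fully associative cache with 16-byte lines.
PAGE_OFFSET_BITS = 12    # 2^12 = 4096-byte pages
LINE_OFFSET_BITS = 4     # 16-byte cache lines

def split_logical(va):
    return va >> PAGE_OFFSET_BITS, va & ((1 << PAGE_OFFSET_BITS) - 1)

def split_physical(pa):
    return pa >> LINE_OFFSET_BITS, pa & ((1 << LINE_OFFSET_BITS) - 1)

page_number, page_offset = split_logical(0x00403A2C)   # sample logical address
tag, line_offset         = split_physical(0x09FA2C)    # sample physical address
print(hex(page_number), hex(page_offset))              # 0x403 0xa2c
print(hex(tag), hex(line_offset))                      # 0x9fa2 0xc
```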

VM and Cache: The Complete Process

The Virtually Mapped Cache

Problems with Virtually Mapped Caches
Cache misses still have to invoke the virtual memory system (more on that later); this is not a problem. The real problem with virtual mapping is that the translation from virtual addresses to physical addresses varies between processes, so the same virtual address can name different data in different processes, and two different virtual addresses can refer to the same physical data (the aliasing problem). A common solution is to extend the virtual address, and hence the cache tag, with a process id (see the sketch below).
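A toy illustration of that fix (purely illustrative names; real hardware typically does this with an address-space identifier, or ASID, stored alongside the tag): a virtually mapped cache keyed on both the process id and the virtual tag keeps lines from different processes apart.

```python
# Toy virtually mapped cache keyed on (process id, virtual tag), so that
# identical virtual tags from different processes land in different entries.
# Names and values are purely illustrative.
cache = {}   # (asid, virtual_tag) -> cached block

def fill(asid, virtual_tag, block):
    cache[(asid, virtual_tag)] = block

def lookup(asid, virtual_tag):
    return cache.get((asid, virtual_tag))   # None means a miss

fill(asid=1, virtual_tag=0x400, block="process 1 data")
fill(asid=2, virtual_tag=0x400, block="process 2 data")
print(lookup(1, 0x400))   # process 1 data
print(lookup(2, 0x400))   # process 2 data
```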