Multilevel Memories. Joel Emer, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology


Multilevel Memories. Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology. Based on material prepared by Krste Asanovic and Arvind.

CPU-Memory Bottleneck (6.823 L7-2)

The performance of high-speed computers is usually limited by memory bandwidth and latency.
- Latency (time for a single access): memory access time >> processor cycle time.
- Bandwidth (number of accesses per unit time): if a fraction m of instructions access memory, each instruction makes 1 + m memory references (one instruction fetch plus m data accesses), so CPI = 1 requires 1 + m memory references per cycle.

Core Memory (6.823 L7-3)

Core memory was the first large-scale reliable main memory, invented by Forrester in the late 1940s at MIT for the Whirlwind project.
- Bits stored as magnetization polarity on small ferrite cores threaded onto a 2-dimensional grid of wires.
- Coincident current pulses on the X and Y wires would write a cell and also sense its original state (destructive reads).
- Robust, non-volatile storage; used on Space Shuttle computers until recently.
- Cores threaded onto wires by hand (25 billion a year at peak production).
- Core access time ~1 µs.
[Image removed due to copyright restrictions: DEC PDP-8/E board, 4K words x 12 bits (1968).]

Semiconductor Memory, DRAM (6.823 L7-4)

Semiconductor memory began to be competitive in the early 1970s; Intel was formed to exploit the market for semiconductor memory.
- The first commercial DRAM was the Intel 1103: 1 Kbit of storage on a single chip, with the charge on a capacitor used to hold each value.
- Semiconductor memory quickly replaced core in the 1970s.

One-Transistor Dynamic RAM (6.823 L7-5)

[Figure removed due to copyright restrictions: a 1-T DRAM cell, consisting of a word-line access FET and an explicit storage capacitor (TiN top electrode at V_REF, Ta2O5 dielectric, poly-W bottom electrode); the capacitor can be built as the FET gate, as a trench, or as a stack.]

Performance: the Processor-DRAM Gap (latency) (6.823 L7-6)

[Figure removed: log-scale relative performance vs. time, 1980-2000. Processor performance improves ~60%/year (Moore's Law) while DRAM latency improves ~7%/year, so the processor-memory performance gap grows ~50%/year.]
A four-issue superscalar could execute 800 instructions during a cache miss! [From David Patterson, UC Berkeley]

Little's Law (6.823 L7-7)

Throughput (T) = Number in Flight (N) / Latency (L), applied here to the traffic between the CPU's misses-in-flight table and memory.
Example: assume infinite-bandwidth memory, 100 cycles per memory reference, and 1 + 0.2 memory references per instruction. To sustain CPI = 1, the table size must be N = T x L = 1.2 x 100 = 120 entries: 120 independent memory operations in flight!
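The slide's arithmetic in runnable form, a minimal sketch; the numbers are the slide's example values, and the variable names are mine:

```c
#include <stdio.h>

int main(void) {
    double m = 0.2;                     /* fraction of instructions with a data access */
    double refs_per_instr = 1.0 + m;    /* one fetch + m data references */
    double throughput = refs_per_instr; /* refs/cycle needed for CPI = 1 */
    double latency = 100.0;             /* cycles per memory reference */
    /* Little's Law: N = T * L */
    printf("ops in flight: N = %.0f\n", throughput * latency);  /* 120 */
    return 0;
}
```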

DRAM Architecture (6.823 L7-8)

[Figure removed: an (N+M)-bit address splits into N bits for the row address decoder, which enables one of the word lines (Row 1, Row 2, ...), and M bits for the column decoder and sense amplifiers, which select among the bit lines (Col. 1, Col. 2, ...); each memory cell stores one bit, and the selected data D leaves the array.]
- Bits are stored in 2-dimensional arrays on chip.
- Modern chips have around 4 logical banks per chip, each logical bank physically implemented as many smaller arrays.
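A minimal sketch of the row/column split the figure implies; the 10-bit column width and the example address are assumptions for illustration, not from the slide:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    const int M = 10;                        /* assumed column-address bits */
    uint32_t addr = 0x2F3A7;                 /* example (N+M)-bit address */
    uint32_t col  = addr & ((1u << M) - 1);  /* low M bits -> column decoder */
    uint32_t row  = addr >> M;               /* high N bits -> row decoder */
    printf("row=%u col=%u\n", (unsigned)row, (unsigned)col);
    return 0;
}
```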

DRAM Operation (6.823 L7-9)

Three steps in a read/write access to a given bank:
- Row access (RAS): decode the row address and enable the addressed row (often multiple Kb in a row). The bitlines share charge with the storage cells; the small change in voltage is detected by sense amplifiers, which latch the whole row of bits and then drive the bitlines full rail to recharge the storage cells.
- Column access (CAS): decode the column address to select a small number of the sense-amplifier latches (4, 8, 16, or 32 bits depending on the DRAM package). On a read, send the latched bits out to the chip pins; on a write, change the sense-amplifier latches, which then charge the storage cells to the required value. Multiple column accesses can be performed on the same row without another row access (burst mode).
- Precharge: charge the bit lines to a known value; required before the next row access.

Each step has a latency of around 20 ns in modern DRAMs. Various DRAM standards (DDR, RDRAM) have different ways of encoding the signals for transmission to the DRAM, but all share the same core architecture.
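A toy latency model of these three steps, assuming the slide's ~20 ns per step and one open row per bank; the row-hit case corresponds to the slide's burst mode:

```c
#include <stdio.h>

typedef struct { int open_row; } Bank;             /* -1 = no row open */

int access_ns(Bank *b, int row) {
    const int t_ras = 20, t_cas = 20, t_pre = 20;  /* ~20 ns each, per slide */
    if (b->open_row == row)
        return t_cas;                              /* burst mode: CAS only */
    b->open_row = row;
    return t_pre + t_ras + t_cas;                  /* precharge, open row, CAS */
}

int main(void) {
    Bank b = { -1 };
    printf("row miss: %d ns\n", access_ns(&b, 5)); /* 60 */
    printf("row hit:  %d ns\n", access_ns(&b, 5)); /* 20 */
    return 0;
}
```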

Multilevel Memory

Strategy: hide latency using small, fast memories called caches. Caches are a mechanism to hide memory latency, based on the empirical observation that the patterns of memory references made by a processor are often highly predictable:

    PC    96   loop: ADD  r2, r1, r1
         100         SUBI r3, r3, #1
         104         BNEZ r3, loop
         108
         112

What is the pattern of instruction memory addresses?

Typical Memory Reference Patterns

[Figure removed: a plot of address vs. time showing three typical patterns: instruction fetches (a linear sequence repeated over n loop iterations), stack accesses, and data accesses.]

Common Predictable Patterns

Two predictable properties of memory references:
- Temporal locality: if a location is referenced, it is likely to be referenced again in the near future.
- Spatial locality: if a location is referenced, it is likely that locations near it will be referenced in the near future.

Caches

Caches exploit both types of predictability:
- Exploit temporal locality by remembering the contents of recently accessed locations.
- Exploit spatial locality by fetching blocks of data around recently accessed locations.
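A small illustration of why spatial locality matters to a cache; the array size and the reliance on C's row-major layout are my assumptions, not the slide's. Summing in row order touches consecutive bytes, so every fetched block is fully used; summing in column order strides a whole row per access:

```c
#include <stdio.h>

#define N 1024
static double a[N][N];        /* C arrays are row-major */

double sum_rows(void) {       /* stride-1: good spatial locality */
    double s = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += a[i][j];
    return s;
}

double sum_cols(void) {       /* stride-N: each access lands in a new block */
    double s = 0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += a[i][j];
    return s;
}

int main(void) {
    printf("%f %f\n", sum_rows(), sum_cols());
    return 0;
}
```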

Memory Hierarchy (6.823 L7-14)

[Figure removed: the CPU talks to a small, fast memory (register file, SRAM) over path A; the fast memory, which holds frequently used data, talks to a big, slow memory (DRAM) over path B.]
- size: register file << SRAM << DRAM. Why?
- latency: register file << SRAM << DRAM. Why?
- bandwidth: on-chip >> off-chip. Why?

On a data access:
- hit (data is in the fast memory): low-latency access.
- miss (data is not in the fast memory): long-latency access to DRAM.

The fast memory is effective only if the bandwidth required on path B is much lower than on path A.

Management of Memory Hierarchy (6.823 L7-15)

- Small/fast storage, e.g., registers: the address is usually specified in the instruction; generally implemented directly as a register file, though hardware might do things behind software's back, e.g., stack management or register renaming.
- Large/slower storage, e.g., memory: the address is usually computed from values in registers; generally implemented as a cache hierarchy, where hardware decides what is kept in fast memory, but software may provide hints, e.g., "don't cache" or prefetch.

A Typical Memory Hierarchy, c. 2003 (6.823 L7-16)

[Figure removed: the CPU and its multiported register file (part of the CPU) feed split instruction and data primary caches (on-chip SRAM), which share a large unified secondary L2 cache (also on-chip SRAM); the L2 connects to multiple interleaved memory banks (DRAM).]

Workstation Memory System (Apple PowerMac G5, 2003) (6.823 L7-17)

[Image removed due to copyright restrictions. To view the image, visit http://www.apple.com/powermac/pciexpress.html]
- Dual 2 GHz processors, each with 64 KB I-cache, 32 KB D-cache, and 512 KB unified L2 cache.
- Frontside buses to the North Bridge chip: 1 GHz, 2 x 32-bit, 16 GB/s.
- Up to 8 GB DRAM: 400 MHz, 128-bit bus, 6.4 GB/s.
- AGP graphics card: 533 MHz, 32-bit bus, 2.1 GB/s.
- PCI-X expansion: 133 MHz, 64-bit bus, 1 GB/s.

Five-minute break to stretch your legs.

Inside a Cache

[Figure removed: the processor sends an address to the cache and exchanges data with it; the cache in turn exchanges addresses and data with main memory. Each cache line pairs a line address tag (e.g., 100 or 6848) with a data block of several data bytes, holding a copy of the main-memory locations at that address (e.g., locations 100, 101, ...).]

Cache Algorithm (Read)

Look at the processor address and search the cache tags to find a match. Then either:
- Found in cache, a.k.a. HIT: return the copy of the data from the cache.
- Not in cache, a.k.a. MISS: read the block of data from main memory, wait, then return the data to the processor and update the cache.

Q: Which line do we replace?

Placement Policy (6.823 L7-21)

[Figure removed: memory blocks numbered 0-31 mapping into an 8-block cache organized three ways: fully associative, 2-way set-associative (set numbers 0-3), and direct-mapped (block numbers 0-7).]
Where can memory block 12 be placed?
- Fully associative: anywhere in the cache.
- 2-way set-associative: anywhere in set 0 (12 mod 4).
- Direct-mapped: only in block 4 (12 mod 8).
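The slide's mapping rules as a quick calculation, using the figure's configuration (8 cache blocks, 4 sets):

```c
#include <stdio.h>

int main(void) {
    int block = 12;
    int cache_blocks = 8, sets = 4;   /* the slide's configuration */
    printf("direct-mapped:     block %d (12 mod 8)\n", block % cache_blocks);
    printf("2-way set-assoc.:  set   %d (12 mod 4)\n", block % sets);
    printf("fully associative: any of the %d blocks\n", cache_blocks);
    return 0;
}
```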

Direct-Mapped Cache

[Figure removed: the address splits into a tag (t bits), an index (k bits), and a block offset (b bits). The index selects one of 2^k cache lines; each line holds a valid bit, a tag, and a data block. The stored tag is compared with the address tag; a match signals HIT, and the offset selects the data word or byte.]
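A minimal sketch of this lookup in C; the field widths (64-byte blocks, 1024 lines) and the refill in main are illustrative assumptions, not from the slide:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Address layout: | tag | index (K bits) | offset (B bits) | */
enum { B = 6, K = 10 };   /* assumed: 64-byte blocks, 1024 lines */

typedef struct { bool valid; uint32_t tag; uint8_t data[1 << B]; } Line;
static Line cache[1 << K];

bool read_byte(uint32_t addr, uint8_t *out) {
    uint32_t offset = addr & ((1u << B) - 1);
    uint32_t index  = (addr >> B) & ((1u << K) - 1);
    uint32_t tag    = addr >> (B + K);
    Line *l = &cache[index];              /* exactly one candidate line */
    if (l->valid && l->tag == tag) {      /* HIT: one tag compare */
        *out = l->data[offset];
        return true;
    }
    return false;                         /* MISS: caller must refill */
}

int main(void) {
    uint8_t v;
    uint32_t addr = 0x12345;
    if (!read_byte(addr, &v)) {           /* miss: fake a refill, then retry */
        Line *l = &cache[(addr >> B) & ((1u << K) - 1)];
        l->valid = true;
        l->tag   = addr >> (B + K);       /* real hardware also copies the block */
    }
    printf("hit after refill: %d\n", read_byte(addr, &v));
    return 0;
}
```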

Direct-Mapped Address Selection: higher-order vs. lower-order address bits

[Figure removed: the same direct-mapped structure as the previous slide, but with the index taken from the higher-order address bits and the tag from the lower-order bits.]

2-Way Set-Associative Cache

[Figure removed: the address splits into tag (t bits), index (k bits), and block offset (b bits). The index selects one set, i.e., one line in each of the two ways; both stored tags are compared with the address tag in parallel, and a match in either way signals HIT and selects that way's data word or byte.]

Fully Associative Cache

[Figure removed: there is no index field; the address is just a tag plus a block offset. The address tag is compared in parallel against the stored tag of every line, and any match signals HIT and selects that line's data word or byte.]

Replacement Policy (6.823 L7-26)

In an associative cache, which block from a set should be evicted when the set becomes full?
- Random.
- Least Recently Used (LRU): LRU cache state must be updated on every access; a true implementation is only feasible for small sets (2-way), and a pseudo-LRU binary tree is often used for 4- to 8-way.
- First In, First Out (FIFO), a.k.a. round-robin: used in highly associative caches.
- Not Least Recently Used (NLRU): FIFO with an exception for the most recently used block.

Replacement is a second-order effect. Why?
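A sketch of why true LRU is cheap at 2-way: one bit per set suffices to name the least-recently-used way. The structures are hypothetical, for illustration only:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct { bool valid; uint32_t tag; } Way;
typedef struct { Way way[2]; int lru; } Set;  /* lru = least-recently-used way */

int pick_victim(const Set *s) {
    if (!s->way[0].valid) return 0;   /* fill an invalid way first */
    if (!s->way[1].valid) return 1;
    return s->lru;                    /* otherwise evict the LRU way */
}

void touch(Set *s, int w) {           /* must run on every access */
    s->lru = 1 - w;                   /* the other way is now LRU */
}

int main(void) {
    Set s = { .way = { { true, 7 }, { true, 9 } }, .lru = 0 };
    touch(&s, 0);                                /* way 0 just used */
    printf("victim: way %d\n", pick_victim(&s)); /* -> way 1 */
    return 0;
}
```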

Block Size and Spatial Locality (6.823 L7-27)

A block is the unit of transfer between the cache and memory. [Figure removed: a 4-word block (b = 2) holding a tag and Word0-Word3.]
Split the CPU address into a block address (32 - b bits) and an offset (b bits); 2^b = block size, a.k.a. line size, in bytes.
A larger block size has distinct hardware advantages:
- less tag overhead;
- exploits fast burst transfers from DRAM;
- exploits fast burst transfers over wide busses.

What are the disadvantages of increasing block size?
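A quick sketch of the "less tag overhead" point: for a fixed-capacity direct-mapped cache (64 KB is my assumption), doubling the block size halves the number of lines and so roughly halves the total tag storage:

```c
#include <stdio.h>

static int log2i(int x) { int n = 0; while (x > 1) { x >>= 1; n++; } return n; }

int main(void) {
    const int addr_bits = 32;
    const int cache_bytes = 64 * 1024;             /* assumed 64 KB, direct-mapped */
    for (int block = 16; block <= 256; block *= 2) {
        int lines = cache_bytes / block;
        int b = log2i(block);                      /* offset bits */
        int k = log2i(lines);                      /* index bits */
        int tag = addr_bits - k - b;
        long tag_store = (long)lines * (tag + 1);  /* +1 valid bit per line */
        printf("block=%3dB  lines=%5d  tag=%2db  tag store=%6ld bits\n",
               block, lines, tag, tag_store);
    }
    return 0;
}
```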

Average Cache Read Latency

Let α be the HIT RATIO (the fraction of references found in the cache), so 1 - α is the MISS RATIO (the remaining references); let t_c be the cache access time and t_m the main-memory access time.
- Serial search (check the cache first, go to main memory only on a miss): average access time = t_c + (1 - α) t_m.
- Parallel search (access the cache and main memory simultaneously): average access time = α t_c + (1 - α) t_m.

t_c is smallest for which type of cache?
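The two formulas evaluated side by side; α, t_c, and t_m are assumed values for illustration, not from the slide:

```c
#include <stdio.h>

int main(void) {
    double alpha = 0.95;   /* hit ratio */
    double tc = 2.0;       /* cache access time, ns */
    double tm = 100.0;     /* main-memory access time, ns */
    printf("serial search:   %.1f ns\n", tc + (1 - alpha) * tm);         /* 7.0 */
    printf("parallel search: %.1f ns\n", alpha * tc + (1 - alpha) * tm); /* 6.9 */
    return 0;
}
```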

Improving Cache Performance (6.823 L7-29)

Average memory access time = hit time + miss rate x miss penalty.
To improve performance:
- reduce the miss rate (e.g., a larger cache);
- reduce the miss penalty (e.g., an L2 cache);
- reduce the hit time.

What is the simplest design strategy?
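A small worked example of the formula (all numbers assumed for illustration), showing how an L2 cache shrinks the penalty that L1 misses see:

```c
#include <stdio.h>

int main(void) {
    double hit_time = 1.0, miss_rate = 0.05, mem_penalty = 100.0; /* cycles */
    printf("AMAT, one level: %.1f cycles\n",
           hit_time + miss_rate * mem_penalty);                   /* 6.0 */
    /* An L2 replaces the full memory penalty with a shorter one
       for the fraction of L1 misses that hit in L2: */
    double l2_hit = 10.0, l2_local_miss = 0.4;    /* of L1 misses */
    double penalty = l2_hit + l2_local_miss * mem_penalty;        /* 50 */
    printf("AMAT with L2:    %.1f cycles\n",
           hit_time + miss_rate * penalty);                       /* 3.5 */
    return 0;
}
```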

Write Performance (6.823 L7-30)

[Figure removed: the direct-mapped cache datapath from before, extended with a write-enable (WE) signal on the data array; the tag must be checked (HIT) before the write can be enabled.]

Write Policy (6.823 L7-31)

Cache hit:
- Write-through: write both the cache and memory. Generally higher traffic, but simplifies cache coherence.
- Write-back: write the cache only; memory is written only when the entry is evicted. A dirty bit per block can further reduce the traffic.

Cache miss:
- No-write-allocate: write only to main memory.
- Write-allocate (a.k.a. fetch-on-write): fetch the block into the cache.

Common combinations: write-through with no-write-allocate, and write-back with write-allocate.
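A minimal sketch of the two hit policies; the single-line cache, the tag-as-block-address simplification, and the memory_write stand-in are my assumptions, not a full simulator:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct { bool valid, dirty; uint32_t tag; uint8_t data[64]; } Line;

static void memory_write(uint32_t addr, uint8_t v) {  /* stand-in for DRAM */
    (void)addr; (void)v;
}

/* Write-through: update the cache copy and always write memory too. */
void write_through(Line *l, uint32_t addr, uint8_t v) {
    if (l->valid && l->tag == addr / 64) l->data[addr % 64] = v;
    memory_write(addr, v);        /* higher traffic, memory always current */
}

/* Write-back: update the cache only and mark the line dirty;
   memory is written only when the line is evicted. */
void write_back(Line *l, uint32_t addr, uint8_t v) {
    if (l->valid && l->tag == addr / 64) {
        l->data[addr % 64] = v;
        l->dirty = true;
    } /* on a miss, write-allocate would first fetch the block */
}

void evict(Line *l) {
    if (l->valid && l->dirty)     /* dirty bit avoids needless writebacks */
        for (int i = 0; i < 64; i++)
            memory_write(l->tag * 64 + i, l->data[i]);
    l->valid = false;
}

int main(void) {
    Line l = { .valid = true, .tag = 1 };
    write_back(&l, 64 + 3, 0xAB); /* hits line with tag 1, sets dirty */
    evict(&l);                    /* dirty line written back on eviction */
    printf("done\n");
    return 0;
}
```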

Thank you!

Backup

DRAM Packaging (6.823 L7-34)

[Figure removed: a DRAM chip with ~7 clock and control signals, ~12 multiplexed row/column address lines, and a 4-, 8-, 16-, or 32-bit data bus.]
- A DIMM (Dual Inline Memory Module) contains multiple chips with the clock/control/address signals connected in parallel (sometimes buffers are needed to drive the signals to all chips).
- The data pins work together to return a wide word (e.g., a 64-bit data bus using 16 x 4-bit parts).
[Images removed due to copyright restrictions: 72-pin SO-DIMM and 168-pin DIMM.]

Double-Data-Rate (DDR2) DRAM (6.823 L7-35)

[Figure removed for copyright reasons. Source: Micron 256 Mb DDR2 SDRAM datasheet, Bank Read mode, p. 44 of the Micron Synchronous DRAM specification.]