CS 152 Computer Architecture and Engineering. Lecture 6 - Memory


CS 152 Computer Architecture and Engineering, Lecture 6 - Memory. Krste Asanovic, Electrical Engineering and Computer Sciences, University of California at Berkeley. http://www.eecs.berkeley.edu/~krste http://inst.eecs.berkeley.edu/~cs152 February 7, 2011, CS152, Spring 2011

Last time in Lecture 5. Control hazards (branches, interrupts) are the most difficult to handle, as they change which instruction should be executed next. Speculation is commonly used to reduce the effect of control hazards (predict sequential fetch, predict no exceptions). Branch delay slots make the control hazard visible to software. Precise exceptions: stop cleanly on one instruction, with all previous instructions completed and no following instructions having changed architectural state. To implement precise exceptions in a pipeline, shift faulting instructions down the pipeline to the commit point, where exceptions are handled in program order.

Early Read-Only Memory Technologies. Punched cards, from the early 1700s through the Jacquard loom, Babbage, and then IBM. Diode matrix, EDSAC-2 microcode store. Punched paper tape, instruction stream in the Harvard Mk I. IBM Card Capacitor ROS. IBM Balanced Capacitor ROS.

Early Read/Write Main Memory Technologies. Babbage, 1800s: digits stored on mechanical wheels. Williams tube, Manchester Mark 1, 1947. Mercury delay line, Univac 1, 1951. Also, regenerative capacitor memory on the Atanasoff-Berry computer, and rotating magnetic drum memory on the IBM 650.

Core Memory. Core memory was the first large-scale reliable main memory, invented by Forrester in the late 40s/early 50s at MIT for the Whirlwind project. Bits were stored as magnetization polarity on small ferrite cores threaded onto a two-dimensional grid of wires. Coincident current pulses on the X and Y wires would write a cell and also sense its original state (destructive reads). Robust, non-volatile storage; used on space shuttle computers until recently. Cores were threaded onto wires by hand (25 billion a year at peak production). Core access time ~ 1 µs. [Figure: DEC PDP-8/E board, 4K words x 12 bits (1968).]

Semiconductor Memory. Semiconductor memory began to be competitive in the early 1970s; Intel was formed to exploit the market for semiconductor memory. Early semiconductor memory was Static RAM (SRAM); SRAM cell internals are similar to a latch (cross-coupled inverters). The first commercial Dynamic RAM (DRAM) was the Intel 1103: 1 Kbit of storage on a single chip, with charge on a capacitor used to hold each value. Semiconductor memory quickly replaced core in the 70s.

One-Transistor Dynamic RAM [Dennard, IBM]. [Figure: 1-T DRAM cell; an access transistor controlled by the word line connects the bit line to a storage capacitor (FET gate, trench, or stacked). Labels: V_REF, TiN top electrode (V_REF), Ta2O5 dielectric, poly-W bottom electrode.]

Modern DRAM Structure. [Figure: Samsung, sub-70nm DRAM, 2004.]

DRAM Architecture. [Figure: an N+M-bit address is split; N row-address bits feed a row address decoder that enables one of 2^N word lines, and M column-address bits feed the column decoder and sense amplifiers at the bottom of the bit lines (columns 1..N); each memory cell stores one bit, and selected bits appear on the data port D.] Bits are stored in 2-dimensional arrays on the chip. Modern chips have around 4-8 logical banks on each chip, with each logical bank physically implemented as many smaller arrays.
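To make the address split concrete, here is a small sketch (not from the slides); the field widths and bank count below are illustrative assumptions, not the geometry of any particular part.

```python
# Toy DRAM address decomposition: bank / row / column fields.
# Field widths (8 banks, 2^14 rows, 2^10 columns) are illustrative assumptions.
BANK_BITS, ROW_BITS, COL_BITS = 3, 14, 10

def split_dram_address(addr: int):
    """Split a flat word address into (bank, row, column) fields."""
    col  = addr & ((1 << COL_BITS) - 1)
    row  = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)
    bank = (addr >> (COL_BITS + ROW_BITS)) & ((1 << BANK_BITS) - 1)
    return bank, row, col

print(split_dram_address(0x012345))   # (0, 72, 837)
```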

DRAM Packaging (Laptops/Desktops/Servers). [Figure: a DRAM chip with ~7 clock and control signals, ~12 address lines carrying a multiplexed row/column address, and a data bus (4b, 8b, 16b, or 32b).] A DIMM (Dual Inline Memory Module) contains multiple chips with the clock/control/address signals connected in parallel (sometimes buffers are needed to drive the signals to all chips). The data pins work together to return a wide word (e.g., a 64-bit data bus using sixteen x4-bit parts).

DRAM Packaging, Mobile Devices. [Figures: Apple A4 package on circuit board; Apple A4 package cross-section, iFixit 2010.] Two stacked DRAM die; processor plus logic die.

DRAM Operation. There are three steps in a read/write access to a given bank. Row access (RAS): decode the row address and enable the addressed row (often multiple Kb in a row); the bitlines share charge with the storage cells, and the small change in voltage is detected by sense amplifiers, which latch the whole row of bits; the sense amplifiers then drive the bitlines full rail to recharge the storage cells. Column access (CAS): decode the column address to select a small number of sense amplifier latches (4, 8, 16, or 32 bits depending on the DRAM package); on a read, send the latched bits out to the chip pins; on a write, change the sense amplifier latches, which then charge the storage cells to the required value; multiple column accesses can be performed on the same row without another row access (burst mode). Precharge: charges the bit lines to a known value; required before the next row access. Each step has a latency of around 15-20 ns in modern DRAMs. Various DRAM standards (DDR, RDRAM) have different ways of encoding the signals for transmission to the DRAM, but all share the same core architecture.
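A back-of-the-envelope sketch of how these steps add up (not from the slides): it assumes an open-page policy and uses the ~15 ns per step figure quoted above; the policy and exact numbers are illustrative assumptions.

```python
# Rough DRAM access-latency model using the ~15 ns/step figure from the slide.
# Assumes an open-page policy: a row left latched in the sense amps serves
# further column accesses directly; a different row costs precharge + RAS + CAS.
T_RAS, T_CAS, T_PRE = 15, 15, 15   # ns per step (illustrative)

def access_latency(open_row, requested_row):
    if open_row == requested_row:          # row-buffer hit: column access only
        return T_CAS
    if open_row is None:                   # bank idle and already precharged
        return T_RAS + T_CAS
    return T_PRE + T_RAS + T_CAS           # row-buffer conflict

print(access_latency(7, 7))      # 15 ns  (hit in the open row, burst mode)
print(access_latency(None, 7))   # 30 ns  (row activate + column access)
print(access_latency(3, 7))      # 45 ns  (precharge, then activate the new row)
```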

Double-Data Rate (DDR2) DRAM. [Figure: command/data timing from a Micron 256Mb DDR2 SDRAM datasheet; with a 200 MHz clock, Row, Column, Precharge, and Row commands are followed by data transferred on both clock edges for a 400 Mb/s data rate.]

CPU-Memory Bottleneck. The performance of high-speed computers is usually limited by memory bandwidth and latency. Latency (time for a single access): memory access time >> processor cycle time. Bandwidth (number of accesses per unit time): if a fraction m of instructions access memory, there are 1+m memory references per instruction (one instruction fetch plus m data accesses), so CPI = 1 requires 1+m memory references per cycle (assuming a MIPS RISC ISA).
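A small worked example of the 1+m model (the value m = 0.3 is an illustrative assumption, not from the slide):

```python
# Memory traffic needed to sustain CPI = 1, per the slide's 1+m model.
m = 0.3                                  # assume 30% of instructions are loads/stores
refs_per_instruction = 1 + m             # one fetch + m data accesses
refs_per_cycle = refs_per_instruction    # at CPI = 1, one instruction completes per cycle
print(refs_per_cycle)                    # 1.3 memory references must be served every cycle
```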

Processor-DRAM Gap (latency). [Figure: log-scale performance vs. time, 1980-2000; microprocessor performance improves ~60%/year while DRAM improves ~7%/year, so the processor-memory performance gap grows ~50%/year.] A four-issue 3 GHz superscalar accessing 100 ns DRAM could execute 1,200 instructions during the time for one memory access!
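For reference, the 1,200-instruction figure is just the issue rate multiplied by the DRAM latency quoted on the slide:

```python
# 1,200 instruction slots lost per DRAM access = issue width * clock rate * latency.
clock_hz    = 3e9       # 3 GHz
issue_width = 4         # four-issue superscalar
dram_s      = 100e-9    # 100 ns DRAM access
print(clock_hz * issue_width * dram_s)   # 1200.0
```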

Physical Size Affects Latency. [Figure: a CPU next to a small memory vs. a CPU next to a big memory.] In a larger memory, signals have further to travel and must fan out to more locations.

Relative Memory Cell Sizes. [Figure: on-chip SRAM in a logic chip vs. DRAM on a memory chip; from Foss, "Implementing Application-Specific Memory," ISSCC 1996.]

Memory Hierarchy. [Figure: CPU connected to a small, fast memory (register file, SRAM) that holds frequently used data, backed by a big, slow memory (DRAM).] Capacity: Register << SRAM << DRAM. Latency: Register << SRAM << DRAM. Bandwidth: on-chip >> off-chip. On a data access: if the data is in the fast memory, it is a low-latency access (SRAM); if the data is not in the fast memory, it is a long-latency access (DRAM).
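The slide does not derive it, but the standard way to quantify the benefit of this arrangement is average memory access time (AMAT); the hit time, miss rate, and miss penalty below are illustrative assumptions, not numbers from the lecture.

```python
# Average memory access time for a two-level hierarchy (standard AMAT formula).
hit_time_ns  = 1      # fast SRAM access (assumed)
miss_penalty = 100    # slow DRAM access on a miss, ns (assumed)
miss_rate    = 0.05   # fraction of accesses not found in the fast memory (assumed)

amat = hit_time_ns + miss_rate * miss_penalty
print(amat)   # 6.0 ns on average, vs. 100 ns if every access went to DRAM
```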

CS152 Administrivia.

Management of Memory Hierarchy. Small/fast storage, e.g., registers: the address is usually specified in the instruction; generally implemented directly as a register file, but hardware might do things behind software's back, e.g., stack management or register renaming. Larger/slower storage, e.g., main memory: the address is usually computed from values in registers; generally implemented as a hardware-managed cache hierarchy, where hardware decides what is kept in fast memory, but software may provide hints, e.g., don't-cache or prefetch.

Real Memory Reference Patterns. [Figure: memory address (one dot per access) vs. time; from Donald J. Hatfield, Jeanette Gerald: Program Restructuring for Virtual Memory. IBM Systems Journal 10(3): 168-192 (1971).]

Typical Memory Reference Patterns. [Figure: address vs. time over n loop iterations, showing sequential instruction fetches, stack accesses around a subroutine call and return with argument accesses, and scalar data accesses.]

Common Predictable Patterns. Two predictable properties of memory references: Temporal locality: if a location is referenced, it is likely to be referenced again in the near future. Spatial locality: if a location is referenced, it is likely that locations near it will be referenced in the near future.
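A small illustrative loop (my own example, not from the slides) in which both properties appear:

```python
# An ordinary reduction loop exhibits both kinds of locality.
data = list(range(1024))

total = 0
for i in range(len(data)):   # 'total' and 'i' are reused every iteration: temporal locality
    total += data[i]         # data[0], data[1], ... are adjacent addresses: spatial locality
print(total)
```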

Memory Reference Patterns. [Figure: the same memory-address-vs.-time plot (one dot per access), annotated with regions of spatial locality and temporal locality; from Donald J. Hatfield, Jeanette Gerald: Program Restructuring for Virtual Memory. IBM Systems Journal 10(3): 168-192 (1971).]

Caches. Caches exploit both types of predictability: they exploit temporal locality by remembering the contents of recently accessed locations, and they exploit spatial locality by fetching blocks of data around recently accessed locations.

Inside a Cache. [Figure: the processor exchanges addresses and data with the cache, which in turn exchanges addresses and data with main memory. Each cache line holds a line address tag plus a data block of several data bytes, e.g., copies of main memory locations 100, 101, and so on.]

Cache Algorithm (Read). Look at the processor address and search the cache tags to find a match. Then either: found in cache, a.k.a. HIT: return a copy of the data from the cache; or not in cache, a.k.a. MISS: read the block of data from main memory, wait, then return the data to the processor and update the cache. Q: Which line do we replace?
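A minimal sketch of this read flow for a direct-mapped cache (my own illustration; the geometry and the dict-backed "main memory" are assumptions, not part of the lecture):

```python
# Minimal read path for a direct-mapped cache: hit returns cached data,
# miss fetches the whole block from "memory" and updates the cache.
NUM_LINES, WORDS_PER_LINE = 16, 4     # assumed geometry
cache = [{"valid": False, "tag": None, "data": None} for _ in range(NUM_LINES)]
memory = {}                           # word address -> value, standing in for DRAM

def read(addr):
    block, offset = divmod(addr, WORDS_PER_LINE)
    index, tag = block % NUM_LINES, block // NUM_LINES
    line = cache[index]
    if line["valid"] and line["tag"] == tag:                 # HIT
        return line["data"][offset]
    base = block * WORDS_PER_LINE                            # MISS: fetch whole block
    line.update(valid=True, tag=tag,
                data=[memory.get(base + i, 0) for i in range(WORDS_PER_LINE)])
    return line["data"][offset]
```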

Placement Policy. [Figure: memory blocks 0-31 mapping into an 8-block cache organized three ways: fully associative, 2-way set-associative (set numbers 0-3), and direct-mapped (blocks 0-7).] Memory block 12 can be placed: fully associative: anywhere; 2-way set-associative: anywhere in set 0 (12 mod 4); direct-mapped: only into block 4 (12 mod 8).
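The mod arithmetic from the slide's example, written out:

```python
# Where can memory block 12 go in an 8-block cache?
block, cache_blocks, ways = 12, 8, 2

print(block % cache_blocks)            # 4: the only choice when direct-mapped
print(block % (cache_blocks // ways))  # 0: the set it falls in when 2-way set-associative
# Fully associative: any of the 8 cache blocks.
```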

Direct-Mapped Cache. [Figure: the address splits into a t-bit tag, a k-bit index, and a b-bit block offset. The index selects one of 2^k lines, each holding a valid bit, a tag, and a data block; the stored tag is compared against the address tag to generate HIT, and the offset selects the data word or byte.]
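A sketch of the tag/index/offset split described above; the values k = 6 and b = 5 are arbitrary illustrative choices, not from the slide.

```python
# Split a byte address into (tag, index, block offset) for a direct-mapped cache.
k, b = 6, 5   # 2^6 = 64 lines, 2^5 = 32-byte blocks (illustrative assumptions)

def split_address(addr: int):
    offset = addr & ((1 << b) - 1)
    index  = (addr >> b) & ((1 << k) - 1)
    tag    = addr >> (b + k)
    return tag, index, offset

print(split_address(0x1A2B3C))   # (837, 25, 28)
```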

Direct Map Address Selection: higher-order vs. lower-order address bits. [Figure: the same direct-mapped organization, but with the k-bit index taken from the higher-order address bits and the t-bit tag from the bits between the index and the b-bit block offset; the index again selects one of 2^k lines, the tags are compared to generate HIT, and the offset selects the data word or byte.]

2-Way Set-Associative Cache. [Figure: the t-bit tag, k-bit index, and b-bit block offset address two parallel ways; each way holds a valid bit, tag, and data block. Both stored tags are compared against the address tag, the comparator outputs are combined to form HIT, and the matching way supplies the data word or byte.]

Fully Associative Cache. [Figure: there is no index field; the t-bit address tag is compared in parallel against the tag of every line (one comparator per line), HIT is asserted if any comparator matches, and the b-bit block offset selects the data word or byte from the matching line.]

Replacement Policy. In an associative cache, which block from a set should be evicted when the set becomes full? Random. Least-Recently Used (LRU): LRU cache state must be updated on every access; a true implementation is only feasible for small sets (2-way), and a pseudo-LRU binary tree is often used for 4-8 ways. First-In, First-Out (FIFO), a.k.a. Round-Robin: used in highly associative caches. Not-Most-Recently Used (NMRU): FIFO with an exception for the most-recently used block or blocks. This is a second-order effect. Why? Replacement only happens on misses.
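A minimal sketch of true LRU for a single set (my own illustration; the way count and tag values are assumptions), tracking recency as an ordered list:

```python
# True LRU state for one cache set, kept as a list ordered from LRU to MRU.
class LRUSet:
    def __init__(self, ways):
        self.ways = ways
        self.tags = []                  # index 0 = least recently used

    def access(self, tag):
        if tag in self.tags:            # hit: move tag to most-recently-used position
            self.tags.remove(tag)
        elif len(self.tags) == self.ways:
            evicted = self.tags.pop(0)  # miss with a full set: evict the LRU tag
            print("evict", evicted)
        self.tags.append(tag)

s = LRUSet(ways=2)
for t in ["A", "B", "A", "C"]:          # the final access evicts B, not A
    s.access(t)
```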

Block Size and Spatial Locality. The block is the unit of transfer between the cache and memory. [Figure: a line holding a tag plus Word0-Word3, a 4-word block, b=2.] The CPU address is split into a block address (32-b bits) and an offset (b bits); 2^b = block size, a.k.a. line size (in bytes). A larger block size has distinct hardware advantages: less tag overhead, and it exploits fast burst transfers from DRAM and fast burst transfers over wide busses. What are the disadvantages of increasing block size? Fewer blocks => more conflicts; it can also waste bandwidth.

Acknowledgements. These slides contain material developed and copyright by: Arvind (MIT), Krste Asanovic (MIT/UCB), Joel Emer (Intel/MIT), James Hoe (CMU), John Kubiatowicz (UCB), David Patterson (UCB). MIT material derived from course 6.823; UCB material derived from course CS252.