
1 Memory +

2 Table 4.1 Key Characteristics of Computer Memory Systems
Location: Internal (e.g., processor registers, cache, main memory); External (e.g., optical disks, magnetic disks, tapes)
Capacity: Number of words; number of bytes
Unit of Transfer: Word; block
Access Method: Sequential; direct; random; associative
Performance: Access time; cycle time; transfer rate
Physical Type: Semiconductor; magnetic; optical; magneto-optical
Physical Characteristics: Volatile/nonvolatile; erasable/nonerasable
Organization: Memory modules

+ Characteristics of Memory Systems 3
Location: Refers to whether memory is internal or external to the computer. Internal memory is often equated with main memory. The processor also requires its own local memory, in the form of registers, and cache is another form of internal memory. External memory consists of peripheral storage devices that are accessible to the processor via I/O controllers.
Capacity: Memory capacity is typically expressed in terms of bytes.
Unit of transfer: For internal memory, the unit of transfer is equal to the number of electrical lines into and out of the memory module.

Method of Accessing Units of Data 4
Sequential access: Memory is organized into units of data called records. Access must be made in a specific linear sequence. Access time is variable.
Direct access: Involves a shared read-write mechanism. Individual blocks or records have a unique address based on physical location. Access time is variable.
Random access: Each addressable location in memory has a unique, physically wired-in addressing mechanism. The time to access a given location is independent of the sequence of prior accesses and is constant. Any location can be selected at random and directly addressed and accessed. Main memory and some cache systems are random access.
Associative: A word is retrieved based on a portion of its contents rather than its address. Each location has its own addressing mechanism, and retrieval time is constant independent of location or prior access patterns. Cache memories may employ associative access.

Capacity and Performance: 5
Capacity and performance are the two most important characteristics of memory. Three performance parameters are used:
Access time (latency): For random-access memory, the time it takes to perform a read or write operation. For non-random-access memory, the time it takes to position the read-write mechanism at the desired location.
Memory cycle time: Access time plus any additional time required before a second access can commence. The additional time may be required for transients to die out on signal lines or to regenerate data if they are read destructively. Cycle time is concerned with the system bus, not the processor.
Transfer rate: The rate at which data can be transferred into or out of a memory unit. For random-access memory it is equal to 1/(cycle time).
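The relationship between these parameters can be made concrete with a small example. The C sketch below computes the cycle time and the resulting transfer rate for a hypothetical random-access memory using the 1/(cycle time) relation from the text; the 60 ns access time and 10 ns recovery time are assumed figures for illustration only.

```c
#include <stdio.h>

int main(void) {
    double access_time_ns   = 60.0;   /* time for one read or write (assumed figure)  */
    double recovery_time_ns = 10.0;   /* extra time before the next access can start  */

    /* Memory cycle time = access time + additional time before a second access. */
    double cycle_time_ns = access_time_ns + recovery_time_ns;

    /* For random-access memory, transfer rate = 1 / (cycle time). */
    double transfer_rate_mwords = 1000.0 / cycle_time_ns;   /* 10^9 ns/s / 10^6 = 1000 */

    printf("cycle time    = %.0f ns\n", cycle_time_ns);
    printf("transfer rate = %.1f million words/s\n", transfer_rate_mwords);
    return 0;
}
```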

+ Memory 6
The most common physical types are semiconductor memory, magnetic surface memory, optical, and magneto-optical.
Several physical characteristics of data storage are important:
Volatile memory: Information decays naturally or is lost when electrical power is switched off.
Nonvolatile memory: Once recorded, information remains without deterioration until deliberately changed; no electrical power is needed to retain the information.
Magnetic-surface memories are nonvolatile. Semiconductor memory may be either volatile or nonvolatile.
Nonerasable memory cannot be altered, except by destroying the storage unit. Semiconductor memory of this type is known as read-only memory (ROM).
For random-access memory, organization is a key design issue. Organization refers to the physical arrangement of bits to form words.

+ Memory Hierarchy 7
Design constraints on a computer's memory can be summed up by three questions: how much, how fast, how expensive. There is a trade-off among capacity, access time, and cost:
Faster access time, greater cost per bit
Greater capacity, smaller cost per bit
Greater capacity, slower access time
The way out of the memory dilemma is not to rely on a single memory component or technology, but to employ a memory hierarchy.
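To see why a hierarchy pays off, a short sketch helps. The C fragment below computes the average access time of a hypothetical two-level memory over a range of hit ratios; the 0.01 microsecond and 0.1 microsecond level times are assumed values chosen for illustration, not figures from the text.

```c
#include <stdio.h>

int main(void) {
    double t1 = 0.01;   /* access time of the faster level, in microseconds (assumed) */
    double t2 = 0.10;   /* access time of the slower level, in microseconds (assumed) */

    for (double h = 0.5; h <= 1.0001; h += 0.1) {
        /* A hit costs t1; a miss costs an access to both levels (t1 + t2). */
        double t_avg = h * t1 + (1.0 - h) * (t1 + t2);
        printf("hit ratio %.1f -> average access time %.3f us\n", h, t_avg);
    }
    return 0;
}
```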

8 Figure 4.1 The Memory Hierarchy: inboard memory (registers, cache, main memory); outboard storage (magnetic disk, CD-ROM, CD-RW, DVD-RW, DVD-RAM, Blu-ray); off-line storage (magnetic tape).

+ Memory 9
The use of three levels exploits the fact that semiconductor memory comes in a variety of types, which differ in speed and cost. Data are stored more permanently on external mass storage devices. External, nonvolatile memory is also referred to as secondary memory or auxiliary memory.
Disk cache: A portion of main memory can be used as a buffer to hold data temporarily that are to be read out to disk. A few large transfers of data can be used instead of many small transfers, and data can be retrieved rapidly from the software cache rather than slowly from the disk.

10 Figure 4.3 Cache and Main Memory: (a) single cache, with fast word transfers between the CPU and the cache and slower block transfers between the cache and main memory; (b) three-level cache organization, with the L1 cache fastest, L2 fast, L3 less fast, and main memory slow.

11 Figure 4.4 Cache/Main-Memory Structure: the cache consists of C lines, each holding a tag and a block of K words; main memory consists of 2^n addressable words organized into M blocks of K words each.

12 Figure 4.5 Cache Read Operation:
START: receive address RA from the CPU.
If the block containing RA is in the cache, fetch the RA word and deliver it to the CPU. DONE.
Otherwise, access main memory for the block containing RA, allocate a cache line for the main memory block, load the main memory block into the cache line, and deliver the RA word to the CPU. DONE.
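As a software analogue of this flow, the sketch below simulates the read operation for a tiny direct-mapped cache over an array standing in for main memory. The sizes, array names, and the choice of direct mapping are assumptions made for illustration; they are not taken from the figure.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define WORDS 1024          /* simulated main-memory size in words (assumed) */
#define LINES 8             /* number of cache lines (assumed)               */
#define BLOCK 4             /* words per block, i.e. per line (assumed)      */

static uint32_t memory[WORDS];          /* simulated main memory             */
static uint32_t cache[LINES][BLOCK];    /* cached blocks                     */
static uint32_t line_block[LINES];      /* which block each line holds       */
static bool     line_valid[LINES];

/* Receive address RA from the CPU and deliver the addressed word. */
static uint32_t cache_read(uint32_t ra) {
    uint32_t block = ra / BLOCK;        /* main-memory block containing RA   */
    uint32_t line  = block % LINES;     /* direct mapping: block -> one line */
    uint32_t word  = ra % BLOCK;        /* word offset within the block      */

    if (!line_valid[line] || line_block[line] != block) {   /* miss          */
        for (uint32_t i = 0; i < BLOCK; i++)                /* load block    */
            cache[line][i] = memory[block * BLOCK + i];
        line_block[line] = block;
        line_valid[line] = true;
    }
    return cache[line][word];           /* deliver the RA word to the CPU    */
}

int main(void) {
    for (uint32_t i = 0; i < WORDS; i++) memory[i] = i * 10;
    printf("%u %u %u\n", cache_read(5), cache_read(6), cache_read(5));
    return 0;
}
```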

13 Figure 4.6 Typical Cache Organization: the cache connects to the processor via data, control, and address lines, and to the system bus through address and data buffers, with the cache control logic between them.

14 Table 4.2 Elements of Cache Design
Cache Addresses: Logical; physical
Cache Size
Mapping Function: Direct; associative; set associative
Replacement Algorithm: Least recently used (LRU); first in first out (FIFO); least frequently used (LFU); random
Write Policy: Write through; write back
Line Size
Number of Caches: Single or two level; unified or split

+ Cache Addresses 15
Virtual memory: A facility that allows programs to address memory from a logical point of view, without regard to the amount of main memory physically available. When virtual memory is used, the address fields of machine instructions contain virtual addresses. For reads from and writes to main memory, a hardware memory management unit (MMU) translates each virtual address into a physical address in main memory.

16 Figure 4.7 Logical and Physical Caches: (a) with a logical cache, the processor presents a logical address directly to the cache, and the MMU translates it to a physical address only on the path to main memory; (b) with a physical cache, the MMU translates the logical address first, and the cache is accessed with the physical address.

Mapping Function 17
Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines. Three techniques can be used:
Direct: The simplest technique. Maps each block of main memory into only one possible cache line.
Associative: Permits each main memory block to be loaded into any line of the cache. The cache control logic interprets a memory address simply as a Tag and a Word field. To determine whether a block is in the cache, the cache control logic must simultaneously examine every line's tag for a match.
Set associative: A compromise that exhibits the strengths of both the direct and associative approaches while reducing their disadvantages.

18 Figure 4.8 Mapping from Main Memory to Cache: Direct and Associative. (a) Direct mapping: each of the first m blocks of main memory (equal to the size of the cache) maps to a fixed one of the m cache lines L0 through Lm-1. (b) Associative mapping: any one block of main memory can be loaded into any of the m cache lines. (b = length of block in bits, t = length of tag in bits.)

19 Figure 4.9 Direct-Mapping Cache Organization: the (s+w)-bit memory address is split into a tag (s-r bits), a line number (r bits), and a word offset (w bits); the line number selects a cache line, the stored tag is compared with the address tag, and a match signals a hit while a mismatch signals a miss and a main-memory access.

20 Figure 4.10 Direct Mapping Example: a 16-Kline cache and a 16-MByte main memory; the 24-bit main memory address is divided into an 8-bit tag, a 14-bit line number, and a 2-bit word field. For example, the address 000101100011001110011100 (tag 16h, line 0CE7h) holds the value FEDCBA98h. (Memory address values are in binary; other values are in hexadecimal.)

+ Direct Mapping Summary 21
Address length = (s + w) bits
Number of addressable units = 2^(s+w) words or bytes
Block size = line size = 2^w words or bytes
Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
Number of lines in cache = m = 2^r
Size of tag = (s - r) bits
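A quick way to check these relationships is to decompose an address in code. The sketch below uses the field widths of the direct mapping example above (w = 2 word bits, r = 14 line bits, 8 tag bits in a 24-bit address) and extracts the three fields with shifts and masks; the address is one of the binary values listed in Figure 4.10.

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    const unsigned w = 2;                    /* word-offset bits              */
    const unsigned r = 14;                   /* line-number bits              */
    uint32_t addr = 0x16339C;                /* 0001 0110 0011 0011 1001 1100 */

    uint32_t word = addr & ((1u << w) - 1);          /* low w bits            */
    uint32_t line = (addr >> w) & ((1u << r) - 1);   /* next r bits           */
    uint32_t tag  = addr >> (w + r);                 /* remaining s - r bits  */

    /* Prints tag=16 line=0CE7 word=0, matching the example's values. */
    printf("tag=%02X line=%04X word=%u\n", tag, line, word);
    return 0;
}
```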

+ Victim Cache 22
Originally proposed as an approach to reduce the conflict misses of direct-mapped caches without affecting their fast access time. It is a fully associative cache, typically 4 to 16 cache lines in size, residing between the direct-mapped L1 cache and the next level of memory.

23 Figure 4.11 Fully Associative Cache Organization: the (s+w)-bit memory address is interpreted as a tag (s bits) and a word offset (w bits); the address tag is compared simultaneously with the tag of every cache line, a match signals a hit, and a mismatch in all lines signals a miss and a main-memory access.

24 Figure 4.12 Associative Mapping Example: a 16-Kline cache and a 16-MByte main memory; the 24-bit main memory address is divided into a 22-bit tag and a 2-bit word field. For example, the address 000101100011001110011100 (tag 058CE7h) holds the value FEDCBA98h. (Memory address values are in binary; other values are in hexadecimal.)

+ Associative Mapping Summary 25
Address length = (s + w) bits
Number of addressable units = 2^(s+w) words or bytes
Block size = line size = 2^w words or bytes
Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
Number of lines in cache = undetermined
Size of tag = s bits

+ Set Associative Mapping 26
A compromise that exhibits the strengths of both the direct and associative approaches while reducing their disadvantages. The cache consists of a number of sets, and each set contains a number of lines. A given block maps to any line in a given set. For example, with 2 lines per set we have 2-way set-associative mapping: a given block can be in one of 2 lines, in only one set.

27 Figure 4.13 Mapping from Main Memory to Cache: k-Way Set Associative. (a) Viewed as v associative-mapped caches: each of the first v blocks of main memory (equal to the number of sets) maps to its own set of k lines. (b) Viewed as k direct-mapped caches of v lines each.

28 Figure 4.14 k-Way Set Associative Cache Organization: the (s+w)-bit memory address is split into a tag (s-d bits), a set number (d bits), and a word offset (w bits); the set number selects one set, and the address tag is compared with the tags of the k lines in that set to determine a hit or a miss.

+ Set Associative Mapping Summary 29
Address length = (s + w) bits
Number of addressable units = 2^(s+w) words or bytes
Block size = line size = 2^w words or bytes
Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
Number of lines in set = k
Number of sets = v = 2^d
Number of lines in cache = m = k * v = k * 2^d
Size of cache = k * 2^(d+w) words or bytes
Size of tag = (s - d) bits
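The same decomposition can be written for set-associative mapping. The sketch below assumes a two-way organization of the same 16-Kline cache used in the earlier examples, so k = 2, v = 2^13 sets, d = 13, w = 2, and the tag is s - d = 9 bits; these particular numbers are an illustrative assumption, not values given in the summary above.

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    const unsigned w = 2;                    /* word-offset bits                */
    const unsigned d = 13;                   /* set-number bits (v = 2^13 sets) */
    uint32_t addr = 0x16339C;                /* same 24-bit address as before   */

    uint32_t word = addr & ((1u << w) - 1);
    uint32_t set  = (addr >> w) & ((1u << d) - 1);
    uint32_t tag  = addr >> (w + d);         /* s - d = 9 tag bits              */

    /* Prints tag=02C set=0CE7 word=0 for this address. */
    printf("tag=%03X set=%04X word=%u\n", tag, set, word);
    return 0;
}
```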

Write Policy 30
When a block that is resident in the cache is to be replaced, there are two cases to consider: if the old block in the cache has not been altered, then it may be overwritten with a new block without first writing out the old block; if at least one write operation has been performed on a word in that line of the cache, then main memory must be updated by writing the line of cache out to the block of memory before bringing in the new block.
There are two problems to contend with: more than one device may have access to main memory; and a more complex problem occurs when multiple processors are attached to the same bus and each processor has its own local cache. If a word is altered in one cache, it could conceivably invalidate a word in other caches.

+ Write Through and Write Back 31
Write through: The simplest technique. All write operations are made to main memory as well as to the cache. The main disadvantage of this technique is that it generates substantial memory traffic and may create a bottleneck.
Write back: Minimizes memory writes. Updates are made only in the cache. Portions of main memory are therefore invalid, and hence accesses by I/O modules can be allowed only through the cache. This makes for complex circuitry and a potential bottleneck.
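The difference between the two policies can be sketched in a few lines of C. The structure, function names, and single-word line below are invented for illustration; real cache hardware applies the same idea per line using a dirty bit, as described above.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum policy { WRITE_THROUGH, WRITE_BACK };

struct line {
    uint32_t data;      /* one cached word (a real line holds a whole block) */
    bool     dirty;     /* set when the cached copy differs from memory      */
};

static uint32_t main_memory[16];

/* CPU writes a word that is already resident in the given cache line. */
static void cache_write(struct line *l, uint32_t addr, uint32_t value, enum policy p) {
    l->data = value;                       /* both policies update the cache        */
    if (p == WRITE_THROUGH)
        main_memory[addr] = value;         /* write through: memory updated at once */
    else
        l->dirty = true;                   /* write back: defer the memory update   */
}

/* When the line is replaced, only a dirty write-back line must be written out. */
static void cache_evict(struct line *l, uint32_t addr, enum policy p) {
    if (p == WRITE_BACK && l->dirty) {
        main_memory[addr] = l->data;
        l->dirty = false;
    }
}

int main(void) {
    struct line wb = {0};
    cache_write(&wb, 3, 42, WRITE_BACK);
    printf("before eviction: memory[3] = %u\n", main_memory[3]);  /* still stale */
    cache_evict(&wb, 3, WRITE_BACK);
    printf("after eviction:  memory[3] = %u\n", main_memory[3]);  /* now 42      */
    return 0;
}
```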

+ Multilevel Caches 32
As logic density has increased, it has become possible to have a cache on the same chip as the processor. The on-chip cache reduces the processor's external bus activity, speeds up execution, and increases overall system performance. When the requested instruction or data is found in the on-chip cache, the bus access is eliminated; on-chip cache accesses complete appreciably faster than would even zero-wait-state bus cycles, and during this period the bus is free to support other transfers.
Two-level cache: the internal cache is designated as level 1 (L1) and the external cache as level 2 (L2). The potential savings due to the use of an L2 cache depends on the hit rates in both the L1 and L2 caches. The use of multilevel caches complicates all of the design issues related to caches, including size, replacement algorithm, and write policy.

33 + Internal Memory

34 Figure 5.1 Memory Cell Operation: (a) to write, a select signal activates the cell and a control signal causes the data-in value to be stored; (b) to read, the select and control signals cause the cell's stored state to be placed on the sense line.

35 Table 5.1 Semiconductor Memory Types
Random-access memory (RAM): read-write memory; erased electrically, at byte level; written electrically; volatile
Read-only memory (ROM): read-only memory; erasure not possible; written by masks; nonvolatile
Programmable ROM (PROM): read-only memory; erasure not possible; written electrically; nonvolatile
Erasable PROM (EPROM): read-mostly memory; erased by UV light, at chip level; written electrically; nonvolatile
Electrically Erasable PROM (EEPROM): read-mostly memory; erased electrically, at byte level; written electrically; nonvolatile
Flash memory: read-mostly memory; erased electrically, at block level; written electrically; nonvolatile

+ Dynamic RAM (DRAM) 36
RAM technology is divided into two types: dynamic RAM (DRAM) and static RAM (SRAM).
DRAM is made with cells that store data as charge on capacitors. The presence or absence of charge in a capacitor is interpreted as a binary 1 or 0. It requires periodic charge refreshing to maintain data storage. The term dynamic refers to the tendency of the stored charge to leak away, even with power continuously applied.

37 Figure 5.2 Typical Memory Cell Structures: (a) a dynamic RAM (DRAM) cell uses a single transistor and a storage capacitor, selected by the address line and read or written via the bit line; (b) a static RAM (SRAM) cell uses a six-transistor flip-flop arrangement connected between the dc supply and ground, selected by the address line and accessed via the two bit lines.

+ Static RAM (SRAM) 38
A digital device that uses the same logic elements used in the processor. Binary values are stored using traditional flip-flop logic-gate configurations. An SRAM will hold its data as long as power is supplied to it.

+ SRAM versus DRAM 39
Both are volatile: power must be continuously supplied to the memory to preserve the bit values.
Dynamic cell: Simpler to build and smaller. More dense (smaller cells mean more cells per unit area). Less expensive. Requires the supporting refresh circuitry. Tends to be favored for large memory requirements. Used for main memory.
Static: Faster. Used for cache memory (both on and off chip).

+ Read-Only Memory (ROM) 40
Contains a permanent pattern of data that cannot be changed or added to. No power source is required to maintain the bit values in memory. The data or program is permanently in main memory and never needs to be loaded from a secondary storage device. The data are actually wired into the chip as part of the fabrication process.
Disadvantages of this: there is no room for error; if one bit is wrong, the whole batch of ROMs must be thrown out. The data insertion step also includes a relatively large fixed cost.

+ Programmable ROM (PROM) 41 Less expensive alternative Nonvolatile and may be written into only once Writing process is performed electrically and may be performed by supplier or customer at a time later than the original chip fabrication Special equipment is required for the writing process Provides flexibility and convenience Attractive for high volume production runs

Read-Mostly Memory 42
EPROM (erasable programmable read-only memory): The erasure process can be performed repeatedly. More expensive than PROM, but it has the advantage of the multiple-update capability.
EEPROM (electrically erasable programmable read-only memory): Can be written into at any time without erasing prior contents. Combines the advantage of nonvolatility with the flexibility of being updatable in place. More expensive than EPROM.
Flash memory: Intermediate between EPROM and EEPROM in both cost and functionality. Uses an electrical erasing technology but does not provide byte-level erasure; the microchip is organized so that a section of memory cells is erased in a single action, or "flash".

43 Figure 5.3 Typical 16-Mbit DRAM (4M x 4): the RAS, CAS, WE, and OE signals drive the timing and control logic; row and column address buffers feed the row and column decoders of a 2048 x 2048 x 4 memory array; a refresh counter and refresh circuitry regenerate the cells; and data input and output buffers carry the four data lines D1 to D4.

44 Figure 5.4 Typical Memory Package Pins and Signals: (a) an 8-Mbit EPROM packaged as a 1M x 8 device in a 32-pin DIP, with address lines A0 to A19, data lines D0 to D7, power pins (Vcc, Vpp, Vss), and chip enable (CE); (b) a 16-Mbit DRAM packaged as a 4M x 4 device in a 24-pin DIP, with multiplexed address lines A0 to A10, data lines D0 to D3, RAS, CAS, WE, OE, and power pins.

45 Figure 5.5 256-KByte Memory Organization: eight chips, each organized as 512 words by 512 bits; the memory address register (MAR) supplies two 9-bit fields, each decoded 1 of 512, and each chip contributes one bit of the 8-bit memory buffer register (MBR).

Interleaved Memory 46
Composed of a collection of DRAM chips grouped together to form a memory bank. Each bank is independently able to service a memory read or write request, so K banks can service K requests simultaneously, increasing memory read or write rates by a factor of K. If consecutive words of memory are stored in different banks, the transfer of a block of memory is speeded up.
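With low-order interleaving, consecutive word addresses rotate through the banks, which is what lets a block transfer keep all K banks busy. The short sketch below shows how a word address splits into a bank number and an address within that bank; the choice of K = 4 banks is an assumption for illustration.

```c
#include <stdint.h>
#include <stdio.h>

#define K 4                                /* number of independent banks (assumed) */

int main(void) {
    for (uint32_t addr = 0; addr < 8; addr++) {
        uint32_t bank   = addr % K;        /* consecutive words land in different banks */
        uint32_t offset = addr / K;        /* address presented to the selected bank    */
        printf("word %u -> bank %u, offset %u\n", addr, bank, offset);
    }
    return 0;
}
```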

47 Advanced DRAM Organization
One of the most critical system bottlenecks when using high-performance processors is the interface to main internal memory. The traditional DRAM chip is constrained both by its internal architecture and by its interface to the processor's memory bus. A number of enhancements to the basic DRAM architecture have been explored, including SDRAM, DDR-DRAM, and RDRAM. The schemes that currently dominate the market are SDRAM and DDR-DRAM.

Synchronous DRAM (SDRAM) 48 One of the most widely used forms of DRAM Exchanges data with the processor synchronized to an external clock signal and running at the full speed of the processor/memory bus without imposing wait states With synchronous access the DRAM moves data in and out under control of the system clock The processor or other master issues the instruction and address information which is latched by the DRAM The DRAM then responds after a set number of clock cycles Meanwhile the master can safely do other tasks while the SDRAM is processing


50 Table 5.3 SDRAM Pin Assignments
A0 to A12: Address inputs
BA0, BA1: Bank address lines
CLK: Clock input
CKE: Clock enable
CS: Chip select
RAS: Row address strobe
CAS: Column address strobe
WE: Write enable
DQ0 to DQ15: Data input/output
DQM: Data mask

+ Double Data Rate SDRAM (DDR SDRAM) 51
Developed by the JEDEC Solid State Technology Association (the Electronic Industries Alliance's semiconductor-engineering standardization body). Numerous companies make DDR chips, which are widely used in desktop computers and servers.
DDR achieves higher data rates in three ways: first, the data transfer is synchronized to both the rising and falling edges of the clock, rather than just the rising edge; second, DDR uses a higher clock rate on the bus to increase the transfer rate; third, a buffering scheme is used.

52 Table 5.4 DDR Characteristics
Prefetch buffer (bits): DDR1: 2; DDR2: 4; DDR3: 8; DDR4: 8
Voltage level (V): DDR1: 2.5; DDR2: 1.8; DDR3: 1.5; DDR4: 1.2
Front side bus data rates (Mbps): DDR1: 200-400; DDR2: 400-1066; DDR3: 800-2133; DDR4: 2133-4266
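The data rates in the table follow from the bus clock and the double-rate transfer. As a back-of-envelope sketch, the C fragment below computes the peak rate of a DDR3-1600 module on a 64-bit channel (800 MHz bus clock, data on both clock edges); the particular module and the channel width are assumptions chosen for illustration.

```c
#include <stdio.h>

int main(void) {
    double bus_clock_mhz      = 800.0;  /* I/O bus clock for DDR3-1600 (example)       */
    double transfers_per_clk  = 2.0;    /* rising and falling edge: "double data rate" */
    double bytes_per_transfer = 8.0;    /* 64-bit memory channel (assumed)             */

    double mtransfers_per_s = bus_clock_mhz * transfers_per_clk;     /* 1600 MT/s  */
    double mbytes_per_s     = mtransfers_per_s * bytes_per_transfer; /* 12800 MB/s */

    printf("%.0f MT/s, peak %.0f MB/s\n", mtransfers_per_s, mbytes_per_s);
    return 0;
}
```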

+ Flash Memory 53
Used both for internal memory and external memory applications. First introduced in the mid-1980s. Intermediate between EPROM and EEPROM in both cost and functionality. Uses an electrical erasing technology like EEPROM, but it is possible to erase just blocks of memory rather than an entire chip. Gets its name because the microchip is organized so that a section of memory cells is erased in a single action, or "flash". Does not provide byte-level erasure. Uses only one transistor per bit, so it achieves the high density of EPROM.