Memory Hierarchy

Contents: Memory System Overview, Cache Memory, Internal Memory, External Memory, Virtual Memory.

Memory Hierarchy: registers in the CPU; internal or main memory (cache, RAM); external memory (backing store).

Memory Hierarchy. Going down the hierarchy: cost per bit decreases, capacity increases, access time increases, and frequency of access by the processor decreases. Why a memory hierarchy: the memory-processor performance gap. The hierarchy works because of locality. Temporal locality: currently required data are likely to be needed again in the near future. Spatial locality: there is a high probability that other data nearby will be needed soon.
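A minimal sketch of the two kinds of locality in an ordinary loop (the array and its size are illustrative, not from the slides):

# Summing a list: the running total shows temporal locality (reused on every
# iteration) and the elements show spatial locality (adjacent memory locations
# accessed in order), which is exactly the behaviour a cache rewards.
data = list(range(1_000_000))

total = 0
for x in data:      # sequential accesses -> spatial locality
    total += x      # 'total' reused every iteration -> temporal locality

print(total)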

Cache Memory. A small amount of fast memory that sits between normal main memory and the CPU; it may be located on the CPU chip or on a module. Caches in processors: AMD Athlon 64 X2 (dual core), Intel Tukwila (four cores). Typical cache organization.

Cache operation. The CPU requests the contents of a memory location and the cache is checked for this data. If present, the data is fetched from the cache (fast). If not present, the required block is read from main memory into the cache and then delivered from the cache to the CPU.

Average memory access time = Hit time(L1) + Miss rate(L1) x Miss penalty(L1), where Miss penalty(L1) = Hit time(L2) + Miss rate(L2) x Miss penalty(L2).

Example: two levels of memory, level 1 access time 0.01 us, level 2 access time 0.1 us, and 95% of memory accesses are found in level 1. What is the average memory access time for each word?

Cache design: size, mapping function, replacement algorithm, write policy, block size, number of caches.
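A minimal sketch of the two-level calculation in the example above, assuming the level 2 access time is the whole level 1 miss penalty:

# Average memory access time for the example: 0.01 us L1, 0.1 us L2, 95% L1 hits.
hit_time_l1 = 0.01       # us
miss_penalty_l1 = 0.1    # us, level 2 access time
miss_rate_l1 = 1 - 0.95  # 95% of accesses hit in level 1

avg_access_time = hit_time_l1 + miss_rate_l1 * miss_penalty_l1
print(f"{avg_access_time:.3f} us")   # 0.015 us per word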

Size does matter. Cost: more cache is more expensive. Speed: more cache is faster, but only up to a point, because checking the cache for data takes time. Mapping: memory blocks are mapped to cache lines (blocks) and identified by a tag. Mapping functions: direct, associative, set associative.

Mapping example: address structure. The address is in two parts (s + w bits). The most significant s bits specify one memory block (the block address: which block should be read or written); for direct mapping the s bits are split into a cache line field of r bits and a tag of s - r bits (most significant). The least significant w bits identify a unique byte (the offset within the block: which byte should be read or written).

Direct Mapping. Each block of main memory maps to only one cache line, i.e. if a block is in the cache, it must be in one specific place. How: i = j modulo m, where i is the cache line number, j is the memory block number, and m is the total number of lines in the cache.

Example: a cache of 64 KBytes with cache blocks of 4 bytes, i.e. the cache is 16K (2^14) lines of 4 bytes. Main memory is 16 MBytes, addressed with 24 bits (2^24 = 16M), giving 4M memory blocks.

Direct mapping address structure: Tag (s - r) = 8 bits, Line index (r) = 14 bits, Word (w) = 2 bits. In the 24-bit address, the 2-bit word identifier selects a byte within the 4-byte block; the remaining 22 bits form the block identifier, split into a 14-bit line field (the cache has 2^14 lines, equal to the number of blocks it can hold) and an 8-bit tag (= 22 - 14). Direct mapping cache organization.
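A minimal sketch of how a 24-bit address is split for this direct-mapped example (field widths from the slide; the address value is made up for illustration):

# Direct mapping: 24-bit address = 8-bit tag | 14-bit line index | 2-bit byte offset.
WORD_BITS = 2      # 4-byte blocks
LINE_BITS = 14     # 2^14 = 16K cache lines

def split_direct(addr):
    word = addr & ((1 << WORD_BITS) - 1)
    line = (addr >> WORD_BITS) & ((1 << LINE_BITS) - 1)
    tag = addr >> (WORD_BITS + LINE_BITS)
    block = addr >> WORD_BITS                  # 22-bit block number
    assert line == block % (1 << LINE_BITS)    # i = j modulo m
    return tag, line, word

print(split_direct(0x16339C))   # hypothetical address -> (tag, line, byte offset)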

Direct mapping pros & cons. Simple, fast, and inexpensive, but there is a fixed location for a given block: if a program repeatedly accesses two blocks that map to the same line, the cache miss rate is very high.

Fully Associative Mapping. A main memory block can be loaded into any line of the cache. The memory address is interpreted as a tag and a word field; the tag uniquely identifies a block of memory, and every line's tag is examined for a match. This gives full flexibility in placing data blocks, but cache searching gets expensive.

Fully associative address structure: Tag = 22 bits, Word = 2 bits. The 22-bit tag is stored with each 4 bytes of data; the tag field of the address is compared with every tag entry in the cache to check for a hit, and the least significant 2 bits of the address identify which byte is required from the 4-byte block.

Fully associative cache organization.

Set Associative Mapping. The cache lines are divided into a number of groups, i.e. sets. A given block maps to only one set, but can occupy any line in that set. How: i = j modulo v, where v is the total number of sets in the cache. This is a trade-off between direct mapping and fully associative mapping.

Example: 8-way set associative mapping for the same cache. Address structure: Tag = 11 bits, Set index = 11 bits, Word = 2 bits. The 24-bit address has a 2-bit word identifier (4-byte block) and a 22-bit block identifier; the total number of sets is 2^14 / 8 = 2^11, so the set index is 11 bits and the tag is 11 bits (= 22 - 11).
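A minimal sketch of the 8-way set-associative split above (field widths from the slide; the address value is illustrative):

# 8-way set associative: 24-bit address = 11-bit tag | 11-bit set index | 2-bit offset.
WORD_BITS = 2
SET_BITS = 11      # 2^14 lines / 8 ways = 2^11 sets

def split_set_associative(addr):
    word = addr & ((1 << WORD_BITS) - 1)
    set_index = (addr >> WORD_BITS) & ((1 << SET_BITS) - 1)
    tag = addr >> (WORD_BITS + SET_BITS)   # the remaining 11 bits
    return tag, set_index, word

print(split_set_associative(0x16339C))   # hypothetical address -> (tag, set, offset)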

K-way set associative cache organization.

Replacement Algorithms (for associative and set associative mapping): least recently used (LRU), first in first out (FIFO), least frequently used, random.

Write Policy. Why: a cache block must not be overwritten unless main memory is up to date; multiple CPUs may have individual caches, and I/O may address main memory directly. How: write through or write back.
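A minimal sketch of LRU replacement within one set, keeping lines ordered so the least recently used one is evicted first (class name, set size, and tags are illustrative, not from the slides):

from collections import OrderedDict

class LRUSet:
    # One cache set with LRU replacement; only tags are tracked, no data.
    def __init__(self, ways=8):
        self.ways = ways
        self.lines = OrderedDict()       # tag -> True, oldest first

    def access(self, tag):
        if tag in self.lines:            # hit: mark as most recently used
            self.lines.move_to_end(tag)
            return True
        if len(self.lines) >= self.ways: # set full: evict least recently used
            self.lines.popitem(last=False)
        self.lines[tag] = True           # miss: bring the block in
        return False

s = LRUSet(ways=2)
print([s.access(t) for t in [1, 2, 1, 3, 2]])   # [False, False, True, False, False]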

Write through: all writes go to main memory as well as to the cache, so multiple CPUs can monitor main memory traffic to keep their local (to the CPU) caches up to date; the drawback is lots of traffic, which slows down writes.

Write back: updates are initially made in the cache only. When a block is to be replaced, it is written to main memory only if it has been updated. Other caches can get out of sync, and I/O must access main memory through the cache.

Line Size. Larger blocks reduce the number of blocks that fit into a cache, and as a block grows larger, each additional word is less likely to be needed in the near future.
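A minimal sketch contrasting the two policies with a dirty bit, using a dict as a stand-in for main memory (all names are hypothetical):

# Write through: every write updates both the cache and main memory.
# Write back: writes mark the line dirty; memory is updated only on eviction.
memory = {}       # block number -> data

def write_through(block, data, cache):
    cache[block] = data
    memory[block] = data              # write goes to main memory as well

def write_back(block, data, cache, dirty):
    cache[block] = data
    dirty.add(block)                  # defer the memory update

def evict(block, cache, dirty):
    if block in dirty:                # write to main memory only if updated
        memory[block] = cache[block]
        dirty.discard(block)
    cache.pop(block, None)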

Number of Caches. Single or two level: as logic density has increased, an on-chip cache reduces the processor's external bus activity, but an off-chip/external cache is still desirable. Unified or split.

Contents: Memory System Overview, Cache Memory, Internal Memory, External Memory, Virtual Memory.

Internal Memory: internal memory types, DRAM vs. SRAM.

Semiconductor Memory Types.

Dynamic RAM: bits are stored as charge in capacitors; the charges leak, so refreshing (and therefore refresh circuits) is needed even when powered. Simpler construction, smaller per bit, less expensive, slower; used for main memory.

Static RAM: bits are stored as on/off switches; there are no charges to leak, so no refreshing is needed while powered. More complex construction, larger per bit, more expensive, faster; used for cache.

SRAM vs DRAM. Both are volatile: power is needed to preserve the data. The dynamic cell is simpler to build, smaller, more dense, and less expensive, but needs refresh; it suits larger memory units. The static cell is faster and suits the cache.

Read Only Memory (ROM): permanent, nonvolatile storage. Usage: library subroutines, system programs (BIOS), function tables. Types of ROM: written during manufacture; programmable once (PROM), which needs special equipment to program; and read-mostly types: erasable programmable (EPROM), erased by UV light; electrically erasable (EEPROM), which takes much longer to write than to read; and flash memory.

Contents: Memory System Overview, Cache Memory, Internal Memory, External Memory, Virtual Memory.

External Memory: types of external memory.

Types of External Memory: magnetic disk, RAID (Redundant Array of Independent Disks), removable disks; optical media: CD-ROM, CD-Recordable (CD-R), CD-R/W, DVD; magnetic tape.

Magnetic Disk: read and write mechanisms. Recording and retrieval work via a conductive coil called a head; there may be a single read/write head or separate ones. During read/write the head is stationary and the platter rotates. Write: current through the coil produces a magnetic field; pulses are sent to the head and a magnetic pattern is recorded on the surface below.

Data organization and formatting: concentric rings, or tracks, with gaps between them; reducing the gap increases capacity. The same number of bits is stored per track (variable packing density), and tracks are divided into sectors.

Performance. Seek time: moving the head to the correct track (t_s). Average (rotational) latency: waiting for the data to rotate under the head, t_r = 1/(2r), where r is the rotation speed in revolutions per second. Transfer time: t_t = b/(rN), where b is the number of bytes to be transferred and N is the number of bytes per track. Total disk access time = t_s + t_r + t_t.

Example: seek time = 4 ms, rotation speed = 7500 rpm, 512 bytes/sector, 500 sectors/track, and a file of 2500 sectors in total, stored either sequentially or randomly.
1. Sequential: T_s = 4 ms, maximum T_r = 60/7500 s = 8 ms; the 2500 sectors occupy 5 tracks; T_t for each full track = b/(rN) = 8 ms. Total disk access time = (4 + 8 + 8) + 8 x 4 = 52 ms (assuming no seek or rotational delay except for the first track).
2. Random: T_s = 4 ms, average T_r = 4 ms, T_t per sector = (1 x 512) x 60 / (7500 x (500 x 512)) s = 8/500 ms. Total disk access time = 2500 x (4 + 4 + 8/500) ms = 20.04 s.
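A minimal sketch reproducing the two disk-access calculations above (all parameters come from the example; times are in milliseconds):

# Disk access time for a 2500-sector file, sequential vs. random placement.
seek_ms = 4.0
rev_per_s = 7500 / 60.0                  # 7500 rpm = 125 revolutions per second
sectors_per_track = 500

rotation_ms = 1000.0 / rev_per_s         # one full rotation = 8 ms
track_transfer_ms = rotation_ms          # reading a whole track takes one rotation
sector_transfer_ms = rotation_ms / sectors_per_track

# Sequential: first track pays seek + full rotation + transfer, the rest only transfer.
tracks = 2500 // sectors_per_track
sequential_ms = (seek_ms + rotation_ms + track_transfer_ms) + track_transfer_ms * (tracks - 1)

# Random: every sector pays seek + average rotational latency + its own transfer time.
random_ms = 2500 * (seek_ms + rotation_ms / 2 + sector_transfer_ms)

print(sequential_ms)        # 52.0 ms
print(random_ms / 1000)     # 20.04 s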

Virtual Memory. A technique of using secondary storage, such as disks, to extend the size limit of physical memory.

Memory paging: split memory into equal-sized small chunks, called page frames, and allocate the required number of page frames to a process. The operating system maintains a list of free frames; a process does not require contiguous page frames, and a page table is used to keep track of them.

Demand paging: do not require all pages of a process to be in memory; bring in pages as required. Page fault: the required page is not in memory, so the operating system must swap in the required page, and may need to swap out a page to make space; the page to throw out is selected based on recent history.

Bonus: we do not need all of a process in memory for it to run, since we can swap in pages as required, so we can now run processes that are bigger than the total memory available! Main memory is called real memory; the user/programmer sees a much bigger memory, the virtual memory.

Thrashing: too many processes in too little memory, so the operating system spends all its time swapping and little or no real work is done (the disk light is on all the time). Solutions: good page replacement algorithms, reducing the number of processes running, or fitting more memory.

Translation Lookaside Buffer (TLB). Without one, each virtual memory reference requires two physical memory accesses: one to fetch the page table entry and one to fetch the desired data. The TLB is a special cache for page table entries.
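A minimal sketch of virtual-to-physical translation with a TLB in front of the page table (page size, table contents, and names are illustrative, not from the slides):

PAGE_SIZE = 4096                   # assumed 4 KB pages

page_table = {0: 5, 1: 9, 2: 3}    # virtual page number -> physical frame number
tlb = {}                           # small cache of recent page table entries

def translate(vaddr):
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn in tlb:                 # TLB hit: the page table access is avoided
        frame = tlb[vpn]
    else:                          # TLB miss: extra memory access for the page table entry
        frame = page_table[vpn]    # a missing entry here would be a page fault
        tlb[vpn] = frame
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1234)))      # virtual page 1 -> frame 9 -> 0x9234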

TLB Operation