MEMORY ORGANIZATION

Memory Hierarchy
Main Memory
Auxiliary Memory
Associative Memory
Cache Memory
Virtual Memory

MEMORY HIERARCHY

The purpose of the memory hierarchy is to obtain the highest possible access speed while minimizing the total cost of the memory system. From fastest and smallest to slowest and largest, the levels are: registers, cache memory, main memory, magnetic disk, and magnetic tape.

[Figure: memory hierarchy - auxiliary memory (magnetic tapes and magnetic disks) is connected to main memory through an I/O processor; the cache memory sits between main memory and the CPU.]

MAIN MEMORY

RAM and ROM chips

Typical RAM chip: 128 x 8 RAM with a 7-bit address input (AD7), an 8-bit bidirectional data bus, two chip-select inputs (CS1 active high, CS2 active low), and read/write controls (RD, WR).

CS1 CS2 RD WR   Memory function   State of data bus
 0   0   x  x   Inhibit           High-impedance
 0   1   x  x   Inhibit           High-impedance
 1   0   0  0   Inhibit           High-impedance
 1   0   0  1   Write             Input data to RAM
 1   0   1  x   Read              Output data from RAM
 1   1   x  x   Inhibit           High-impedance

Typical ROM chip: 512 x 8 ROM with a 9-bit address input (AD9), an 8-bit data bus, and the same two chip-select inputs (CS1, CS2); only read operations are provided.

MEMORY ADDRESS MAP

Address space assignment to each memory chip. Example: 512 bytes of RAM and 512 bytes of ROM.

Component   Hexadecimal address   Address bus lines 10 9 8 7 6 5 4 3 2 1
RAM 1       0000 - 007F           0  0  0  x x x x x x x
RAM 2       0080 - 00FF           0  0  1  x x x x x x x
RAM 3       0100 - 017F           0  1  0  x x x x x x x
RAM 4       0180 - 01FF           0  1  1  x x x x x x x
ROM         0200 - 03FF           1  x  x  x x x x x x x

Memory connection to CPU
- RAM and ROM chips are connected to the CPU through the data and address buses.
- The low-order lines of the address bus select the byte within a chip, and the remaining lines select a particular chip through its chip-select inputs.
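As a rough sketch of this decoding (only an illustration of the example map above; the decode function and its output format are invented for the sketch), the following C fragment splits a 10-bit address into a chip select and a byte offset:

#include <stdio.h>

/* Sketch of the address decoding used in the example memory map:
 * 512 bytes of RAM (four 128 x 8 chips) followed by 512 bytes of ROM.
 * Bus line 10 distinguishes RAM (0) from ROM (1); for RAM, lines 9-8
 * select one of the four chips and lines 7-1 select the byte. */
static void decode(unsigned addr)
{
    unsigned line10 = (addr >> 9) & 1;       /* bus line 10          */
    if (line10 == 0) {
        unsigned chip   = (addr >> 7) & 3;   /* bus lines 9-8 -> decoder */
        unsigned offset = addr & 0x7F;       /* bus lines 7-1 -> AD7     */
        printf("addr %03X -> RAM %u, byte %02X\n", addr, chip + 1, offset);
    } else {
        unsigned offset = addr & 0x1FF;      /* bus lines 9-1 -> AD9     */
        printf("addr %03X -> ROM, byte %03X\n", addr, offset);
    }
}

int main(void)
{
    decode(0x000);  /* RAM 1 */
    decode(0x0FF);  /* RAM 2 */
    decode(0x17F);  /* RAM 3 */
    decode(0x200);  /* ROM   */
    return 0;
}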

CONNECTION OF MEMORY TO CPU

[Figure: the four 128 x 8 RAM chips and the 512 x 8 ROM chip share the CPU's address and data buses and the RD/WR lines. Address lines 1-7 feed the AD7 inputs of the RAM chips and lines 1-9 feed the AD9 input of the ROM; lines 8 and 9 drive a decoder whose outputs select one of the four RAM chips, and line 10 selects between RAM and ROM through the chip-select inputs.]

AUXILIARY MEMORY

Information organization on magnetic tapes: a file is made up of blocks of records (R1, R2, R3, ...) separated by inter-record gaps (IRG) and terminated by an EOF mark.

Organization of disk hardware: moving-head disks and fixed-head disks, with data recorded on concentric tracks.

ASSOCIATIVE MEMORY

- Accessed by the content of the data rather than by an address; also called Content Addressable Memory (CAM).

Hardware organization: an argument register (A), a key register (K), an associative memory array and logic of m words with n bits per word, a match register M, and read/write/input lines.

- Each word in the CAM is compared in parallel with the content of A (the argument register).
- If word[i] = A, the corresponding match bit M(i) is set to 1.
- The matching words are then read by sequentially accessing the CAM for those i with M(i) = 1.
- K (the key register) provides a mask for choosing a particular field or key in the argument A: only those bits of the argument that have 1s in the corresponding positions of K are compared.
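A minimal software sketch of the masked match (done word by word in a loop here, whereas the hardware compares all words in parallel; the cam_match name and the 8-bit word size are assumptions of the sketch):

#include <stdio.h>

#define M 4   /* number of words                       */
#define N 8   /* bits per word (fits in one byte here) */

/* Set match[i] = 1 when word i agrees with argument a in every
 * bit position selected by the key mask k (K bit = 1 means compare). */
static void cam_match(const unsigned char word[M], unsigned char a,
                      unsigned char k, int match[M])
{
    for (int i = 0; i < M; i++)
        match[i] = ((word[i] ^ a) & k) == 0;
}

int main(void)
{
    unsigned char words[M] = { 0xA5, 0xA7, 0x25, 0xFF };
    int match[M];

    /* Compare only the upper four bits: K = 11110000 */
    cam_match(words, 0xA0, 0xF0, match);
    for (int i = 0; i < M; i++)
        printf("word %d (%02X): match = %d\n", i, words[i], match[i]);
    return 0;
}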

ORGANIZATION OF CAM

[Figure: internal organization of an associative memory - an m x n array of cells C(i,j); column j receives argument bit A(j) and key bit K(j), and row i (word i) produces match bit M(i). Each cell C(i,j) contains a flip-flop F(i,j) with read/write logic and match logic whose output feeds M(i).]

MATCH LOGIC

For word i, define x(j) = A(j) F(i,j) + A(j)' F(i,j)', which is 1 when argument bit A(j) equals stored bit F(i,j). The match bit for the word is then

  M(i) = (x(1) + K(1)') (x(2) + K(2)') ... (x(n) + K(n)')

so any bit position whose key bit K(j) is 0 is excluded from the comparison.

CACHE MEMORY

Locality of reference
- The references to memory in any given time interval tend to be confined within a few localized areas of memory.
- Such an area contains a set of information, and its membership changes gradually over time.
- Temporal locality: information that will be used in the near future is likely to be in use already (e.g. reuse of information in loops).
- Spatial locality: if a word is accessed, adjacent (nearby) words are likely to be accessed soon (e.g. related data items such as arrays are usually stored together, and instructions are executed sequentially).

Cache
- The property of locality of reference is what makes cache memory systems work.
- The cache is a fast, small-capacity memory that should hold the information most likely to be accessed; it sits between the CPU and main memory.
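Both kinds of locality show up in a loop as simple as the following illustrative C fragment (not taken from the slides): the running sum and the loop index are reused on every iteration (temporal locality), while consecutive array elements occupy adjacent addresses (spatial locality).

#include <stdio.h>

int main(void)
{
    int a[1024];
    long sum = 0;

    for (int i = 0; i < 1024; i++)
        a[i] = i;

    /* sum and i are reused on every iteration (temporal locality);
     * a[0], a[1], a[2], ... are adjacent in memory (spatial locality),
     * so once a cache line is brought in, the next few accesses hit. */
    for (int i = 0; i < 1024; i++)
        sum += a[i];

    printf("sum = %ld\n", sum);
    return 0;
}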

PERFORMANCE OF CACHE

Memory access: all memory accesses are directed first to the cache.
- If the word is in the cache, access the cache and provide the word to the CPU.
- If the word is not in the cache, bring the block (or line) containing that word into the cache, replacing a block that is currently there.

Two questions follow:
- How can we know whether the required word is in the cache?
- If a new block is to replace one of the old blocks, which one should we choose?

Performance of a cache memory system
Hit ratio h: the percentage of memory accesses satisfied by the cache.
  Te: effective memory access time of the cache memory system
  Tc: cache access time
  Tm: main memory access time

  Te = Tc + (1 - h) Tm

Example: Tc = 0.4 us, Tm = 1.2 us, h = 85%
  Te = 0.4 + (1 - 0.85) * 1.2 = 0.58 us
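The worked example can be checked with a few lines of C (a trivial sketch; the times are taken from the example and assumed to be in microseconds):

#include <stdio.h>

int main(void)
{
    double tc = 0.4;   /* cache access time (us)       */
    double tm = 1.2;   /* main memory access time (us) */
    double h  = 0.85;  /* hit ratio                    */

    /* Every access pays the cache access time; only misses
     * additionally pay the main memory access time. */
    double te = tc + (1.0 - h) * tm;
    printf("Te = %.2f us\n", te);   /* prints 0.58 us */
    return 0;
}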

MEMORY AND CACHE MAPPING - ASSOCIATIVE MAPPING

Mapping function: the specification of the correspondence between main memory blocks and cache blocks. Three mapping functions are used:
- Associative mapping
- Direct mapping
- Set-associative mapping

Associative mapping
- Any block of memory can be stored in any block location of the cache - the most flexible scheme.
- The mapping table is implemented in an associative memory - very expensive.
- The mapping table stores both the address and the content of the memory word.

[Figure: the 15-bit CPU address is placed in the argument register and compared against the address field of every CAM entry; e.g. address 01000 with data 3450, address 02777 with data 6710, address 22235 with data 1234 (octal).]
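In software terms the associative mapping table behaves like a small table of (address, data) pairs searched by address. The sketch below is only an illustration under that view: a real associative cache compares all entries in parallel in hardware, and replacement of an entry when the cache is full is omitted; the lookup/insert helpers are invented names.

#include <stdio.h>

#define LINES 8

struct entry { int valid; unsigned addr; unsigned data; };

static struct entry cache[LINES];

/* Look the full 15-bit address up in every entry. */
static int lookup(unsigned addr, unsigned *data)
{
    for (int i = 0; i < LINES; i++)
        if (cache[i].valid && cache[i].addr == addr) {
            *data = cache[i].data;
            return 1;                      /* hit  */
        }
    return 0;                              /* miss */
}

/* Store an (address, data) pair in a free line (no replacement here). */
static void insert(unsigned addr, unsigned data)
{
    for (int i = 0; i < LINES; i++)
        if (!cache[i].valid) {
            cache[i].valid = 1;
            cache[i].addr  = addr;
            cache[i].data  = data;
            return;
        }
}

int main(void)
{
    unsigned d = 0;
    insert(01000, 03450);   /* addresses and data in octal, as in the figure */
    insert(02777, 06710);

    int hit = lookup(02777, &d);
    printf("lookup 02777: %s, data %04o\n", hit ? "hit" : "miss", d);

    hit = lookup(022235, &d);
    printf("lookup 22235: %s\n", hit ? "hit" : "miss");
    return 0;
}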

MEMORY AND CACHE MAPPING - DIRECT MAPPING

- Each memory block has only one place where it can be loaded in the cache.
- The mapping table is made of RAM instead of CAM.
- The n-bit memory address consists of two parts: k bits of index field and n - k bits of tag field.
- The full n-bit address is used to access main memory, and the k-bit index is used to access the cache.

Addressing relationships: tag (6 bits), index (9 bits); main memory 32K x 12 (address = 15 bits, data = 12 bits); cache memory 512 x 12 (address = 9 bits, data = 12 bits).

[Figure: direct-mapping cache organization (octal) - for example, memory word 00000 = 1220 is held in the cache at index 000 with tag 00, and memory word 02777 = 6710 is held at index 777 with tag 02.]

DIRECT MAPPING - OPERATION

- The CPU generates a memory request (TAG; INDEX).
- The cache is accessed with INDEX, yielding a stored (tag; data) pair, and TAG is compared with tag.
- If they match - hit: provide Cache[INDEX](data) to the CPU.
- If they do not match - miss: read M[TAG; INDEX] from main memory, load Cache[INDEX] with (TAG; M[TAG; INDEX]), and provide the data to the CPU.

Direct mapping with a block size of 8 words: the 9-bit index is split into a block field (6 bits) and a word field (3 bits), with a 6-bit tag; all eight words of a block share the same tag.

[Figure: block 0 occupies cache indices 000-007 and block 63 occupies indices 770-777 (octal).]
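The same operation can be sketched in C for the sizes used above (15-bit word address split into a 6-bit tag and 9-bit index; read_word is an invented helper and the block size is one word for simplicity):

#include <stdio.h>

#define CACHE_WORDS 512            /* 9-bit index */

struct line { int valid; unsigned tag; unsigned data; };

static struct line cache[CACHE_WORDS];
static unsigned memory[1 << 15];   /* 32K words of main memory */

/* Direct mapping: index = low 9 bits, tag = high 6 bits of the address. */
static unsigned read_word(unsigned addr, int *hit)
{
    unsigned index = addr & 0x1FF;
    unsigned tag   = addr >> 9;
    struct line *l = &cache[index];

    if (l->valid && l->tag == tag) {       /* hit                     */
        *hit = 1;
    } else {                               /* miss: fetch and replace */
        *hit = 0;
        l->valid = 1;
        l->tag   = tag;
        l->data  = memory[addr];
    }
    return l->data;
}

int main(void)
{
    int hit;
    memory[001000] = 03450;                /* octal, as in the example */
    read_word(001000, &hit);
    printf("first access:  %s\n", hit ? "hit" : "miss");
    read_word(001000, &hit);
    printf("second access: %s\n", hit ? "hit" : "miss");
    return 0;
}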

SET-ASSOCIATIVE MAPPING

- Each memory block can be loaded into any one of a set of locations in the cache.

[Figure: set-associative mapping cache with a set size of two - each index addresses a set holding two (tag, data) pairs; e.g. index 000 holds (tag 01, data 3450) and (tag 02, data 5670), and index 777 holds (tag 02, data 6710) and (tag 00, data 2340) (octal).]

BLOCK REPLACEMENT POLICY

Many different block replacement policies are available. For the two-way set-associative cache, LRU (Least Recently Used) is the easiest to implement: each set is stored as (tag, data, U); (tag, data, U), where the use bit Ui = 0 or 1 (binary) records which entry of the set was used more recently, and the other entry is the one replaced on a miss.
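A sketch of a two-way set-associative lookup that keeps one use bit per set, as described above (cache_access is an invented name; on a miss the way that is not marked most recently used is replaced):

#include <stdio.h>

#define SETS 512                    /* 9-bit index */

struct way { int valid; unsigned tag; unsigned data; };
struct set { struct way w[2]; int mru; };   /* mru = U bit: most recently used way */

static struct set cache[SETS];

/* Returns 1 on hit, 0 on miss; on a miss the least recently
 * used way of the set is replaced with (tag, data_on_miss). */
static int cache_access(unsigned addr, unsigned data_on_miss)
{
    unsigned index = addr & 0x1FF;
    unsigned tag   = addr >> 9;
    struct set *s  = &cache[index];

    for (int i = 0; i < 2; i++)
        if (s->w[i].valid && s->w[i].tag == tag) {
            s->mru = i;                    /* update the U bit */
            return 1;
        }

    int victim = 1 - s->mru;               /* replace the LRU way */
    s->w[victim].valid = 1;
    s->w[victim].tag   = tag;
    s->w[victim].data  = data_on_miss;
    s->mru = victim;
    return 0;
}

int main(void)
{
    /* Two blocks that map to the same set (same low 9 bits). */
    printf("%d\n", cache_access(0x0100, 111));   /* miss            */
    printf("%d\n", cache_access(0x2100, 222));   /* miss, other way */
    printf("%d\n", cache_access(0x0100, 111));   /* hit             */
    return 0;
}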

CACHE WRITE

Write-through
- When writing into memory: if hit, both the cache and main memory are written in parallel; if miss, only main memory is written.
- For a read miss, the missing block may simply be loaded over a cache block.
- Memory is always up to date - important when the CPU and DMA I/O are executing concurrently.
- Slow, because every write pays the main memory access time.

Write-back (copy-back)
- When writing into memory: if hit, only the cache is written; if miss, the missing block is brought into the cache and the write is performed in the cache.
- For a read miss, the candidate (victim) block must be written back to memory before being replaced.
- Memory is not always up to date, i.e. the same item in the cache and in memory may have different values.
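The difference between the two policies can be sketched in C with a one-word-per-line direct-mapped cache and a dirty bit (a simplified model; the function names are invented and read handling is omitted):

#include <stdio.h>

#define LINES 8

struct line { int valid, dirty; unsigned tag, data; };

static struct line cache[LINES];
static unsigned memory[1024];

/* Write-through: on a write hit the word is written to the cache
 * and to main memory together; on a miss only memory is written. */
static void write_through(unsigned addr, unsigned value)
{
    unsigned i = addr % LINES;
    if (cache[i].valid && cache[i].tag == addr / LINES)
        cache[i].data = value;
    memory[addr] = value;                  /* memory updated on every write */
}

/* Write-back: only the cache is written and the line is marked dirty;
 * memory is updated later, when the dirty line is evicted. */
static void write_back(unsigned addr, unsigned value)
{
    unsigned i = addr % LINES;
    if (!(cache[i].valid && cache[i].tag == addr / LINES)) {
        if (cache[i].valid && cache[i].dirty)          /* evict victim */
            memory[cache[i].tag * LINES + i] = cache[i].data;
        cache[i].valid = 1;
        cache[i].dirty = 0;
        cache[i].tag   = addr / LINES;
        cache[i].data  = memory[addr];                 /* fetch block  */
    }
    cache[i].data  = value;
    cache[i].dirty = 1;                    /* memory is now stale */
}

int main(void)
{
    write_through(5, 42);
    printf("after write-through: memory[5] = %u\n", memory[5]);
    write_back(6, 99);
    printf("after write-back:    memory[6] = %u (cache holds 99)\n", memory[6]);
    return 0;
}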

VIRTUAL MEMORY

Virtual memory gives the programmer the illusion that the system has a very large memory, even though the computer actually has a relatively small main memory.

Address space (logical) and memory space (physical)
- Virtual address (logical address): an address generated by programs; the set of such addresses is the address space.
- Physical address: an actual main memory address; the set of such addresses is the memory space.

Address mapping: a memory mapping table translates each virtual address into a physical address.

[Figure: the virtual address register indexes the memory mapping table; the selected entry is placed in the main memory address register, which accesses main memory, with data passing through the memory table buffer register and the main memory buffer register.]

ADDRESS MAPPING

The address space and the memory space are each divided into fixed-size groups of words called pages (in the address space) and blocks (in the memory space).

Example with groups of 1K words: address space N = 8K = 2^13 (pages 0 through 7), memory space M = 4K = 2^12 (blocks 0 through 3).

Memory mapping table in a paged system: the virtual address is split into a page number and a line number within the page. The page number (added to a table base address) selects an entry in the memory page table; the entry holds a presence bit and a block number. If the presence bit is set, the block number concatenated with the line number is loaded into the main memory address register, and the word is transferred through the memory buffer register (MBR).
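A sketch of this mapping under the sizes above (13-bit virtual addresses, 1K-word pages, eight pages, four blocks; the page-table contents in the sketch are arbitrary illustration values):

#include <stdio.h>

#define PAGE_WORDS 1024            /* 1K words per page / block   */
#define PAGES      8               /* address space N = 8K = 2^13 */
#define BLOCKS     4               /* memory space  M = 4K = 2^12 */

struct pt_entry { int present; unsigned block; };

/* Memory page table: page number -> (presence bit, block number).
 * Contents are illustrative only. */
static struct pt_entry page_table[PAGES] = {
    {0, 0}, {1, 3}, {0, 0}, {1, 0}, {1, 1}, {0, 0}, {1, 2}, {0, 0}
};

/* Split the virtual address into page and line numbers, look the page
 * up, and form the physical address if the page is present in memory. */
static int map(unsigned vaddr, unsigned *paddr)
{
    unsigned page = vaddr / PAGE_WORDS;
    unsigned line = vaddr % PAGE_WORDS;

    if (page >= PAGES || !page_table[page].present)
        return 0;                                  /* page fault */
    *paddr = page_table[page].block * PAGE_WORDS + line;
    return 1;
}

int main(void)
{
    unsigned p;
    if (map(4 * PAGE_WORDS + 17, &p))              /* word 17 of page 4 */
        printf("virtual %u -> physical %u\n", 4 * PAGE_WORDS + 17, p);
    if (!map(5 * PAGE_WORDS, &p))
        printf("page 5 is not in memory: page fault\n");
    return 0;
}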

ASSOCIATIVE MEMORY PAGE TABLE

Assume the number of blocks in memory is m and the number of pages in the virtual address space is n, with n > m.

Page table
- Straightforward design: an n-entry table in memory. Storage space is used inefficiently, since n - m entries of the table are empty.
- A more efficient method is an m-entry page table implemented in an associative memory of m words, each holding a (page number : block number) pair. The page-number field of the virtual address is placed in the argument register (with the key register masking off the line-number field) and compared with all entries in parallel.

Page fault: the page number cannot be found in the page table.

PAGE FAULT

1. Trap to the OS.
2. Save the user registers and program state.
3. Determine that the interrupt was a page fault.
4. Check that the page reference was legal and determine the location of the page on the backing store (disk).
5. Issue a read from the backing store to a free frame:
   a. Wait in a queue for this device until serviced.
   b. Wait for the device seek and/or latency time.
   c. Begin the transfer of the page to a free frame.
6. While waiting, the CPU may be allocated to some other process.
7. Interrupt from the backing store (I/O completed).
8. Save the registers and program state of the other user.
9. Determine that the interrupt was from the backing store.
10. Correct the page table (the desired page is now in memory).
11. Wait for the CPU to be allocated to this process again.
12. Restore the user registers, program state, and new page table, then resume the interrupted instruction.

[Figure: handling a page fault for LOAD M - (1) reference, (2) trap to the OS, (3) the page is on the backing store, (4) bring the missing page into a free frame of main memory, (5) reset the page table, (6) restart the instruction.]

Processor architecture should provide the ability to restart any instruction after a page fault.

PAGE REPLACEMENT

Decision on which page to displace to make room for an incoming page when no free frame is available.

Modified page-fault service routine:
1. Find the location of the desired page on the backing store.
2. Find a free frame:
   - If there is a free frame, use it.
   - Otherwise, use a page-replacement algorithm to select a victim frame, and write the victim page to the backing store.
3. Read the desired page into the (newly) free frame.
4. Restart the user process.

[Figure: page replacement - the victim's page-table entry (frame f, valid/invalid bit v) is changed to invalid, the victim page is swapped out to the backing store, the desired page is swapped in to frame f of physical memory, and the page table is reset for the new page.]

PAGE REPLACEMENT ALGORITHMS - FIFO AND OPT

Reference string (3 page frames): 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

FIFO
- The FIFO algorithm selects for replacement the page that has been in memory the longest time.
- It is implemented with a queue: every time a page is loaded, its identification is inserted into the queue, and the page at the head of the queue is the one replaced.
- Easy to implement.
- May result in frequent page faults.

Optimal replacement (OPT)
- Lowest page-fault rate of all algorithms.
- Replace the page that will not be used for the longest period of time.
- Used for measuring how well other algorithms perform.

[Figure: page-frame contents after each reference for FIFO and OPT on the reference string above.]
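FIFO behaviour on this reference string can be reproduced with a short simulation that simply counts page faults for three frames (a sketch, not part of the original slides):

#include <stdio.h>

#define FRAMES 3

int main(void)
{
    int ref[] = { 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 };
    int n = sizeof ref / sizeof ref[0];
    int frame[FRAMES] = { -1, -1, -1 };    /* -1 = empty frame          */
    int next = 0;                          /* FIFO pointer: oldest frame */
    int faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < FRAMES; j++)
            if (frame[j] == ref[i]) { hit = 1; break; }

        if (!hit) {                        /* page fault: replace the oldest page */
            frame[next] = ref[i];
            next = (next + 1) % FRAMES;
            faults++;
        }
    }
    printf("FIFO page faults: %d\n", faults);
    return 0;
}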

PAGE REPLACEMENT ALGORITHMS - LRU

- OPT is difficult to implement, since it requires future knowledge.
- LRU uses the recent past as an approximation of the near future: replace the page that has not been used for the longest period of time.

Reference string (3 page frames): 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
[Figure: page-frame contents after each reference for LRU.]

- LRU may require substantial hardware assistance.
- The problem is to determine an order for the frames defined by the time of last use.
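The corresponding LRU simulation differs only in how the victim is chosen: the frame whose time of last use is oldest is replaced (a sketch using a per-frame timestamp, essentially the counter method described below):

#include <stdio.h>

#define FRAMES 3

int main(void)
{
    int ref[] = { 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 };
    int n = sizeof ref / sizeof ref[0];
    int frame[FRAMES]    = { -1, -1, -1 }; /* -1 = empty frame           */
    int last_use[FRAMES] = {  0,  0,  0 }; /* time of last use per frame */
    int faults = 0;

    for (int t = 1; t <= n; t++) {
        int page = ref[t - 1], slot = -1;

        for (int j = 0; j < FRAMES; j++)
            if (frame[j] == page) slot = j;        /* hit */

        if (slot < 0) {                            /* miss: pick the LRU victim */
            slot = 0;
            for (int j = 1; j < FRAMES; j++)
                if (last_use[j] < last_use[slot]) slot = j;
            frame[slot] = page;
            faults++;
        }
        last_use[slot] = t;                        /* update time of last use */
    }
    printf("LRU page faults: %d\n", faults);
    return 0;
}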

PAGE REPLACEMENT ALGORITHMS - LRU IMPLEMENTATION

LRU implementation methods

Counters
- Each page-table entry has a time-of-use register that records the time of the last reference to the page (updated on every memory reference).
- The page with the smallest value in its time-of-use register is the one replaced.

Stack
- Keep a stack of page numbers.
- Whenever a page is referenced, its page number is removed from the stack and pushed on top.
- The least recently used page number is always at the bottom.
[Figure: stack contents before and after a reference - the referenced page number moves to the top of the stack.]

LRU approximation
- A reference (or use) bit is used to approximate LRU: it is turned on when the corresponding page is referenced after its initial loading.
- Additional reference bits may be used to record more history.