ECE 341. Lecture # 18

ECE 341 Lecture # 18
Instructor: Zeshan Chishti (zeshan@ece.pdx.edu)
December 1, 2014
Portland State University

Lecture Topics
- The Memory System
  - Cache Memories
    - Performance Considerations
    - Hit Ratios and Miss Penalty
    - How to Increase Cache Hit Ratio?
  - Virtual Memory
    - Memory and Storage
    - Virtual Memory Organization
    - Address Translation
    - Translation Lookaside Buffer (TLB)
    - Page Faults and Replacements
Reference: Chapter 8, Sections 8.7 and 8.8

How to Handle Stale Data?
- A valid bit is associated with each cache block: it is set to 1 if the block contains valid data, and to 0 otherwise.
- All valid bits are cleared to 0 when the power is first turned on.
- When a block is loaded into the cache for the first time, its valid bit is set to 1.
- The processor fetches data from a cache block only if its valid bit is 1.
- Data transfers between main memory and disk can occur directly through direct memory access (DMA), bypassing the cache.
- What happens if the memory contents change while a copy of the data is also resident in the cache? The valid bit of that cache block is cleared to 0 to indicate that the cached copy is stale.
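The valid-bit mechanism above can be sketched in a few lines; `CacheBlock` and `invalidate_on_dma_write` are hypothetical names used only for illustration, not part of any real cache controller.

```python
class CacheBlock:
    """Minimal model of one cache block with a valid bit."""
    def __init__(self):
        self.valid = False   # valid bits start at 0 when power is turned on
        self.data = None

    def load(self, data):
        """Loading a block into the cache sets its valid bit to 1."""
        self.data = data
        self.valid = True

def invalidate_on_dma_write(block):
    """Memory contents changed behind the cache (e.g. via DMA):
    clear the valid bit so the stale cached copy is not used."""
    block.valid = False

block = CacheBlock()
block.load(0xCAFE)            # valid bit now 1; the processor may use this block
invalidate_on_dma_write(block)
print(block.valid)            # False: the cached copy is stale
```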

Hit Ratio and Miss Penalty
- Cache hit ratio = number of cache hits / number of cache accesses
- Cache miss ratio = number of cache misses / number of cache accesses = 1 - hit ratio
- Cache hit latency = number of cycles it takes to read data from the cache
- Cache miss penalty = number of cycles the processor has to wait when the required data is not found in the cache
- For a system with only one level of cache, the average memory access time is:

  t_avg = hC + (1 - h)M

  where h = cache hit ratio, C = cache hit latency, M = cache miss penalty
- To reduce memory access time, it is important to have both a high cache hit ratio and a low cache miss penalty.

Example
Consider a computer with one level of cache that has the following parameters:
- Cache access time is 1 cycle.
- When a cache miss occurs, a block of 8 words is transferred from memory to the cache. It takes 10 cycles to transfer the first word of the block, and the remaining words are transferred at a rate of 1 word per cycle.
- The miss penalty also includes the initial access to the cache and another delay of 1 cycle to transfer the word to the processor after the block has been loaded into the cache.
- Assume that the cache hit rate is 95%. Calculate the average memory access time.

Solution:
- Cache hit rate h = 95%
- Cache hit latency C = 1
- Cache miss penalty M = 1 + 10 + (8 - 1)(1) + 1 = 19
- Average memory access time t_avg = hC + (1 - h)M = (0.95)(1) + (0.05)(19) = 1.9 cycles

Refer to Example 8.1 (Page 301) for a more detailed example.
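The calculation above can be reproduced with a short sketch; `amat` is a hypothetical helper name, not from the text.

```python
def amat(h, C, M):
    """Average memory access time for a one-level cache: t_avg = h*C + (1 - h)*M."""
    return h * C + (1 - h) * M

# Miss penalty from the example: initial cache access (1) + first word (10)
# + remaining 7 words at 1 cycle each + 1 cycle to pass the word to the processor
M = 1 + 10 + (8 - 1) * 1 + 1   # 19 cycles
print(amat(0.95, 1, M))        # approximately 1.9 cycles
```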

How to Increase Cache Hit Ratio?
- Make the cache larger
  - Allows more data to be kept in the cache
  - But increases both cost and cache access time
- Increase the block size, while keeping the cache size constant
  - Helps if there is high spatial locality in the program
  - But larger blocks are effective only up to a certain size: very large blocks take longer to transfer => higher miss penalty
  - Block size should be neither too large nor too small; a typical choice is in the 64- to 128-byte range
- Add a second-level cache
  - The first-level (L1) cache provides fast access but small capacity
  - The second-level (L2) cache can be slower but much larger than the L1 cache, ensuring a high hit rate so that fewer accesses go to main memory

Two-level Cache Hierarchy
In a processor with two levels of caches, the average memory access time is given by:

  t_avg = h1C1 + (1 - h1)(h2C2 + (1 - h2)M)

where:
- h1 = hit ratio of the L1 cache
- h2 = hit ratio of the L2 cache
- C1 = access time of the L1 cache
- C2 = miss penalty to transfer data from the L2 cache to the L1 cache
- M = miss penalty to transfer data from memory to the L2 cache

Assume that h1 = h2 = 0.9. Then only 0.1 * 0.1 = 1% of memory accesses need to go to main memory. With a single-level cache, 10% of accesses would have gone to main memory.
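As a sketch, the two-level formula can be evaluated directly; the latency values below (C1 = 1, C2 = 10, M = 100 cycles) are illustrative assumptions, not figures from the text.

```python
def amat2(h1, C1, h2, C2, M):
    """Two-level average memory access time:
    t_avg = h1*C1 + (1 - h1)*(h2*C2 + (1 - h2)*M)."""
    return h1 * C1 + (1 - h1) * (h2 * C2 + (1 - h2) * M)

# With h1 = h2 = 0.9, only (1 - 0.9)*(1 - 0.9) = 1% of accesses reach main memory
print(amat2(0.9, 1, 0.9, 10, 100))   # 0.9*1 + 0.1*(9 + 10) = 2.8 cycles
```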

Prefetching
- New data is usually brought into the cache only when it is needed, so the processor has to pay the miss penalty if the required data is not in the cache.
- To avoid this penalty, the processor can prefetch data into the cache before it is actually needed.
- Prefetching can be accomplished in software, by including a special prefetch instruction in the processor's instruction set.
- Prefetching can also be accomplished in hardware: circuitry that attempts to discover patterns in memory references and then prefetches addresses according to these patterns.

Virtual Memory

Introduction
- Recall that the most important challenge in memory system design is to provide a computer with as large and fast a memory as possible, within a given cost target.
- Two key architectural solutions are used to increase the effective speed and size of the memory system:
  - Cache memories increase the effective speed of the memory system.
  - Virtual memory increases the effective size of the memory system.

Memory and Storage
- Recall that the addressable memory space depends on the number of address bits in a computer. E.g., if a computer issues 40-bit addresses, the addressable memory space is 1 TB.
- Due to cost constraints, main memory in a computer is generally not as large as the entire possible addressable space.
- Application programmers do not want to be constrained by the limited size of main memory.
- To mitigate the limited main memory, large programs that cannot fit in main memory have parts of them stored on secondary storage devices, such as magnetic disks.

Memory and Storage (cont.)
- Pieces of a program must be transferred from secondary storage to main memory before they can be executed.
- When a new piece of data is to be transferred to main memory and the main memory is full, some other data in main memory must be replaced. This is similar to the replacement issue in cache memories.
- The operating system automatically transfers data between main memory and secondary storage. The application programmer need not be concerned with this transfer, nor be aware of the limitations imposed by the available physical memory.

Virtual Memory
- Techniques used to manage data transfer between main memory and secondary storage are called virtual-memory techniques.
- Instruction and data references in a program do not depend on the physical size of main memory; they depend only on the number of address bits used by the processor.
- When a program executes, the addresses issued by the processor are called logical or virtual addresses.
- Virtual addresses are translated into physical addresses by a combination of hardware and software subsystems. A physical address is the main-memory location to which a virtual address is mapped.
- If a virtual address refers to a part of the program that is currently in main memory, the translation provides the physical address, which is accessed immediately.
- If a virtual address refers to a part of the program that is not in main memory, that part is first transferred to main memory before it can be used.

Virtual Memory Organization
[Figure: processor and cache connected to main memory and disk storage; the MMU sits between the processor and memory, translating each virtual address into a physical address, with disk transfers done by DMA]
- The memory management unit (MMU) translates virtual addresses into physical addresses.
- If the desired data or instructions are in main memory, they are fetched as described previously.
- If the desired data or instructions are not in main memory, they must be transferred from secondary storage to main memory: the MMU causes the operating system to bring the data from secondary storage into main memory using a DMA transfer.

Memory Pages
- Assume that the physical memory space is divided into fixed-length units called pages.
- A page consists of a block of words that occupy contiguous locations in main memory.
- The page is the basic unit of information transferred between secondary storage and main memory.
- Page sizes commonly range from 2K to 16K bytes.
- Pages should not be too small, because the access time of a secondary storage device is much larger than that of main memory.
- Pages should not be too large, or a large portion of a page may go unused while occupying valuable space in main memory.

Address Translation
- A virtual address must be translated to a physical address before memory is accessed.
- Each virtual address generated by the processor is interpreted as a virtual page number (high-order bits) plus an offset (low-order bits) that specifies the location of a particular byte within that page.
- For example, if the processor uses 40 address bits and the page size is 4K bytes, the address bits are interpreted as follows:

  | Virtual page number (28 bits) | Offset (12 bits) |

- The first step in address translation is to break a virtual address down into these two fields.
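The field split above can be sketched with plain bit operations; `split_virtual_address` is a hypothetical helper name, and the example address is arbitrary.

```python
OFFSET_BITS = 12              # 4K-byte pages -> 12 offset bits
PAGE_SIZE = 1 << OFFSET_BITS

def split_virtual_address(va):
    """Break a virtual address into (virtual page number, offset)."""
    return va >> OFFSET_BITS, va & (PAGE_SIZE - 1)

vpn, offset = split_virtual_address(0x12345ABC)
print(hex(vpn), hex(offset))   # 0x12345 0xabc
```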

Address Translation (cont.)
- Information about each virtual page is kept in the page table. Conceptually, the page table has one entry for each virtual page:
  - Is the virtual page currently mapped into main memory?
  - If yes, at what main memory address is the page mapped?
  - What is the current status of the page (dirty, LRU, etc.)?
- The starting address of the page table is kept in the page table base register.
- The virtual page number generated by the processor is added to the contents of the page table base register. This provides the address of the corresponding entry in the page table.
- The contents of this page table entry provide the starting address of the page, if the page is currently in main memory.

Address Translation (cont.)
[Figure: translating a virtual address through the page table]
- The page table base register (PTBR) holds the address of the page table.
- The virtual address from the processor is interpreted as a virtual page number and an offset.
- PTBR + virtual page number gives the location of the page's entry in the page table.
- The page table holds information about each page, including the control bits and the starting address (page frame) of the page in main memory.
- Page frame + offset form the physical address sent to main memory.
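The lookup sequence above can be sketched as a table lookup; the `page_table` contents and the `translate` helper are hypothetical, and a real MMU does this in hardware.

```python
OFFSET_BITS = 12

# Hypothetical page table: virtual page number -> page frame number
page_table = {0x12345: 0x00042}

def translate(va):
    """Form the physical address from the page frame and the unchanged offset."""
    vpn = va >> OFFSET_BITS
    offset = va & ((1 << OFFSET_BITS) - 1)
    if vpn not in page_table:
        raise LookupError("page not in main memory")  # handled as a page fault
    return (page_table[vpn] << OFFSET_BITS) | offset

print(hex(translate(0x12345ABC)))   # 0x42abc
```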

Page Table Entry
- The page table entry for a page also includes some control bits that describe the status of the page while it is in main memory.
- The valid bit indicates the validity of the page: whether the page is actually loaded into main memory.
- The dirty bit indicates whether the page has been modified during its residency in main memory. This bit determines whether the page should be written back to disk when it is removed from main memory. It is similar to the dirty bit in cache memories.
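The valid and dirty bits can be sketched as flags in a page table entry; the bit positions and the `needs_writeback` helper are arbitrary choices for illustration.

```python
VALID = 1 << 0   # page is loaded in main memory
DIRTY = 1 << 1   # page was modified while resident

def needs_writeback(entry_bits):
    """A page must be written back to disk on replacement only if it is
    valid and has been modified (dirty)."""
    return (entry_bits & VALID != 0) and (entry_bits & DIRTY != 0)

print(needs_writeback(VALID | DIRTY))   # True
print(needs_writeback(VALID))           # False: clean page, no write-back needed
```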

Page Table Location
- Where should the page table be located? Recall that the page table is used by the MMU for every read and write access to memory, so the ideal location for the page table is within the MMU.
- However, the page table is quite large, and the MMU is implemented as part of the processor chip, so it is impossible to include a complete page table on the chip.
- The page table is therefore kept in main memory, and a copy of a small portion of it can be cached within the MMU.
- The cached portion consists of the page table entries that correspond to the most recently accessed pages.

Translation Lookaside Buffer (TLB)
- A small cache called the Translation Lookaside Buffer (TLB) is included in the MMU.
- The TLB holds the page table entries of the most recently accessed pages.
- Recall that a cache memory holds the most recently accessed blocks from main memory; the relationship between the TLB and the page table is similar to the relationship between the cache and main memory.
- The page table entry for a page includes the address of the page frame where the page resides in main memory, plus some control bits.
- In addition to the above, the TLB must hold the virtual page number for each page.

TLB (cont.)
[Figure: associative-mapped TLB comparing the virtual page number against its stored entries]
- The high-order bits of the virtual address generated by the processor select the virtual page.
- These bits are compared with the virtual page numbers stored in the TLB.
- If there is a match, a hit occurs and the corresponding page frame address is read; combined with the offset, it forms the physical address sent to main memory.
- If there is no match, a miss occurs and the page table in main memory must be consulted.
- Set-associative mapped TLBs are used in commercial processors.
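A fully associative TLB lookup can be sketched as follows; the `TLB` class and the example entries are illustrative assumptions, not a description of any specific processor.

```python
class TLB:
    """Toy associative-mapped TLB: virtual page number -> page frame number.
    A lookup that returns None is a TLB miss, and the page table in main
    memory must then be consulted."""
    def __init__(self):
        self.entries = {}

    def lookup(self, vpn):
        return self.entries.get(vpn)

    def insert(self, vpn, frame):
        self.entries[vpn] = frame

tlb = TLB()
tlb.insert(0x12345, 0x00042)
print(tlb.lookup(0x12345))   # 66 (0x42): hit, page frame read from the TLB
print(tlb.lookup(0x99999))   # None: miss, consult the page table
```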

TLB (cont.)
- How are the entries of the TLB kept coherent with the contents of the page table in main memory?
- The OS may change the contents of the page table in main memory; when it does, it must also invalidate the corresponding entries in the TLB.
- A control bit is provided in the TLB to invalidate an entry.
- If an entry is invalidated, the TLB reloads the information for that entry from the page table.

Page Fault
- What happens if a program generates an access to a page that is not in main memory? A page fault is said to occur.
- The whole page must be brought into main memory from the disk before execution can proceed.
- When the MMU detects a page fault, the following actions occur:
  - The MMU asks the operating system to intervene by raising an exception.
  - Processing of the active task that caused the page fault is interrupted, and control is transferred to the operating system.
  - The operating system copies the requested page from secondary storage to main memory.
  - Once the page is copied, control is returned to the task that was interrupted.

Page Replacement
- When a new page is to be brought into main memory from secondary storage and main memory is full, some page in main memory must be replaced with the new page.
- How do we choose which page to replace? This is similar to the replacement that occurs when the cache is full.
- The principle of locality of reference applies here as well, so a replacement strategy similar to LRU can be used.
- Since main memory is much larger than the cache, it can hold a relatively large amount of program and data, which minimizes the frequency of transfers between secondary storage and main memory.
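An LRU-style replacement policy over a fixed set of page frames can be sketched with an ordered mapping; the frame count and access pattern below are illustrative, and `LRUFrames` is a hypothetical name.

```python
from collections import OrderedDict

class LRUFrames:
    """Toy LRU page replacement over a fixed number of page frames."""
    def __init__(self, nframes):
        self.nframes = nframes
        self.resident = OrderedDict()   # resident pages, least recently used first

    def access(self, vpn):
        """Returns the evicted page's vpn when a replacement occurs, else None."""
        if vpn in self.resident:
            self.resident.move_to_end(vpn)   # mark as most recently used
            return None
        victim = None
        if len(self.resident) >= self.nframes:
            victim, _ = self.resident.popitem(last=False)  # evict the LRU page
        self.resident[vpn] = True
        return victim

mem = LRUFrames(2)
mem.access(1); mem.access(2); mem.access(1)   # pages 1 and 2 resident, 1 most recent
print(mem.access(3))   # 2: page 2 was least recently used, so it is replaced
```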