ECE4680 Computer Organization and Architecture. Virtual Memory

ECE468 Computer Organization and Architecture: Virtual Memory

"If I can see it and I can touch it, it's real. If I can't see it but I can touch it, it's invisible. If I can see it but I can't touch it, it's virtual. And if I can't see it and I can't touch it, it's gone!"

Review: The Principle of Locality
[Figure: probability of reference plotted across the address space.]
The Principle of Locality: a program accesses a relatively small portion of its address space at any instant of time. Example: 90% of the time is spent in 10% of the code.
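As a concrete illustration of the 90/10 claim (a minimal sketch, not from the slides; the array and its size are arbitrary), the loop below re-executes a tiny code footprint many times (temporal locality) while walking an array sequentially (spatial locality):

/* Illustrative only: this loop exhibits both kinds of locality.
 * Temporal: the loop code and the variable `total` are reused every
 * iteration.  Spatial: a[] is walked sequentially, so neighboring
 * words are touched close together in time. */
#include <stdio.h>

int main(void) {
    enum { N = 1024 };
    static int a[N];                 /* zero-initialized array            */
    long total = 0;

    for (int i = 0; i < N; i++)      /* small code footprint, run N times */
        total += a[i];               /* consecutive addresses             */

    printf("%ld\n", total);
    return 0;
}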

Review: The Need to Make a Decision!
Direct Mapped Cache: each memory location can be mapped to only 1 cache location, so there is no decision to make: the new item simply replaces the previous item in that cache location (see the address-split sketch below).
N-way Set Associative Cache: each memory location has a choice of N cache locations.
Fully Associative Cache: each memory location can be placed in ANY cache location.
On a cache miss in an N-way set associative or fully associative cache: bring the new block in from memory and throw out a cache block to make room for it. Damn! We need to make a decision about which block to throw out!

Review: Summary
The Principle of Locality: temporal locality vs. spatial locality.
Four questions for any cache: where to place a block, how to locate a block, which block to replace, and the write policy (write through vs. write back, and what to do on a write miss).
Three major categories of cache misses:
- Compulsory misses: sad facts of life. Example: cold start misses.
- Conflict misses: increase cache size and/or associativity. Nightmare scenario: the ping-pong effect!
- Capacity misses: increase cache size.
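To make the direct-mapped case concrete, here is a sketch (not from the slides) of how a cache controller might split a 32-bit byte address into tag, index, and block offset; the 64-byte block and 1024-set parameters are assumptions for illustration only. An N-way set associative cache would use the same index but compare N tags within the selected set, and a fully associative cache would have no index at all.

#include <stdint.h>
#include <stdio.h>

#define BLOCK_BITS 6u                /* 64-byte blocks -> 6 offset bits (assumed)  */
#define INDEX_BITS 10u               /* 1024 sets      -> 10 index bits (assumed)  */

int main(void) {
    uint32_t addr   = 0x12345678u;
    uint32_t offset = addr & ((1u << BLOCK_BITS) - 1);
    uint32_t index  = (addr >> BLOCK_BITS) & ((1u << INDEX_BITS) - 1);
    uint32_t tag    = addr >> (BLOCK_BITS + INDEX_BITS);

    /* The index selects exactly one line (direct mapped); the tag decides hit or miss. */
    printf("tag=0x%x index=%u offset=%u\n", tag, index, offset);
    return 0;
}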

Review: Levels of the Memory Hierarchy (capacity, access time, cost, and the staging/transfer unit between levels):
- CPU registers: 100s of bytes, <10s ns. Transfer unit: instruction operands (1-8 bytes), managed by the program/compiler.
- Cache: K bytes, 10-100 ns, $.01-.001/bit. Transfer unit: blocks (8-128 bytes), managed by the cache controller.
- Main memory: M bytes, 100 ns-1 us, $.01-.001. Transfer unit: pages (512-4K bytes), managed by the OS.
- Disk: G bytes, ms, 10^-3 - 10^-4 cents. Transfer unit: files (Mbytes), managed by the user/operator.
- Tape: infinite, sec-min, 10^-6.
Upper levels are faster; lower levels are larger.

Today's Topic --- Virtual Memory
Virtual memory provides the illusion of very large memory: the sum of the memory of many jobs can be greater than physical memory, and the address space of each job can be larger than physical memory. It allows the available (fast and expensive) physical memory to be very well utilized, simplifies memory management (the main reason today), and exploits the memory hierarchy to keep average access time low. It involves at least two storage levels: main and secondary.
Virtual Address -- the address used by the programmer.
Virtual Address Space -- the collection of such addresses.
Memory Address -- the address of a word in physical memory; also known as the physical address or real address.

Basic Issues in VM System Design
- The size of the information blocks transferred from secondary to main storage.
- If a block of information is brought into M and M is full, some region of M must be released to make room for the new block --> replacement policy.
- Which region of M is to hold the new block --> placement policy.
- A missing item is fetched from secondary memory only on the occurrence of a fault --> fetch/load policy.
Paging organization: the virtual and physical address spaces are partitioned into blocks of equal size; the physical blocks are page frames and the virtual blocks are pages.

Address Map
V = {0, 1, ..., n-1}  virtual address space
M = {0, 1, ..., m-1}  physical address space, with n > m
MAP: V -> M U {0}  address mapping function
MAP(a) = a'  if data at virtual address a is present at physical address a' in M
       = 0   if data at virtual address a is not present in M
[Figure: the processor issues an address a in name space V; the address translation mechanism either delivers the physical address a' to main memory or raises a missing-item fault, and the fault handler (the OS) transfers the page from secondary memory.]
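A minimal C sketch of the MAP function above, assuming a one-level table frame_of[] of frame numbers indexed by virtual page number; the 4 KB page size, the table size, and all names are illustrative assumptions, not the slides' definitions:

#include <stdint.h>
#include <stdio.h>

#define PAGE_BITS 12                  /* assume 4 KB pages              */
#define NUM_PAGES 1024                /* assume a small virtual space   */

/* Returns the physical address a', or -1 to signal a missing-item fault
 * (the "= 0" case on the slide: the data is not present in M). */
int64_t map(const int32_t frame_of[NUM_PAGES], uint32_t va) {
    uint32_t vpn  = va >> PAGE_BITS;                 /* virtual page number */
    uint32_t disp = va & ((1u << PAGE_BITS) - 1);    /* displacement        */
    if (vpn >= NUM_PAGES || frame_of[vpn] < 0)
        return -1;                    /* fault: the OS must load the page   */
    return ((int64_t)frame_of[vpn] << PAGE_BITS) | disp;
}

int main(void) {
    int32_t table[NUM_PAGES];
    for (int i = 0; i < NUM_PAGES; i++) table[i] = -1;   /* nothing resident  */
    table[3] = 7;                                        /* page 3 -> frame 7 */
    printf("%lld\n", (long long)map(table, (3u << PAGE_BITS) + 100)); /* 28772 */
    printf("%lld\n", (long long)map(table, 5u << PAGE_BITS));         /* -1    */
    return 0;
}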

Page Table
We often use a page table to implement the address translation mechanism.
[Figure: the virtual page number indexes a page table entry containing a valid bit and either a physical page number (page resident in physical memory) or a disk address (page in disk storage).]

Paging Organization
[Figure: a physical memory of 1K-byte frames (frame 0 at P.A. 0, frame 1 at 1024, ..., frame 7 at 7168) and a virtual memory of 1K-byte pages (page 0 at V.A. 0, page 1 at 1024, ..., page 31 at 31744), related by the address translation MAP.]
The page is the unit of mapping and also the unit of transfer from virtual to physical memory.
Address mapping: the virtual address VA is split into a page number and a displacement. The page number, together with the Page Table Base Register, indexes into the page table, which is itself located in physical memory. The entry holds a valid bit V, access rights, and a frame number; the frame number is combined with the displacement (actually, concatenation is more likely than addition) to form the physical address PA.
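A short worked example of this split for the 1K-byte pages in the figure; the mapping of page 31 to frame 7 is a hypothetical choice for illustration:

#include <stdio.h>

int main(void) {
    unsigned va   = 31744 + 100;        /* 100 bytes into virtual page 31     */
    unsigned page = va >> 10;           /* page number  = VA / 1024   -> 31   */
    unsigned disp = va & 0x3FF;         /* displacement = VA mod 1024 -> 100  */

    /* Suppose the page table maps page 31 to frame 7 (illustrative only). */
    unsigned frame = 7;
    unsigned pa = (frame << 10) | disp; /* concatenate frame no. and disp -> 7268 */

    printf("page=%u disp=%u pa=%u\n", page, disp, pa);
    return 0;
}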

Address Mapping Algorithm
If V = 1, the page is in main memory at the frame address stored in the table; otherwise the entry gives the location of the page in secondary memory.
Access rights: R = read-only, R/W = read/write, X = execute only.
If the kind of access is not compatible with the specified access rights, then protection_violation_fault.
If the valid bit is not set, then page fault (see the sketch after the next slide).
Protection Fault: an access-rights violation; causes a trap to a hardware, microcode, or software fault handler.
Page Fault: the page is not resident in physical memory; also causes a trap, usually accompanied by a context switch: the current process is suspended while the page is fetched from secondary storage.

Fragmentation & Relocation
Fragmentation is when areas of memory space become unavailable for some reason.
Relocation: move a program or data to a new region of the address space (possibly fixing up all the pointers).
External fragmentation: space left between blocks.
Internal fragmentation: a program is not an integral number of pages, so part of the last page frame is "wasted" (obviously less of an issue as physical memories get larger).
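The address mapping checks above can be sketched in C roughly as follows; the PTE layout, the fault codes, and the 4 KB page size are illustrative assumptions rather than the slides' definitions:

#include <stdint.h>
#include <stdio.h>

enum access { READ, WRITE, EXECUTE };
enum result { OK, PAGE_FAULT, PROTECTION_FAULT };

struct pte {
    unsigned valid : 1;        /* V bit                        */
    unsigned write : 1;        /* writes allowed (R/W)         */
    unsigned exec  : 1;        /* execution allowed (X)        */
    unsigned frame : 20;       /* frame number when valid      */
};

enum result translate(struct pte e, enum access kind,
                      uint32_t disp, uint32_t *pa) {
    if (!e.valid)
        return PAGE_FAULT;                     /* trap: fetch page from secondary storage  */
    if ((kind == WRITE && !e.write) || (kind == EXECUTE && !e.exec))
        return PROTECTION_FAULT;               /* trap: access-rights violation            */
    *pa = ((uint32_t)e.frame << 12) | disp;    /* page resident: form the physical address */
    return OK;
}

int main(void) {
    struct pte e = { .valid = 1, .write = 0, .exec = 0, .frame = 42 };
    uint32_t pa;
    printf("%d\n", translate(e, READ,  0x123, &pa));   /* 0: OK               */
    printf("%d\n", translate(e, WRITE, 0x123, &pa));   /* 2: PROTECTION_FAULT */
    return 0;
}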

Optimal Page Size
Choose the page size that minimizes fragmentation. A large page size makes internal fragmentation more severe, but it reduces the number of pages in the name space and therefore the size of the page tables; a small page size does the opposite. In general, the trend is toward larger page sizes because:
-- memories get larger as the price of RAM drops;
-- the gap between processor speed and disk speed grows wider;
-- programmers desire larger virtual address spaces.
Most machines use 4K-byte pages today, and page sizes are likely to increase.

Page Replacement Algorithms
Just like cache block replacement!
Least Recently Used (LRU):
-- selects the least recently used page for replacement;
-- requires knowledge about past references, and is more difficult to implement (thread the page table entries from most recently referenced to least recently referenced; when a page is referenced it is placed at the head of the list; the page at the end of the list is the one to replace);
-- good performance; recognizes the principle of locality.
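Below is a sketch of the list-threaded LRU just described: frames sit on a doubly linked list ordered from most to least recently referenced, a referenced page moves to the head, and the tail is the replacement victim. The frame count, names, and reference trace are illustrative assumptions:

#include <stdio.h>

#define NFRAMES 4

struct node { int page; struct node *prev, *next; };

static struct node frames[NFRAMES];
static struct node *head, *tail;

static void init(void) {
    for (int i = 0; i < NFRAMES; i++) {
        frames[i].page = -1;                                  /* -1 = empty frame */
        frames[i].prev = (i > 0) ? &frames[i - 1] : NULL;
        frames[i].next = (i < NFRAMES - 1) ? &frames[i + 1] : NULL;
    }
    head = &frames[0];
    tail = &frames[NFRAMES - 1];
}

static void move_to_head(struct node *n) {
    if (n == head) return;
    n->prev->next = n->next;                   /* unlink n from the list */
    if (n->next) n->next->prev = n->prev;
    else         tail = n->prev;
    n->prev = NULL;                            /* relink n at the head   */
    n->next = head;
    head->prev = n;
    head = n;
}

/* Reference a page: on a hit, move it to the head of the list; on a miss,
 * reuse the tail frame, which holds the least recently used page. */
static void reference(int page) {
    for (struct node *n = head; n; n = n->next)
        if (n->page == page) { move_to_head(n); return; }
    if (tail->page >= 0)
        printf("fault on page %d: evict page %d (the LRU page)\n", page, tail->page);
    else
        printf("fault on page %d: placed in a free frame\n", page);
    tail->page = page;
    move_to_head(tail);
}

int main(void) {
    init();
    int refs[] = {1, 2, 3, 4, 1, 5};           /* referencing 5 evicts 2, the LRU page */
    for (int i = 0; i < 6; i++) reference(refs[i]);
    return 0;
}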

Example: suppose the most recent page references (in order) were 10, 12, 9, 7, 11, 10. Page 19 is then referenced, is not present in memory, and memory is full. Which page should be replaced under LRU?

Page Replacement (Continued)
Not Recently Used (NRU): associated with each page is a reference flag such that ref flag = 1 if the page has been referenced in the recent past, and = 0 otherwise.
-- If replacement is necessary, choose any page frame whose reference bit is 0; this is a page that has not been referenced in the recent past.
-- Clock implementation of NRU: the page table entries, each with a ref bit, are scanned by a last replaced pointer (lrp). If replacement is to take place, advance the lrp to the next entry (mod table size) until one with a 0 bit is found; this is the target for replacement. As a side effect, all examined PTEs have their reference bits set to zero.
An optimization is to search for a page that is both not recently referenced AND not dirty.
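The clock implementation of NRU can be sketched as follows; the frame count and the assumption that the reference bits are set elsewhere (by hardware on each access) are illustrative:

#include <stdio.h>

#define NFRAMES 8

static int ref_bit[NFRAMES];    /* set to 1 when the frame is referenced (assumed) */
static int lrp = 0;             /* last replaced pointer, i.e. the clock hand      */

/* Advance the pointer until a frame with ref bit 0 is found, clearing the
 * ref bit of every frame examined along the way (the slide's side effect). */
int clock_victim(void) {
    for (;;) {
        lrp = (lrp + 1) % NFRAMES;
        if (ref_bit[lrp] == 0)
            return lrp;         /* not referenced recently: replace this frame */
        ref_bit[lrp] = 0;       /* give it a second chance and keep scanning   */
    }
}

int main(void) {
    ref_bit[1] = ref_bit[2] = 1;                /* frames 1 and 2 recently used */
    printf("victim = %d\n", clock_victim());    /* prints 3                     */
    return 0;
}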

Demand Paging and Prefetching Pages
Fetch policy: when is a page brought into memory? If pages are loaded solely in response to page faults, the policy is demand paging. An alternative is prefetching: anticipate future references and load such pages before their actual use.
+ reduces page transfer overhead
- may displace pages already in page frames, which could adversely affect the page fault rate
- predicting future references is usually difficult
Most systems implement demand paging without prepaging. (One way to obtain the effect of prefetching is to increase the page size.)

Virtual Address and a Cache
[Figure: the CPU issues a VA, the translation mechanism produces a PA, the PA accesses the cache, and only on a cache miss does the access go to main memory.]
It takes an extra memory access to translate a VA to a PA. This makes cache access very expensive, and this is the "innermost loop" that you want to go as fast as possible.
ASIDE: why access the cache with a PA at all? Virtually addressed caches have the synonym problem: two different virtual addresses can map to the same physical address, so two different cache entries can hold data for the same physical address. On an update, all cache entries with the same physical address must be updated, or memory becomes inconsistent. Determining this requires significant hardware, essentially an associative lookup on the physical address tags to see whether there are multiple hits.
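One way to see the synonym problem from user code (an illustration under POSIX, not something on the slides) is to map the same file page at two different virtual addresses with mmap: a store through one virtual address is visible through the other because both name the same physical page, which is exactly the aliasing a virtually addressed cache would have to detect. The file path is arbitrary and error checking is omitted for brevity:

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void) {
    int fd = open("/tmp/synonym_demo", O_RDWR | O_CREAT, 0600);
    ftruncate(fd, 4096);                          /* one page of backing store   */

    char *va1 = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    char *va2 = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    strcpy(va1, "written through va1");           /* store through one synonym   */
    printf("read through va2: %s\n", va2);        /* visible through the other   */
    printf("va1=%p va2=%p (two VAs, one physical page)\n", (void *)va1, (void *)va2);

    close(fd);
    return 0;
}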

TLBs --- Making Address Translation Fast
A way to speed up translation is to use a special cache of recently used page table entries. This has many names, but the most frequently used is Translation Lookaside Buffer, or TLB.
[Figure: a TLB entry holds the virtual page number as a tag plus Valid, Dirty, Ref, and Access bits and the physical page address; on a TLB miss the page table supplies the physical page address or the disk address.]

Translation Look-Aside Buffers
Just like any other cache, the TLB can be organized as fully associative, set associative, or direct mapped. TLBs are usually small, typically not more than 128-256 entries even on high-end machines; this permits a fully associative lookup on those machines. Most mid-range machines use small n-way set associative organizations.
[Figure: the CPU presents the VA to the TLB lookup (about 1/2 t); on a TLB hit the PA goes to the cache (t), while a TLB miss requires a full translation first; a cache miss goes on to main memory (20 t).]
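A sketch of a small fully associative TLB consulted before the page table; the entry count, the page size, the round-robin replacement, and the stub page-table walk are all illustrative assumptions:

#include <stdint.h>
#include <stdio.h>

#define TLB_ENTRIES 16
#define PAGE_BITS   12

struct tlb_entry { int valid; uint32_t vpn; uint32_t frame; };
static struct tlb_entry tlb[TLB_ENTRIES];
static int next_victim;                              /* trivial round-robin replacement */

/* Stub standing in for the page-table access; a real system would index
 * the page table in memory here (and might take a page fault). */
static int page_table_walk(uint32_t vpn, uint32_t *frame) {
    *frame = vpn;                                    /* identity map, illustration only */
    return 0;
}

int translate(uint32_t va, uint32_t *pa) {
    uint32_t vpn  = va >> PAGE_BITS;
    uint32_t disp = va & ((1u << PAGE_BITS) - 1);

    for (int i = 0; i < TLB_ENTRIES; i++)            /* associative lookup: compare all tags */
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            *pa = (tlb[i].frame << PAGE_BITS) | disp;   /* TLB hit: fast path */
            return 0;
        }

    uint32_t frame;                                  /* TLB miss: walk the page table */
    if (page_table_walk(vpn, &frame) != 0)
        return -1;                                   /* page fault                    */
    tlb[next_victim] = (struct tlb_entry){1, vpn, frame};
    next_victim = (next_victim + 1) % TLB_ENTRIES;
    *pa = (frame << PAGE_BITS) | disp;
    return 0;
}

int main(void) {
    uint32_t pa;
    translate(0x00003123u, &pa);                     /* miss: fills the TLB            */
    translate(0x00003456u, &pa);                     /* hit: same virtual page         */
    printf("pa = 0x%x\n", pa);                       /* 0x3456 with the identity stub  */
    return 0;
}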

Segmentation (see x86)
An alternative to paging (often combined with paging). Segments are allocated for each program module and may be of different sizes; the segment is the unit of transfer between physical memory and disk.
[Figure: the virtual address is a segment number plus a displacement. The segment table base register (BR) and the segment number select a segment table entry holding a Presence bit, Access rights, the segment Length, and the Physical Address, i.e., the start address of the segment; the physical address is the segment start address plus the displacement.]
Faults: missing segment (Present = 0), overflow (the displacement exceeds the segment length), protection violation (the access is incompatible with the segment protection). These checks are sketched in code after the next slide.
Segment-based addressing is sometimes used to implement capabilities, i.e., hardware support for sophisticated protection mechanisms.

Segment Based Addressing
Three serious drawbacks:
(1) storage allocation with variable-sized blocks (best fit vs. first fit vs. buddy system);
(2) external fragmentation: physical memory is allocated in such a fashion that all remaining pieces are too small to be allocated to any segment; solved by expensive run-time memory compaction;
(3) non-linear addresses: how do they match pointer arithmetic in C?
The best of both worlds: paged segmentation schemes, where the virtual address is seg # | page # | displacement. Used by IBM: 4K-byte pages, 16 x 1 Mbyte or 64 x 64 Kbyte segments.
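The segment-table translation and its three faults can be sketched as follows; the field names, the simplified access rights, and the sample table in main are assumptions for illustration:

#include <stdint.h>
#include <stdio.h>

enum seg_result { SEG_OK, SEG_MISSING, SEG_OVERFLOW, SEG_PROTECTION };

struct seg_entry {
    int      present;       /* presence bit                        */
    int      writable;      /* simplified access rights            */
    uint32_t length;        /* segment length in bytes             */
    uint32_t base;          /* physical start address of segment   */
};

enum seg_result seg_translate(const struct seg_entry *table, uint32_t seg,
                              uint32_t disp, int is_write, uint32_t *pa) {
    const struct seg_entry *e = &table[seg];
    if (!e->present)              return SEG_MISSING;     /* missing segment fault       */
    if (disp >= e->length)        return SEG_OVERFLOW;    /* displacement exceeds length */
    if (is_write && !e->writable) return SEG_PROTECTION;  /* access rights violation     */
    *pa = e->base + disp;         /* unlike paging, this really is an addition */
    return SEG_OK;
}

int main(void) {
    struct seg_entry table[2] = { {1, 1, 4096, 0x8000}, {0, 0, 0, 0} };
    uint32_t pa = 0;
    printf("%d\n", seg_translate(table, 0, 100, 0, &pa));   /* 0: SEG_OK, pa = 0x8064 */
    printf("%d\n", seg_translate(table, 1, 0, 0, &pa));     /* 1: SEG_MISSING         */
    return 0;
}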

Conclusion #1
Virtual memory was invented as another level of the hierarchy. Today VM allows many processes to share a single memory without having to swap all processes to disk, and protection has become more important. (Multi-level) page tables map virtual addresses to physical addresses. TLBs are important for fast translation, and TLB misses are significant for performance.

Conclusion #2
The theory of algorithms and compilers is based on the number of operations. Compilers remove operations and simplify them: integer adds << integer multiplies << FP adds << FP multiplies. With advanced pipelines, these operations take similar time. As clock rates get higher and pipelines get longer, instructions take less time, but DRAMs get only slightly faster (although much larger). Today, execution time is a function of (operations, cache misses). Given the importance of caches, what does this mean for compilers? For data structures? For algorithms?

Common Framework for Memory Hierarchy
Question 1: Where can a block be placed? Cache: direct mapped or n-way set associative. VM: fully associative.
Question 2: How is a block found? Use an index, index the set and search among its elements, search all entries, or use a separate lookup table.
Question 3: Which block should be replaced? Random, LRU, or NRU.
Question 4: What happens on a write? Write through vs. write back; write allocate vs. write no-allocate on a write miss.