ECE468 Computer Organization and Architecture. Virtual Memory


ECE468 Computer Organization and Architecture: Virtual Memory
ECE468 vm.1

Review: The Principle of Locality
[Figure: probability of reference plotted across the address space, 0 to 2^n]
The Principle of Locality: programs access a relatively small portion of the address space at any instant of time. Example: 90% of execution time is spent in 10% of the code.
ECE468 vm.2

Review: The Need to Make a Decision!
Direct Mapped Cache: each memory location can be mapped to only 1 cache location, so there is no decision to make :-) - the new item simply replaces the previous item in that cache location.
N-way Set Associative Cache: each memory location has a choice of N cache locations.
Fully Associative Cache: each memory location can be placed in ANY cache location.
On a cache miss in an N-way set associative or fully associative cache: bring in the new block from memory and throw out a cache block to make room for it. Damn! We need to decide which block to throw out!
ECE468 vm.3

Review: Summary
The Principle of Locality: temporal locality vs. spatial locality.
Four questions for any cache:
- Where to place a block in the cache
- How to locate a block in the cache
- Which block to replace
- Write policy: write through vs. write back; what to do on a write miss
Three major categories of cache misses:
- Compulsory misses: sad facts of life. Example: cold-start misses.
- Conflict misses: reduced by increasing cache size and/or associativity. Nightmare scenario: the ping-pong effect!
- Capacity misses: reduced by increasing cache size.
ECE468 vm.4

Review: Levels of the Memory Hierarchy

Level         Capacity    Access Time   Cost
Registers     100s bytes  <10s ns
Cache         K bytes     10-100 ns     $.01-.001/bit
Main Memory   M bytes     100 ns-1 us   $.01-.001/bit
Disk          G bytes     ms            10^-3 to 10^-4 cents/bit
Tape          infinite    sec-min       10^-6 cents/bit

Staging/transfer units between levels:
- Registers <-> Cache: instructions and operands, 1-8 bytes, managed by the program/compiler
- Cache <-> Main Memory: blocks, 8-128 bytes, managed by the cache controller
- Main Memory <-> Disk: pages, 512-4K bytes, managed by the OS
- Disk <-> Tape: files, Mbytes, managed by the user/operator

Upper levels are smaller and faster; lower levels are larger and slower.
ECE468 vm.5

Today's Topic --- Virtual Memory
Provides the illusion of very large memory:
- the sum of the memory of many jobs can be greater than physical memory
- the address space of each job can be larger than physical memory
Allows the available (fast and expensive) physical memory to be very well utilized.
Simplifies memory management (the main reason today).
Exploits the memory hierarchy to keep average access time low; involves at least two storage levels: main and secondary.
Virtual Address -- an address used by the programmer.
Virtual Address Space -- the collection of such addresses.
Memory Address -- the address of a word in physical memory; also known as the physical address or real address.
ECE468 vm.6

Basic Issues in VM System Design
- The size of the information blocks transferred from secondary to main storage.
- When a block is brought into main memory (M) and M is full, some region of M must be released to make room for the new block --> replacement policy.
- Which region of M is to hold the new block --> placement policy.
- A missing item is fetched from secondary memory only on the occurrence of a fault --> fetch/load policy.
(Hierarchy: reg <-> cache <-> mem <-> disk)

Paging Organization
Virtual and physical address spaces are partitioned into blocks of equal size: the virtual blocks are called pages and the physical blocks are called page frames.
ECE468 vm.7

Address Map
V = {0, 1, ..., n-1}  virtual address space
M = {0, 1, ..., m-1}  physical address space, with n > m
MAP: V --> M U {0}  address mapping function
MAP(a) = a'  if the data at virtual address a is present at physical address a' in M
MAP(a) = 0   if the data at virtual address a is not present in M
The processor issues a virtual address a; the address translation mechanism either produces the physical address a' or raises a missing-item fault, and the fault handler has the OS transfer the item from secondary memory into main memory.
ECE468 vm.8
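To make MAP concrete, here is a minimal sketch in C, assuming a flat page table indexed by virtual page number; the names (PAGE_SHIFT, NUM_PAGES, page_table) are illustrative, not from the slides:

```c
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT 12                 /* 4 KB pages */
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NUM_PAGES  1024               /* virtual space size / PAGE_SIZE */

typedef struct {
    bool     valid;   /* is the page present in main memory? */
    uint32_t frame;   /* physical frame number if valid */
} pte_t;

static pte_t page_table[NUM_PAGES];

/* Returns the physical address a', or 0 (a fault) if the page
 * holding virtual address a is not present in M. */
uint32_t map(uint32_t a) {
    uint32_t vpn    = a >> PAGE_SHIFT;        /* virtual page number */
    uint32_t offset = a & (PAGE_SIZE - 1);    /* displacement within page */
    if (vpn >= NUM_PAGES || !page_table[vpn].valid)
        return 0;                             /* missing-item fault */
    return (page_table[vpn].frame << PAGE_SHIFT) | offset;
}
```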

Paging Organization
Example with 1K pages: physical memory holds frames 0, 1, ..., 7 at physical addresses 0, 1024, ..., 7168; virtual memory holds pages 0, 1, ..., 31 at virtual addresses 0, 1024, ..., 31744. The page is both the unit of mapping and the unit of transfer from virtual to physical memory.

Address Mapping
A virtual address is split into a page number and a displacement (10 bits of displacement for 1K pages). The Page Table Base Register plus the page number index into the page table; each page table entry holds a valid bit V, access rights, and a physical frame address PA. The page table itself is located in physical memory. The frame address is combined with the displacement to form the physical memory address (actually, concatenation is more likely than addition).
ECE468 vm.9

Address Mapping Algorithm
If V = 1, the page is in main memory at the frame address stored in the table; otherwise the page is located in secondary memory.
Access rights: R = read-only, R/W = read/write, X = execute only. If the kind of access is not compatible with the specified access rights, a protection_violation_fault is raised. If the valid bit is not set, a page_fault is raised.
Protection Fault: an access-rights violation; causes a trap to a hardware, microcode, or software fault handler.
Page Fault: the page is not resident in physical memory; also causes a trap, usually accompanied by a context switch: the current process is suspended while the page is fetched from secondary storage.
ECE468 vm.10
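A hedged sketch of this address mapping algorithm follows. The PTE layout and the fault-handler functions are illustrative assumptions; real hardware would trap to the OS rather than call C functions:

```c
#include <stdint.h>
#include <stdbool.h>

typedef enum { ACC_R, ACC_RW, ACC_X } access_t;

typedef struct {
    bool     valid;     /* V bit: page resident in main memory */
    access_t rights;    /* R, R/W, or X */
    uint32_t frame;     /* physical frame number */
} pte_t;

extern void page_fault(uint32_t vpn);                 /* fetch page from disk */
extern void protection_violation_fault(uint32_t vpn);

uint32_t translate(pte_t *table, uint32_t vpn, uint32_t disp,
                   access_t requested, unsigned page_shift) {
    pte_t *pte = &table[vpn];
    if (!pte->valid)
        page_fault(vpn);      /* trap; process suspended while page loads */
    /* e.g. a write is only compatible with R/W rights */
    if (requested == ACC_RW && pte->rights != ACC_RW)
        protection_violation_fault(vpn);
    /* concatenate frame number and displacement */
    return (pte->frame << page_shift) | disp;
}
```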

Fragmentation & Relocation
Fragmentation is when areas of memory space become unavailable for some reason.
Relocation: move a program or data to a new region of the address space (possibly fixing up all the pointers).
External fragmentation: space left between blocks.
Internal fragmentation: a program is not an integral number of pages, so part of the last page frame is "wasted" (obviously less of an issue as physical memories get larger).
ECE468 vm.11

Optimal Page Size
Choose the page size that minimizes fragmentation. A large page size makes internal fragmentation more severe, BUT a small page size increases the number of pages per name space => larger page tables. A worked example follows below. In general, the trend is toward larger page sizes because:
-- memories get larger as the price of RAM drops
-- the gap between processor speed and disk speed grows wider
-- programmers desire larger virtual address spaces
Most machines use 4K-byte pages today, with page sizes likely to increase.
ECE468 vm.12
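To make the trade-off concrete, consider a 32-bit virtual address space with 4-byte page table entries (assumed numbers, not from the slides). With 4K pages there are 2^32 / 2^12 = 2^20 pages, so a flat page table needs 2^20 x 4 B = 4 MB, and the average internal fragmentation is half a page, or 2 KB per program. Quadrupling the page size to 16K shrinks the table to 1 MB but raises the expected waste in the last page to 8 KB.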

Page Replacement Algorithms
Just like cache block replacement!
Least Recently Used (LRU):
-- selects the least recently used page for replacement
-- requires knowledge about past references, and is more difficult to implement (thread through the page table entries from most recently referenced to least recently referenced; when a page is referenced, it is placed at the head of the list; the page at the end of the list is the one to replace)
-- good performance; recognizes the principle of locality
ECE468 vm.13

Example: Suppose the most recent page references (in order) were 10, 12, 9, 7, 11, 10. Now page 9 is referenced but is not present in memory, and memory is full. Which page should be replaced under LRU?
ECE468 vm.14
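Here is a minimal sketch of the threaded LRU list the slide describes: entries run from most recently used (head) to least recently used (tail), so moving a page to the head on reference and evicting the tail are both O(1). All names are illustrative. (For the example above, if memory holds three frames, a detail the slide does not state, the string 10, 12, 9, 7, 11, 10 leaves 7, 11, 10 resident with 7 least recently used, so LRU would evict 7.)

```c
#include <stdint.h>
#include <stddef.h>

typedef struct lru_node {
    uint32_t page;
    struct lru_node *prev, *next;
} lru_node;

static lru_node *head, *tail;   /* head = MRU, tail = LRU */

static void unlink_node(lru_node *n) {
    if (n->prev) n->prev->next = n->next; else head = n->next;
    if (n->next) n->next->prev = n->prev; else tail = n->prev;
}

/* Called on every reference to a resident page: move it to the head. */
void touch(lru_node *n) {
    if (n == head) return;
    unlink_node(n);
    n->prev = NULL;
    n->next = head;
    if (head) head->prev = n;
    head = n;
    if (!tail) tail = n;
}

/* Called when a victim is needed: the tail is the LRU page. */
lru_node *evict(void) {
    lru_node *victim = tail;
    if (victim) unlink_node(victim);
    return victim;
}
```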

Page Replacement (Continued)
Not Recently Used (NRU): associated with each page is a reference flag such that
  ref flag = 1 if the page has been referenced in the recent past
  ref flag = 0 otherwise
-- If replacement is necessary, choose any page frame whose reference bit is 0: a page that has not been referenced in the recent past.
-- Clock implementation of NRU: the page table entries are arranged in a circle with a last replaced pointer (lrp). If replacement is to take place, advance the lrp to the next entry (mod table size) until one with a 0 reference bit is found; this is the target for replacement. As a side effect, all examined PTEs have their reference bits set to zero.
An optimization is to search for a page that is both not recently referenced AND not dirty.
ECE468 vm.15

Demand Paging and Prefetching Pages
Fetch policy: when is a page brought into memory? If pages are loaded solely in response to page faults, the policy is demand paging.
An alternative is prefetching: anticipate future references and load such pages before their actual use.
+ reduces page transfer overhead
- may remove pages already in page frames, which could adversely affect the page fault rate
- predicting future references is usually difficult
Most systems implement demand paging without prepaging. (One way to obtain the effect of prefetching is to increase the page size.)
ECE468 vm.16
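A sketch of the clock implementation of NRU described above; TABLE_SIZE and the table array are illustrative assumptions:

```c
#include <stdbool.h>
#include <stddef.h>

#define TABLE_SIZE 8

typedef struct {
    bool ref;    /* reference bit, set by hardware on each access */
    /* ... frame number, valid bit, etc. ... */
} pte_t;

static pte_t table[TABLE_SIZE];
static size_t lrp;   /* last replaced pointer (the clock hand) */

/* Advance the hand until an entry with ref == 0 is found; clear
 * the ref bit of every entry examined along the way, so the loop
 * terminates within two sweeps even if all bits start at 1. */
size_t clock_choose_victim(void) {
    for (;;) {
        lrp = (lrp + 1) % TABLE_SIZE;
        if (!table[lrp].ref)
            return lrp;          /* not recently used: replace this one */
        table[lrp].ref = false;  /* examined: give it a second chance */
    }
}
```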

Virtual Address and a Cache
With a physically addressed cache, the CPU's virtual address (VA) must be translated to a physical address (PA) before the cache is accessed; on a miss, the PA goes to main memory. It takes an extra memory access to translate the VA to a PA. This makes cache access very expensive, and this is the "innermost loop" that you want to go as fast as possible.
ASIDE: Why access the cache with a PA at all? Because VA caches have a problem!
Synonym problem: two different virtual addresses can map to the same physical address => two different cache entries holding data for the same physical address. On an update, all cache entries with the same physical address must be updated, or memory becomes inconsistent. Determining this requires significant hardware: essentially an associative lookup on the physical address tags to see if you have multiple hits.
ECE468 vm.17

TLBs
A way to speed up translation is to use a special cache of recently used page table entries. This has many names, but the most frequently used is Translation Lookaside Buffer, or TLB. Each entry holds: virtual address (tag), physical address, dirty bit, reference bit, valid bit, and access rights.
TLB access time is comparable to, though shorter than, cache access time (and still much less than main memory access time).
ECE468 vm.18
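A sketch of a tiny fully associative TLB holding the fields the slide lists; TLB_ENTRIES and the field widths are assumptions:

```c
#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 64

typedef struct {
    uint32_t vpn;      /* virtual page number (the tag) */
    uint32_t pfn;      /* physical frame number */
    bool     dirty;
    bool     ref;
    bool     valid;
    uint8_t  access;   /* access-rights bits */
} tlb_entry;

static tlb_entry tlb[TLB_ENTRIES];

/* Fully associative lookup: search every entry for a matching,
 * valid tag. Returns true and fills *pfn on a TLB hit; on a miss
 * the caller falls back to the page table walk and refills. */
bool tlb_lookup(uint32_t vpn, uint32_t *pfn) {
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            tlb[i].ref = true;   /* record the reference for NRU */
            *pfn = tlb[i].pfn;
            return true;
        }
    }
    return false;  /* TLB miss */
}
```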

Translation Look-Aside Buffers
Just like any other cache, the TLB can be organized as fully associative, set associative, or direct mapped. TLBs are usually small, typically no more than 128-256 entries even on high-end machines. This permits a fully associative lookup on these machines. Most mid-range machines use small n-way set associative organizations.
[Figure: the CPU issues a VA; on a TLB hit the PA goes to the cache, and on a cache miss to main memory; on a TLB miss the full translation is performed and the data returned. Access times are roughly t/2 for the TLB lookup, t for the cache, and 20t for main memory.]
ECE468 vm.19

Segmentation (see x86)
An alternative to paging (often combined with paging). Segments are allocated for each program module and may be of different sizes; the segment is the unit of transfer between physical memory and disk.
A virtual address consists of a segment number and a displacement. The segment number indexes (from a base register BR) into a segment table whose entries hold: a presence bit, access rights, the segment length, and the physical start address of the segment. The physical address is the segment's start address plus the displacement.
Faults:
- missing segment (Present = 0)
- overflow (displacement exceeds segment length)
- protection violation (access incompatible with segment protection)
Segment-based addressing is sometimes used to implement capabilities, i.e., hardware support for sophisticated protection mechanisms.
ECE468 vm.20
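A sketch of the segment translation just described; the struct layout and fault-handler functions are illustrative assumptions:

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    bool     present;   /* presence bit */
    uint8_t  access;    /* access rights */
    uint32_t length;    /* segment length */
    uint32_t base;      /* physical start address of the segment */
} seg_entry;

extern void missing_segment_fault(uint32_t seg);
extern void overflow_fault(uint32_t seg);

uint32_t seg_translate(seg_entry *seg_table, uint32_t seg, uint32_t disp) {
    seg_entry *e = &seg_table[seg];
    if (!e->present)
        overflow_fault_placeholder: (void)0,
        missing_segment_fault(seg);   /* Present = 0: missing segment */
    if (disp >= e->length)
        overflow_fault(seg);          /* displacement exceeds segment length */
    /* (an access-rights check against e->access would go here as well) */
    return e->base + disp;            /* base address + displacement */
}
```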

Segment-Based Addressing
Three serious drawbacks:
(1) storage allocation with variable-sized blocks (best fit vs. first fit vs. buddy system)
(2) external fragmentation: physical memory gets allocated in such a fashion that all remaining pieces are too small to be allocated to any segment; solved by expensive run-time memory compaction
(3) a non-linear address space, which does not match pointer arithmetic in C
The best of both worlds: paged segmentation schemes. The virtual address is split into a segment number, a page number, and a displacement. Used by IBM: 4K-byte pages, with 16 x 1-Mbyte or 64 x 64-Kbyte segments.
ECE468 vm.21

Conclusion #1
Virtual memory was invented as another level of the hierarchy. Today, VM allows many processes to share a single memory without having to swap all processes to disk, and protection has become more important.
(Multi-level) page tables map virtual addresses to physical addresses. TLBs are important for fast translation, and TLB misses are significant in performance.
ECE468 vm.22
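A sketch of splitting a paged-segmentation virtual address into its three fields, using the IBM figures quoted above (4K pages, 16 x 1-Mbyte segments). That implies a 24-bit virtual address with a 4-bit segment number, 8-bit page number, and 12-bit displacement; the field names are illustrative:

```c
#include <stdint.h>

typedef struct {
    uint32_t seg;    /* selects the segment table entry */
    uint32_t page;   /* selects a page within the segment's page table */
    uint32_t disp;   /* byte offset within the page */
} va_fields;

va_fields split_va(uint32_t va) {
    va_fields f;
    f.disp = va & 0xFFF;          /* low 12 bits: displacement (4K page) */
    f.page = (va >> 12) & 0xFF;   /* next 8 bits: 256 pages per 1 MB segment */
    f.seg  = (va >> 20) & 0xF;    /* top 4 bits: 16 segments */
    return f;
}
```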

Conclusion #2
The theory of algorithms and compilers is based on the number of operations. Compilers remove operations and simplify them: integer adds << integer multiplies << FP adds << FP multiplies. With advanced pipelines, these operations take similar time. As clock rates get higher and pipelines get longer, instructions take less time, but DRAMs get only slightly faster (although much larger). Today, time is a function of (operations, cache misses). Given the importance of caches, what does this mean for compilers? Data structures? Algorithms?
ECE468 vm.23

Common Framework for Memory Hierarchy
Question 1: Where can a block be placed?
- Cache: direct mapped or n-way set associative
- VM: fully associative
Question 2: How is a block found?
- index, or index the set and search among its elements
- search all cache entries, or use a separate lookup table
Question 3: Which block should be replaced?
- Random, LRU, NRU
Question 4: What happens on a write?
- write through vs. write back
- write allocate vs. write no-allocate on a write miss
ECE468 vm.24