CPS 104 Computer Organization and Programming Lecture 20: Virtual Memory

CPS 104 Computer Organization and Programming
Lecture 20: Virtual Memory
Nov. 10, 1999
Dietolf (Dee) Ramm
http://www.cs.duke.edu/~dr/cps104.html

Outline of Today's Lecture
- Virtual memory
  - Paged virtual memory
  - Virtual-to-physical translation: the page table
  - Fragmentation
  - Page replacement policies
  - Reducing the virtual-to-physical address translation time: the TLB
  - Parallel access to the TLB and cache
  - Protection
  - Putting it all together: the SPARC-20 memory system

Virtual Memory

Memory System Management Problems
- Different programs have different memory requirements.
  - How to manage program placement?
- Different machines have different amounts of memory.
  - How to run the same program on many different machines?
- At any given time each machine runs a different set of programs.
  - How to fit the program mix into memory? Reclaiming unused memory? Moving code around?
- The amount of memory consumed by each program is dynamic (changes over time).
  - How to effect changes in memory location: add or subtract space?
- Program bugs can cause a program to generate reads and writes outside the program's address space.
  - How to protect one program from another?

Virtual Memory
- Provides the illusion of a very large memory:
  - The sum of the memory of many jobs can be greater than physical memory.
  - The address space of each job can be larger than physical memory.
- Allows the available (fast and expensive) physical memory to be well utilized.
- Simplifies memory management: code and data movement, protection, ... (the main reason today).
- Exploits the memory hierarchy to keep the average access time low.
- Involves at least two storage levels: main and secondary.
- Virtual Address -- address used by the programmer.
- Virtual Address Space -- the collection of such addresses.
- Memory Address -- address in physical memory, also known as the physical address or real address.

Review: A Simple Program's Layout
- What are the possible addresses generated by the program?
- How big is our DRAM?
- Is there more than one program running?
- If so, how do we allocate memory to each?
[Figure: the program's address space from 0 to 2^n - 1: a reserved region at the bottom, then text (e.g., add r,s1,s2), data, and the stack at the top.]

Paged Virtual Memory: Main Idea
- Divide memory (virtual and physical) into fixed-size blocks (pages, frames).
  - Pages in virtual space.
  - Frames in physical space.
- Make the page size a power of 2: page size = 2^k.
- All pages in the virtual address space are contiguous.
- Pages can be mapped into physical frames in any order.
- Some of the pages are in main memory (DRAM); some of the pages are on secondary memory (disk).
- All programs are written using the virtual address space.
- The hardware does on-the-fly translation between the virtual and physical address spaces.
- A page table is used to translate between virtual and physical addresses.

Basic Issues in Virtual Memory System Design
- Size of the information blocks (pages) that are transferred from secondary to main storage (M).
- If a block of information is brought into M and M is full, then some region of M must be released to make room for the new block --> replacement policy.
- Which region of M is to hold the new block --> placement policy.
- A missing item is fetched from secondary memory only on the occurrence of a fault --> demand load policy.
[Figure: memory hierarchy: registers, cache, main memory, disk.]
- Paging organization: the virtual and physical address spaces are partitioned into blocks of equal size -- pages in virtual space, frames in physical space.

Virtual and Physical Memories
[Figure: virtual pages Page-0 through Page-N map either to physical frames Frame-0 through Frame-5 or to pages held on disk.]

Virtual to Physical Address Translation
- Page size: 4K.
[Figure: a 32-bit virtual address (bits 31-12: virtual page number, bits 11-0: page offset) is translated through the page table into a physical address (bits 29-12: physical frame number, bits 11-0: page offset).]
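
A minimal C sketch of this translation path, assuming the 4 KB pages above and a simple linear page table (the type and function names are illustrative, not from the lecture):

    #include <stdint.h>
    #include <stdbool.h>

    #define PAGE_SHIFT  12u                      /* 4 KB pages: 2^12 bytes */
    #define OFFSET_MASK ((1u << PAGE_SHIFT) - 1)

    /* Hypothetical page-table entry: a valid bit plus a frame number. */
    typedef struct {
        bool     valid;
        uint32_t frame;                          /* physical frame number */
    } pte_t;

    /* Translate a 32-bit virtual address through a linear page table.
     * Returns true and fills *pa on success; false means a page fault. */
    bool translate(const pte_t *page_table, uint32_t va, uint32_t *pa)
    {
        uint32_t vpn    = va >> PAGE_SHIFT;      /* virtual page number, bits 31..12 */
        uint32_t offset = va & OFFSET_MASK;      /* page offset, bits 11..0          */

        if (!page_table[vpn].valid)
            return false;                        /* page not resident: page fault    */

        *pa = (page_table[vpn].frame << PAGE_SHIFT) | offset;
        return true;
    }

Note that the page offset passes through untranslated; only the page number changes.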

Address Map
- V = {0, 1, ..., n - 1}: virtual address space.
- M = {0, 1, ..., m - 1}: physical address space, with n > m.
- MAP: V --> M U {0}: the address mapping function.
  - MAP(a) = a' if the data at virtual address a is present at physical address a', with a' in M.
  - MAP(a) = 0 if the data at virtual address a is not present in M (missing-item fault).
[Figure: the processor issues virtual address a; the address translation mechanism either produces physical address a' into main memory, or signals a fault so the fault handler has the OS transfer the missing item from secondary to main memory.]

The Page Table
[Figure: the virtual page number (virtual address bits 31-12) indexes the page table, located by the page table register. Each entry holds a valid bit (V), a dirty bit (D), access control (AC), and the physical frame number. The frame number is concatenated with the page offset (bits 11-0) to form the physical address.]

Address Mapping Algorithm
- If V = 1, then the page is in main memory at the frame address stored in the table; otherwise the page is located in secondary memory.
- Access Control: R = read-only, R/W = read/write, X = execute-only.
- If the kind of access is not compatible with the specified access rights, then protection_violation_fault.
- If the valid bit is not set, then page fault.
- Protection fault: an access-control violation; causes a trap to a hardware or software fault handler.
- Page fault: the page is not resident in physical memory; also causes a trap, usually accompanied by a context switch: the current process is suspended while the page is fetched from secondary storage.
- Example: VAX 11/780.
  - Each process sees a 4 gigabyte (2^32 bytes) virtual address space: 1/2 for user regions, 1/2 for a system-wide name space shared by all processes.
  - The page size is 512 bytes (too small).
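
The same algorithm written out as a small C sketch (for illustration only: the enum and field names are assumptions, the valid bit is checked first here, and a real MMU performs these checks in hardware):

    #include <stdint.h>
    #include <stdbool.h>

    typedef enum { AC_R, AC_RW, AC_X } access_control_t;   /* read-only, read/write, execute-only */
    typedef enum { REQ_READ, REQ_WRITE, REQ_EXEC } request_t;
    typedef enum { MAP_OK, PAGE_FAULT, PROTECTION_FAULT } map_result_t;

    /* Hypothetical PTE: valid bit, access control, frame number. */
    typedef struct {
        bool             valid;
        access_control_t ac;
        uint32_t         frame;
    } pte_t;

    /* Classify one access against a page-table entry. */
    map_result_t check_access(const pte_t *pte, request_t req)
    {
        if (!pte->valid)
            return PAGE_FAULT;                   /* not resident: OS fetches the page from disk */

        bool allowed =
            (req == REQ_READ  && (pte->ac == AC_R || pte->ac == AC_RW)) ||
            (req == REQ_WRITE &&  pte->ac == AC_RW) ||
            (req == REQ_EXEC  &&  pte->ac == AC_X);

        return allowed ? MAP_OK : PROTECTION_FAULT;   /* access kind not compatible with rights */
    }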

Optimal Page Size
- Choose the page size that minimizes fragmentation (partial use of pages).
  - A large page size makes internal fragmentation more severe, BUT it decreases the number of pages per name space => smaller page tables.
- In general, the trend is toward larger page sizes because:
  - Memories get larger as the price of RAM drops.
  - The gap between processor speed and disk speed grows wider.
  - Programmers demand larger virtual address spaces (larger programs).
  - Larger pages mean a smaller page table.
- Most machines use 4K-8K byte pages today, with page sizes likely to increase.
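
A quick back-of-the-envelope check of the "smaller page tables" point, assuming a 32-bit virtual address space and 4-byte page-table entries (both assumptions, chosen to match the numbers used later in this lecture):

    #include <stdio.h>
    #include <stdint.h>

    /* Size of a linear page table for a 32-bit virtual address space,
     * assuming 4-byte page-table entries. */
    int main(void)
    {
        const uint64_t va_space  = 1ull << 32;
        const uint64_t pte_bytes = 4;

        for (uint64_t page = 4096; page <= 65536; page *= 2) {
            uint64_t entries = va_space / page;
            printf("%6llu-byte pages: %7llu PTEs, %4llu KB table\n",
                   (unsigned long long)page,
                   (unsigned long long)entries,
                   (unsigned long long)(entries * pte_bytes / 1024));
        }
        return 0;
    }

Doubling the page size from 4 KB to 8 KB halves the linear page table from 4 MB to 2 MB per process.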

Fragmentation
- Page table fragmentation occurs when page tables become very large because of large virtual address spaces; linearly mapped page tables can take up a sizable chunk of memory.
- Example: the VAX architecture (late 1970s). The virtual address is split into a 2-bit region field (00 = P0 region of the user process, 01 = P1 region of the user process, 10 = system name space), a 21-bit page number, and a 9-bit displacement.
  - NOTE: this implies that a page table could require up to 2^21 entries, each on the order of 4 bytes long (8 MBytes).
- Alternatives to the linear page table (both use an associative or hashed lookup on the page-number field):
  (1) Hardware associative mapping: requires one entry per page frame, O(M), rather than one per virtual page, O(N).
  (2) A "software" approach based on a hash table (inverted page table).
[Figure: the page-number portion of the virtual address is looked up associatively or via a hash, yielding an entry with Present, Access, Page#, and physical-address fields.]
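
A minimal sketch of alternative (2), the hashed (inverted) page table: one entry per physical frame, found by hashing the virtual page number. The bucket count, hash function, and chaining scheme here are illustrative assumptions, not details from the lecture:

    #include <stdint.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define HASH_BUCKETS 1024u             /* sized O(physical frames), not O(virtual pages) */

    /* One entry per physical frame: which virtual page currently occupies it. */
    typedef struct ipt_entry {
        uint32_t vpn;                      /* virtual page number           */
        uint32_t frame;                    /* physical frame number         */
        struct ipt_entry *next;            /* chain for hash collisions     */
    } ipt_entry_t;

    static ipt_entry_t *buckets[HASH_BUCKETS];

    static uint32_t hash_vpn(uint32_t vpn)
    {
        return (vpn * 2654435761u) % HASH_BUCKETS;   /* multiplicative hash */
    }

    /* Look up a virtual page number; returns true and the frame on a hit,
     * false on a miss (which would trigger a page fault). */
    bool ipt_lookup(uint32_t vpn, uint32_t *frame)
    {
        for (ipt_entry_t *e = buckets[hash_vpn(vpn)]; e != NULL; e = e->next) {
            if (e->vpn == vpn) {
                *frame = e->frame;
                return true;
            }
        }
        return false;
    }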

Page Replacement Algorithms
- Just like cache block replacement!
- Least Recently Used (LRU):
  - Selects the least recently used page for replacement.
  - Requires knowledge about past references, so it is more difficult to implement (thread the page table entries into a list ordered from most recently referenced to least recently referenced; when a page is referenced it is placed at the head of the list; the end of the list is the page to replace).
  - Good performance; recognizes the principle of locality.
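
The list-threading idea in the parenthetical above can be sketched as follows (a simplified illustration; updating such a list on every reference is exactly the implementation cost that motivates the approximation on the next slide):

    #include <stddef.h>

    /* Page-table entries threaded into a doubly linked list ordered from most
     * recently used (head) to least recently used (tail). An entry passed to
     * lru_touch must either already be on the list or have prev == next == NULL. */
    typedef struct lru_pte {
        struct lru_pte *prev, *next;
        /* ... frame number, valid bit, etc. ... */
    } lru_pte_t;

    static lru_pte_t *lru_head, *lru_tail;

    /* On a reference, move the entry to the head of the list. */
    void lru_touch(lru_pte_t *p)
    {
        if (p == lru_head)
            return;
        /* unlink */
        if (p->prev) p->prev->next = p->next;
        if (p->next) p->next->prev = p->prev;
        if (p == lru_tail) lru_tail = p->prev;
        /* relink at the head */
        p->prev = NULL;
        p->next = lru_head;
        if (lru_head) lru_head->prev = p;
        lru_head = p;
        if (!lru_tail) lru_tail = p;
    }

    /* The victim for replacement is the tail: the least recently used page. */
    lru_pte_t *lru_victim(void) { return lru_tail; }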

Page Replacement (Continued)
- Not Recently Used (NRU): associated with each page is a reference flag such that
  - ref flag = 1 if the page has been referenced in the recent past,
  - ref flag = 0 otherwise.
- If replacement is necessary, choose any page frame whose reference bit is 0: this is a page that has not been referenced in the recent past.
- Clock implementation of NRU: the page table entries carry a reference bit, and a "last replaced pointer" (lrp) sweeps over them. If replacement is to take place, advance the lrp to the next entry (mod table size) until one with a 0 bit is found; this is the target for replacement. As a side effect, all examined PTEs have their reference bits set to zero.
- An optimization is to search for a page that is both not recently referenced AND not dirty. Why?
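
A sketch of the clock hand described above (the frame count and field names are illustrative):

    #include <stdbool.h>
    #include <stddef.h>

    #define NUM_FRAMES 1024u

    /* One record per frame: reference and dirty bits, as on the slide. */
    typedef struct {
        bool referenced;
        bool dirty;
    } frame_info_t;

    static frame_info_t frames[NUM_FRAMES];
    static size_t lrp;                     /* "last replaced pointer" (clock hand) */

    /* Clock implementation of NRU: advance the hand until a frame with
     * reference bit 0 is found; clear the reference bits examined on the way. */
    size_t nru_choose_victim(void)
    {
        for (;;) {
            lrp = (lrp + 1) % NUM_FRAMES;
            if (!frames[lrp].referenced)
                return lrp;                /* not referenced recently: replace it */
            frames[lrp].referenced = false;/* give it a second chance             */
        }
    }

(On the slide's closing question: preferring a victim that is also not dirty avoids having to write the page back to disk before its frame is reused.)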

Virtual Address and a Cache
[Figure: the CPU issues a virtual address; translation produces the physical address, which is looked up in the cache; on a miss, main memory is accessed.]
- It takes an extra memory access to translate a VA to a PA.
- This makes cache access very expensive, and this is the "innermost loop" that you want to go as fast as possible.

Translation Lookaside Buffer (TLB)
- A way to speed up translation is to use a special cache of recently used page table entries -- this has many names, but the most frequently used is Translation Lookaside Buffer, or TLB.
- Each TLB entry holds: Virtual Address, Physical Address, Dirty, Ref, Valid, Access.
- TLB access time is comparable to cache access time (much less than main memory access time).
- A typical TLB is a 64-256 entry fully associative cache with random replacement.

Translation Look-Aside Buffers
- Just like any other cache, the TLB can be organized as fully associative, set associative, or direct mapped.
- TLBs are usually small, typically not more than 128-256 entries even on high-end machines. This permits a fully associative lookup on these machines. Many mid-range machines use small n-way set-associative organizations.
[Figure: the CPU presents a VA to the TLB; on a TLB hit the PA goes straight to the cache lookup; on a TLB miss the translation is performed first; on a cache miss main memory is accessed. Representative access times: TLB ~1/2 t, cache ~t, main memory ~20 t.]
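
A fully associative lookup amounts to comparing the virtual page number against every entry at once; the hardware does the comparisons in parallel, while the sketch below does them sequentially, purely for illustration (entry fields follow the previous TLB slide, names are assumptions):

    #include <stdint.h>
    #include <stdbool.h>

    #define TLB_ENTRIES 64u                /* typical 64-256 entries, per the slide */

    typedef struct {
        bool     valid;
        uint32_t vpn;                      /* virtual page number (the tag)  */
        uint32_t frame;                    /* physical frame number          */
        bool     dirty, referenced;
        uint8_t  access;                   /* access-control bits            */
    } tlb_entry_t;

    static tlb_entry_t tlb[TLB_ENTRIES];

    /* Fully associative lookup. */
    bool tlb_lookup(uint32_t vpn, uint32_t *frame)
    {
        for (unsigned i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].vpn == vpn) {
                *frame = tlb[i].frame;
                return true;               /* TLB hit                        */
            }
        }
        return false;                      /* TLB miss: walk the page table  */
    }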

TLB Design
- Must be fast; must not increase the critical path.
- Must achieve a high hit ratio.
- Generally small and highly associative (a 64-128 entry fully associative cache).
- Mapping changes:
  - A page is added to or removed from physical memory.
  - The processor must invalidate the TLB entry (special instructions).
- A PTE is a per-process entity:
  - Multiple processes use the same virtual addresses.
  - Context switches? Either flush the TLB, or add an ASID (PID) to each entry.
    - The ASID is part of the processor state and must be set on each context switch.

Hardware-Managed TLBs
[Figure: the TLB sits between the CPU and memory, with a hardware control unit handling misses.]
- Hardware handles the TLB miss.
- This dictates the page table organization.
- A complicated state machine walks the page table.
- An exception is raised only on an access violation.

Software-Managed TLBs
[Figure: the TLB sits between the CPU and memory, with miss handling done in software.]
- Software handles the TLB miss:
  - The OS reads translations from the page table and puts them in the TLB using special instructions.
- Flexible page table organization.
- Simple hardware to detect a hit or a miss.
- An exception is raised on a TLB miss or an access violation.
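
A rough model of such a software miss handler (the simulated TLB, random replacement, and function names are illustrative; on a real machine the entry is installed with a special instruction such as MIPS tlbwr, and the page-fault path is the OS's, not shown here):

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdlib.h>

    #define TLB_ENTRIES 64
    typedef struct { bool valid; uint32_t vpn, frame; } tlb_entry_t;
    static tlb_entry_t tlb[TLB_ENTRIES];   /* stand-in for the hardware TLB */

    /* Stand-in for the machine-specific "write TLB entry" instruction. */
    static void tlb_write_random(uint32_t vpn, uint32_t frame)
    {
        tlb_entry_t *slot = &tlb[rand() % TLB_ENTRIES];   /* random replacement */
        slot->valid = true;
        slot->vpn   = vpn;
        slot->frame = frame;
    }

    typedef struct { bool valid; uint32_t frame; } pte_t;

    /* Invoked by the TLB-miss exception: read the translation from the page
     * table and install it, or hand a missing page to the OS page-fault path. */
    void tlb_miss_handler(pte_t *page_table, uint32_t faulting_vpn)
    {
        pte_t *pte = &page_table[faulting_vpn];
        if (!pte->valid) {
            /* page_fault(faulting_vpn);  bring the page in, then retry */
            return;
        }
        tlb_write_random(faulting_vpn, pte->frame);
        /* Returning from the exception re-executes the faulting instruction,
         * which now hits in the TLB. */
    }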

Choosing a Page Size
- What if the page is too small?
  - Too many misses.
  - BIG page tables.
- What if the page is too big?
  - Fragmentation: part of the page goes unused, but that DRAM can't be used for other pages.
    - We want to minimize fragmentation (get good utilization of physical memory).
  - Smaller page tables.
- The trend is toward larger pages:
  - The gap between CPU, DRAM, and disk keeps increasing.

Reducing Translation Time
- Machines with TLBs go one step further to reduce the number of cycles per cache access: they overlap the cache access with the TLB access.
- This works because the high-order bits of the VA are used to look up the TLB while the low-order bits are used as the index into the cache.

Overlapped Cache & TLB Access
[Figure: the 32-bit VA splits into a 20-bit page number, sent to the TLB for an associative lookup, and a 12-bit displacement, used in parallel as the cache index; the PA from the TLB is compared against the cache tag.]
- IF cache hit AND (cache tag = PA) THEN deliver data to the CPU
- ELSE IF [cache miss OR (cache tag != PA)] AND TLB hit THEN access memory with the PA from the TLB
- ELSE do the standard VA translation.

Problems With Overlapped TLB Access
- Overlapped access only works as long as the address bits used to index into the cache do not change as a result of VA translation.
- This usually limits things to small caches, large page sizes, or highly set-associative caches if you want a large cache.
- Example: suppose everything is the same except that the cache is increased to 8 KB instead of 4 KB. The cache index now extends one bit into the virtual page number; that bit is changed by VA translation but is needed for the cache lookup.
- Solutions: go to 8 KB pages; go to a 2-way set-associative cache (e.g., 2 ways of 1K entries); or have software guarantee VA[13] = PA[13].
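
The constraint boils down to requiring that every cache index bit falls inside the page offset, i.e. cache size / associativity <= page size. A small check of the slide's example (the function and names are illustrative):

    #include <stdio.h>

    /* Overlapped TLB/cache access is safe only if the cache index bits come
     * entirely from the page offset: (cache_size / associativity) <= page_size. */
    static int overlap_ok(unsigned cache_bytes, unsigned ways, unsigned page_bytes)
    {
        return cache_bytes / ways <= page_bytes;
    }

    int main(void)
    {
        /* The slide's example: 4 KB pages. */
        printf("4 KB direct-mapped:     %s\n",
               overlap_ok(4096, 1, 4096) ? "ok" : "index bit above the offset");
        printf("8 KB direct-mapped:     %s\n",
               overlap_ok(8192, 1, 4096) ? "ok" : "index bit above the offset");
        printf("8 KB 2-way associative: %s\n",
               overlap_ok(8192, 2, 4096) ? "ok" : "index bit above the offset");
        return 0;
    }

The 8 KB 2-way case passes, which is exactly why a 2-way set-associative cache is listed as one of the solutions above.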

More on Selecting a Page Size
- Reasons for a larger page size:
  - Page table size is inversely proportional to the page size.
  - Faster cache hit time when cache size <= page size; a bigger page allows a bigger cache (no aliasing problem).
  - Transferring larger pages to or from secondary storage is more efficient (higher bandwidth).
  - The number of TLB entries is restricted by the clock cycle time, so a larger page size reduces TLB misses.
- Reasons for a smaller page size:
  - Don't waste storage; data must be contiguous within a page.
  - Quicker process start-up for small processes(?).
- Hybrid solution, multiple page sizes: Alpha, UltraSPARC: 8 KB, 64 KB, 512 KB, 4 MB pages.

Protection
- Paged virtual memory provides protection:
  - Each process (user or OS) has a different virtual memory space.
  - The OS maintains the page tables for all processes.
  - A reference outside a process's allocated space causes an exception that lets the OS decide what to do.
  - Sharing between processes is done via different virtual spaces that map to common physical frames.

Putting It Together: The SparcStation 20
- The SparcStation 20 has the following memory system.
- Caches: two level-1 caches, an I-cache and a D-cache:

  Parameter     Instruction cache    Data cache
  Organization  20 KB, 5-way SA      16 KB, 4-way SA
  Page size     4 KB                 4 KB
  Line size     8 bytes              4 bytes
  Replacement   Pseudo-LRU           Pseudo-LRU

- TLB: 64-entry fully associative TLB, random replacement.
- External level-2 cache: 1 MB, direct-mapped, 128-byte blocks, 32-byte sub-blocks.

SparcStation 20: Data Access
[Figure: the low-order bits of the virtual address index the 4-way data cache (1K sets of 4-byte lines) in parallel with the 64-entry fully associative TLB lookup on the virtual page number; the resulting physical address is compared against the four cache tags to select the data sent to the CPU.]

SparcStation 20: Instruction Access
[Figure: analogous to the data path: the instruction address indexes the 5-way instruction cache (1K sets of 4-byte lines) in parallel with the 64-entry TLB lookup; the physical address is compared against the five cache tags to select the instruction delivered to the CPU's instruction register.]