Virtual Memory 2. To do: Handling bigger address spaces. Speeding translation.



Considerations with page tables
Two key issues with page tables:
- Mapping must be fast: it is done on every memory reference, at least once per instruction
- With large address spaces, page tables get large: 64-bit addresses & 4KB pages => 2^52 pages (~4.5x10^15!)
Simplest solutions:
- Page table in registers
- Page table in memory, with a Page Table Base Register
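A back-of-the-envelope check of these numbers (a small standalone sketch, not from the slides; the 8-byte PTE size is borrowed from the page-size example a couple of slides down):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        // 64-bit virtual addresses, 4KB (2^12) pages
        uint64_t num_pages = 1ULL << (64 - 12);            // 2^52 pages
        // Assuming 8-byte page-table entries, as in the later example
        uint64_t pt_bytes  = num_pages * 8;                // 2^55 bytes
        printf("pages: %llu (~4.5e15)\n", (unsigned long long)num_pages);
        printf("flat page table: %llu bytes (32 PiB)\n", (unsigned long long)pt_bytes);
        return 0;
    }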

Page table and page sizes
Bigger pages => smaller page tables
- But more internal fragmentation
Smaller pages => less internal fragmentation
- Less unused program in memory
- But larger page tables, and more I/O time getting pages from disk: seek and rotational delays dominate, so getting a bigger page would take about as much time

Page table and page sizes
- Average process size s bytes, page size p bytes, PTE size e bytes
- Number of pages needed per process ~ s/p => s/p * e bytes of page table space
- Overhead = page table + internal fragmentation = s*e/p + p/2
Finding the optimum:
- Take the first derivative with respect to p and equate it to zero: -s*e/p^2 + 1/2 = 0 => p = sqrt(2*s*e)
- s = 1MB, e = 8 bytes => optimal p = 4KB
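Spelling out the optimization step, with the same symbols as the slide:

    \mathrm{overhead}(p) = \frac{se}{p} + \frac{p}{2}, \qquad
    \frac{d}{dp}\,\mathrm{overhead}(p) = -\frac{se}{p^2} + \frac{1}{2} = 0
    \;\Rightarrow\; p = \sqrt{2se}

    \text{With } s = 2^{20},\ e = 2^{3}:\quad
    p = \sqrt{2 \cdot 2^{20} \cdot 2^{3}} = \sqrt{2^{24}} = 2^{12} = 4\,\mathrm{KB}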

Separate instruction & data spaces
- One address space => size limit. Two address spaces?
- Two address spaces, Instruction (I) and Data (D), give 2x the space
- Each with its own page table & paging algorithm
- Pioneered by the PDP-11
[Figure: a single 2^32 address space holding program, data, and free space vs. a separate 2^32 I-space for the program and 2^32 D-space for the data]

A hybrid approach: pages & segments
- As in MULTICS: instead of a single page table, one per segment
- The base register of each segment points to the base of that segment's page table
- Example: 32-bit address, 4 segments, 4KB pages
  [Address layout: SN (bits 31-30) | VPN (bits 29-12) | Offset (bits 11-0)]
But:
- Segmentation is not as flexible as we may want
- Fragmentation is still an issue
- Page tables can be of any size, so here we go again
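A minimal sketch of the segment-then-page lookup described above, assuming the 2/18/12-bit split in the layout; the segment_t type and translate() helper are invented for illustration, not MULTICS's actual format:

    #include <stdint.h>

    #define SEG_SHIFT    30          // top 2 bits: segment number (4 segments)
    #define VPN_SHIFT    12          // 4KB pages: 12-bit offset
    #define VPN_MASK     0x3FFFF     // 18-bit VPN
    #define OFFSET_MASK  0xFFF

    // Hypothetical per-segment descriptor: base (and length) of its page table
    typedef struct { uint32_t *pt_base; uint32_t pt_len; } segment_t;

    uint32_t translate(uint32_t vaddr, segment_t seg[4]) {
        uint32_t sn     = vaddr >> SEG_SHIFT;
        uint32_t vpn    = (vaddr >> VPN_SHIFT) & VPN_MASK;
        uint32_t offset = vaddr & OFFSET_MASK;
        uint32_t pfn    = seg[sn].pt_base[vpn];   // bounds/validity checks omitted
        return (pfn << VPN_SHIFT) | offset;
    }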

Hierarchical or multilevel page table
Another approach: page the page table!
- Same argument: you don't need the whole PT
Example: virtual address (32-bit machine, 4KB pages): 20-bit page # + 12-bit offset
- Since the PT is paged, divide the page #: 1st-level page number (10 bits) + page offset in the 2nd level (10 bits)
Pros and cons:
- Allocate PT space as needed
- If carefully done, each portion of the PT fits neatly within a page
- More effort for translation, and a bit more complex

Hierarchical page table
[Figure: address translation with a single-level table (Page Table Base Register -> page table -> frame in memory) next to a two-level table (Page Directory Base Register -> outer page table -> page of the page table -> frame in memory), with the virtual address split as VPN1 | VPN2 | Offset]
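A minimal C sketch of the two-level walk the figure shows, assuming the 10/10/12-bit split from the previous slide; the pte_t layout and walk() are illustrative, not any real MMU's format:

    #include <stdint.h>
    #include <stdbool.h>

    #define L1_SHIFT 22                 // top 10 bits index the outer table
    #define L2_SHIFT 12                 // next 10 bits index a page of the page table
    #define L2_MASK  0x3FF
    #define OFF_MASK 0xFFF

    typedef struct { uint32_t pfn; bool valid; } pte_t;

    // Outer table: 1024 pointers to 2nd-level tables (NULL if not allocated)
    bool walk(pte_t *outer[1024], uint32_t vaddr, uint32_t *paddr) {
        pte_t *l2 = outer[vaddr >> L1_SHIFT];
        if (l2 == NULL) return false;                    // no 2nd-level table: fault
        pte_t pte = l2[(vaddr >> L2_SHIFT) & L2_MASK];
        if (!pte.valid) return false;                    // page not mapped: fault
        *paddr = (pte.pfn << L2_SHIFT) | (vaddr & OFF_MASK);
        return true;
    }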

Three-level page table in Linux
- Designed to accommodate the 64-bit Alpha
- To adjust for a 32-bit processor, the middle directory has size 1
[Figure: the virtual address is split into global directory, middle directory, page, and offset fields; the cr3 register points to the page directory, which points to a page middle directory, which points to a page table, which yields the page frame in physical memory]

Inverted page tables
Another way to save space: inverted page tables
- Page tables are indexed by virtual page #, hence their size
- Inverted page tables: one entry per page frame
- But to get the page you are still given a VPN: straightforward with a regular page table, but linear search with an inverted page table, too slow a mapping!
[Figure: a traditional page table with an entry for each of 2^52 pages, indexed by virtual page, vs. an inverted table for 1GB of physical memory, which has only 2^18 4KB page frames]

Inverted and hashed page tables
- Slow inverted page tables: hash tables may help
- Since different virtual page numbers might map to identical hash values, a collision-chain mechanism is needed
[Figure: hash table indexed by a hash of the virtual page number, with (virtual page, page frame) entries chained on collisions; 1GB of physical memory has 2^18 4KB page frames]
- And of course caching, a.k.a. the TLB
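A rough sketch of the hashed lookup, with invented structure names; real implementations differ in how they anchor and chain entries:

    #include <stdint.h>

    #define NFRAMES (1 << 18)            // 1GB / 4KB frames, as in the example

    // Hypothetical inverted table: one entry per physical frame, recording
    // which (process, virtual page) owns it, chained on hash collisions.
    typedef struct ipte {
        uint32_t pid, vpn;
        struct ipte *next;               // collision chain
    } ipte_t;

    static ipte_t  frames[NFRAMES];      // one entry per physical frame
    static ipte_t *buckets[NFRAMES];     // hash anchors, chains built at map time

    static uint32_t hash(uint32_t pid, uint32_t vpn) {
        return ((pid * 2654435761u) ^ vpn) % NFRAMES;    // any decent hash works
    }

    // Returns the frame number holding (pid, vpn), or -1 if not resident.
    long lookup(uint32_t pid, uint32_t vpn) {
        for (ipte_t *e = buckets[hash(pid, vpn)]; e != NULL; e = e->next)
            if (e->pid == pid && e->vpn == vpn)
                return e - frames;       // index of the entry == frame number
        return -1;
    }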

Speeding things up a bit
- A simple page table doubles the cost of memory lookups: first into the page table, a second to fetch the data
- Two-level page tables triple the cost! Two lookups into the page table and then fetch the data
- And two levels is not enough
How can we make this more efficient?

Speeding things up a bit
- Ideally, make fetching from a virtual address almost as efficient as from a physical address
- Observation: locality of reference (lots of references to few pages)
- Solution: a hardware cache inside the CPU, the Translation Lookaside Buffer (TLB)
  - Caches the virtual-to-physical translations in hardware
  - A better name would be "address-translation cache"
  - Traditionally managed by the MMU

TLBs
- Translate virtual page #s into page frame #s
- Can be done in a single machine cycle
- Implemented in hardware
  - A fully associative cache (parallel search)
  - Cache tags are virtual page numbers
  - Cache values are page frame numbers
  - With this + the offset, the MMU can calculate the physical address
A typical TLB entry might look like this: | VPN | PFN | Other bits |
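A software model of such an entry and its fully associative lookup (a sketch; the entry count and field widths are assumptions, not any particular CPU's layout):

    #include <stdint.h>
    #include <stdbool.h>

    #define TLB_ENTRIES 64               // hypothetical, typical-sized TLB

    // The layout on this slide: VPN, PFN, plus "other bits"
    typedef struct {
        uint32_t vpn;
        uint32_t pfn;
        bool     valid;
        uint8_t  protect;                // the "other bits": protection, etc.
    } tlb_entry_t;

    static tlb_entry_t tlb[TLB_ENTRIES];

    // Fully associative: compare the VPN against every entry "in parallel"
    // (a loop here; real hardware does all comparisons at once).
    bool tlb_lookup(uint32_t vpn, tlb_entry_t *out) {
        for (int i = 0; i < TLB_ENTRIES; i++)
            if (tlb[i].valid && tlb[i].vpn == vpn) { *out = tlb[i]; return true; }
        return false;
    }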

TLBs: hit

    VPN = (VirtAddr & VPN_MASK) >> SHIFT
    (Success, TlbEntry) = TLB_Lookup(VPN)
    if (Success == True)                          // TLB hit
        if (CanAccess(TlbEntry.ProtectBits) == True)
            Offset   = VirtAddr & OFFSET_MASK
            PhysAddr = (TlbEntry.PFN << SHIFT) | Offset
            Register = AccessMemory(PhysAddr)
        else
            RaiseException(PROTECTION_FAULT)

[Figure: the CPU presents the logical address (page #, offset); on a TLB hit the page # maps directly to a frame #, giving the physical address used to access physical memory]

TLBs: miss

    else                                          // TLB miss
        PTEAddr = PTBR + (VPN * sizeof(PTE))
        PTE = AccessMemory(PTEAddr)
        if (PTE.Valid == False)
            RaiseException(SEGMENTATION_FAULT)
        else
            TLB_Insert(VPN, PTE.PFN, PTE.ProtectBits)
            RetryInstruction()

[Figure: on a TLB miss the page # is looked up in the page table in memory to find frame f, the TLB is refilled, and the access is retried]

Managing TLBs
- Address translations are mostly handled by the TLB: >99% of translations, but there are TLB misses
- On a miss, the translation is placed into the TLB
Who manages the TLB miss?
- Hardware, the memory management unit (MMU)
  - Knows where the page tables are in memory
  - The OS maintains them, the HW accesses them directly
  - E.g., Intel x86

Managing TLBs
Software TLB management (e.g., MIPS R10K, Sun's SPARC v9)
- Idea: the OS loads the TLB
  - On a TLB miss, the hardware faults to the OS
  - The OS finds the page table entry, removes an entry from the TLB, enters the new one, and restarts the instruction
- Must be fast
  - The CPU ISA has instructions for TLB manipulation
  - The OS gets to pick the page table format
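A rough C-style sketch of what such an OS miss handler does; every helper name here is hypothetical (real handlers are hand-tuned assembly):

    #include <stdint.h>

    // Hypothetical helpers; names are illustrative, not a real kernel API.
    extern uint32_t current_asid(void);
    extern int      pt_lookup(uint32_t asid, uint32_t vpn, uint32_t *pfn, uint8_t *prot);
    extern void     tlb_write_random(uint32_t vpn, uint32_t pfn, uint8_t prot, uint32_t asid);
    extern void     raise_page_fault(uint32_t vpn);

    // Invoked by the CPU on a TLB miss, with the faulting VPN in hand.
    void tlb_miss_handler(uint32_t vpn) {
        uint32_t pfn; uint8_t prot;
        if (pt_lookup(current_asid(), vpn, &pfn, &prot) == 0) {
            // Valid mapping: evict an entry (here, a random one) and refill.
            tlb_write_random(vpn, pfn, prot, current_asid());
            // Returning from the exception restarts the faulting instruction.
        } else {
            raise_page_fault(vpn);       // not resident: hand off to the pager
        }
    }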

Managing TLBs
- The OS ensures the TLB and page tables are consistent
  - When the OS changes the protection bits in an entry, it needs to invalidate the line if it is in the TLB
- When the TLB misses and a new page table entry is loaded, a cached entry must be evicted
  - Choosing a victim: the TLB replacement policy
  - Implemented in hardware, usually simple (e.g., LRU)
- Could you have a TLB miss and still have the referenced page in memory? Yes, a "soft miss"; just update the TLB

Managing TLBs
What happens on a process context switch?
- Need to invalidate all the entries in the TLB! (flush)
- A big part of why process context switches are costly
- Can you think of a hardware fix to this? Add an Address Space Identifier (ASID) field to the TLB
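A sketch contrasting the two behaviors, reusing the TLB-entry model from the earlier sketch with an added ASID tag (names are illustrative):

    #include <stdint.h>
    #include <stdbool.h>

    #define TAGGED_TLB_ENTRIES 64

    typedef struct { uint32_t vpn, pfn; uint8_t asid; bool valid; } tagged_tlb_entry_t;

    static tagged_tlb_entry_t tagged_tlb[TAGGED_TLB_ENTRIES];

    // Option 1: without ASIDs, a context switch must flush the whole TLB.
    void context_switch_flush(void) {
        for (int i = 0; i < TAGGED_TLB_ENTRIES; i++)
            tagged_tlb[i].valid = false;
    }

    // Option 2: with ASIDs, lookups ignore other address spaces' entries,
    // so translations can survive context switches.
    bool tagged_tlb_lookup(uint8_t cur_asid, uint32_t vpn, uint32_t *pfn) {
        for (int i = 0; i < TAGGED_TLB_ENTRIES; i++)
            if (tagged_tlb[i].valid && tagged_tlb[i].asid == cur_asid &&
                tagged_tlb[i].vpn == vpn) {
                *pfn = tagged_tlb[i].pfn;
                return true;
            }
        return false;
    }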

An example TLB
From the MIPS R4000: a software-managed TLB
- 32-bit address space with 4KB pages: 20-bit VPN and 12-bit offset
- But the TLB has only 19 bits for the VPN!
  - User addresses will only come from half the address space (the rest is for the kernel), so 19 bits is enough
- The VPN can map to a 24-bit physical frame #, and thus support systems with up to 64GB of physical memory
[Figure: TLB entry layout with fields VPN (19 bits), G, ASID, PFN (24 bits), C, D, V]

An example TLB
[Figure: the same TLB entry layout: VPN | G | ASID | PFN | C | D | V]
- G is for pages globally shared (so the ASID is ignored)
- C is for coherence; D is for dirty; V is for valid
Since it is software managed, the OS needs instructions to manipulate it:
- TLBP probes
- TLBR reads
- TLBWI and TLBWR replace a specific or a random entry
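One way to picture this entry in software: a C bitfield mirroring the fields above. The 19-bit VPN and 24-bit PFN widths come from these slides; the 8-bit ASID and 3-bit C widths are the usual R4000 values and should be treated as assumptions here:

    #include <stdint.h>

    // Packed view of one MIPS R4000-style TLB entry, fields as on the slide.
    typedef struct {
        uint64_t vpn  : 19;   // virtual page number (user half of the address space)
        uint64_t g    :  1;   // global: ignore the ASID when matching
        uint64_t asid :  8;   // address space identifier (assumed width)
        uint64_t pfn  : 24;   // physical frame number (up to 64GB of RAM)
        uint64_t c    :  3;   // cache coherence attributes (assumed width)
        uint64_t d    :  1;   // dirty
        uint64_t v    :  1;   // valid
    } r4000_tlb_entry_t;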

Effective access time
- Associative lookup = ε time units
- Hit ratio α: fraction of times a page number is found in the associative registers (related to TLB size)
Effective Memory Access Time (EAT):

    EAT = α * (ε + memory-access)            [TLB hit]
        + (1 - α) * (ε + 2 * memory-access)  [TLB miss]

Why 2?
Example: α = 80%, ε = 20 nsec, memory-access = 100 nsec

    EAT = 0.80 * (20 + 100) + 0.20 * (20 + 2 * 100) = 140 nsec
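The 2 in the miss term is there because a miss costs one extra memory access to read the page table entry before the data access itself (with a single-level page table). Working the example through:

    EAT = 0.80\,(20 + 100) + 0.20\,(20 + 2 \cdot 100)
        = 0.80 \cdot 120 + 0.20 \cdot 220
        = 96 + 44 = 140\ \mathrm{ns}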

Next time
- Virtual memory policies
- Some other design issues

And now a short break
[Comic: "Before the Internet", xkcd]