Lecture 8: Memory Management


Lecture 8: Memory Management CSE 120: Principles of Operating Systems UC San Diego: Summer Session I, 2009 Frank Uyeda

Announcements PeerWise questions due tomorrow. Project 2 is due on Friday. Milestone on Tuesday night (tonight). Homework 3 is due next Monday. 2

PeerWise A base register contains: (a) the first available physical memory address for the system; (b) the beginning physical memory address of a process's address space; (c) the base (i.e., base 2 for binary, etc.) that is used for determining page sizes; (d) the minimum size of physical memory that will be assigned to each process. 3

PeerWise If the base register holds 640000 and the limit register is 200000, then what range of addresses can the program legally access? (a) 440000 through 640000; (b) 200000 through 640000; (c) 0 through 200000; (d) 640000 through 840000. 4

Goals for Today Understand Paging Address translation Page tables Understand Segmentation How it combines features of other techniques 5

Recap: Virtual Memory Memory Management Unit (MMU) Hardware unit that translates a virtual address to a physical address Each memory reference is passed through the MMU, translating a virtual address to a physical address Translation Lookaside Buffer (TLB) Essentially a cache for the MMU's virtual-to-physical translation table Not needed for correctness but a source of significant performance gain [Figure: the CPU issues a virtual address; the MMU, using its TLB and translation table, produces the physical address used to access memory.] 6

Recap: Paging Paging solves the external fragmentation problem by using fixed-sized units in both physical and virtual memory [Figure: virtual memory pages 1..N mapped onto physical memory page frames.] 7

Memory Management in Nachos [Figure: a Nachos process's virtual address space (code, data, and stack pages starting at 0x00000000) mapped page by page onto physical memory (0x00000000 through 0x00040000).] 8

Memory Management in Nachos [Figure: the same address-space mapping as the previous slide.] What if the code section isn't a multiple of the page size? What if the virtual address space (0x..) is larger than physical memory? What if it is smaller than physical memory? 9

Memory Management in Nachos How do we know how large a program is? A lot is done for you in userprog/UserProcess.java Your applications will be written in C Your C programs compile into .coff files Coff.java and CoffSection.java are provided to partition the coff file If there is not enough physical memory, exec() returns an error Nachos exec() is different from Unix exec()!!!!! Virtual memory maps exactly to physical memory? Who needs a page table, TLB, or MMU? What are the implications of this in terms of multiprogramming? 10

Project 2: Multiprogramming Need to support multiprogramming Programs coexist in physical memory How do we do this? [Figures: a virtual address is relocated by adding a base register and checked against a limit register; out-of-range references raise a protection fault. Shown for both fixed and variable partitions.] Would fixed partitions work? Would variable partitions work? 11
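Below is a minimal sketch (not Nachos code) of how base-and-limit relocation works; the struct name and fault handling are assumptions for illustration. With the values from the earlier PeerWise question (base 640000, limit 200000), legal virtual addresses 0..199999 map to physical addresses 640000..839999.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical per-process relocation state, loaded on a context switch. */
struct partition {
    uint32_t base;   /* where the partition starts in physical memory */
    uint32_t limit;  /* size of the partition in bytes */
};

/* Check the virtual address against the limit, then add the base;
 * out-of-range references raise a protection fault. */
uint32_t translate(const struct partition *p, uint32_t vaddr)
{
    if (vaddr >= p->limit) {
        fprintf(stderr, "protection fault at %u\n", vaddr);
        exit(1);
    }
    return p->base + vaddr;
}

int main(void)
{
    struct partition p = { .base = 640000, .limit = 200000 };
    printf("%u\n", translate(&p, 1234));   /* prints 641234 */
    return 0;
}
```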

Paging [Figure: the pages of two processes' virtual address spaces are translated through the MMU, TLB, and per-process page tables onto page frames scattered throughout physical memory.] 12

MMU and TLB Memory Management Unit (MMU) Hardware unit that translates a virtual address to a physical address Each memory reference is passed through the MMU, translating a virtual address to a physical address Translation Lookaside Buffer (TLB) Essentially a cache for the MMU's virtual-to-physical translation table Not needed for correctness but a source of significant performance gain [Figure: the CPU issues a virtual address; the MMU, using its TLB and translation table, produces the physical address used to access memory.] 13

Paging: Translations Translating addresses A virtual address has two parts: virtual page number and offset The virtual page number (VPN) is an index into a page table The page table determines the page frame number (PFN) The physical address is PFN::offset Example: virtual address 0xBAADF00D splits into virtual page number 0xBAADF and offset 0x00D; the page table translates VPN 0xBAADF into physical page number (page frame number) 0x900DF 14
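As a sketch of the split described above (assuming 4KB pages and a hypothetical flat page_table array standing in for the translation structure):

```c
#include <stdint.h>

#define PAGE_SHIFT 12        /* 4KB pages: low 12 bits are the offset */
#define PAGE_MASK  0xFFFu

/* Hypothetical flat page table: index = VPN, value = PFN. */
extern uint32_t page_table[];

uint32_t translate(uint32_t vaddr)
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;   /* 0xBAADF00D -> 0xBAADF */
    uint32_t offset = vaddr & PAGE_MASK;     /* 0xBAADF00D -> 0x00D   */
    uint32_t pfn    = page_table[vpn];       /* 0xBAADF    -> 0x900DF */
    return (pfn << PAGE_SHIFT) | offset;     /* PFN::offset = 0x900DF00D */
}
```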

Page Table Entries (PTEs) Fields and bit widths: M (1), R (1), V (1), Prot (2), Page Frame Number (20) Page table entries control the mapping The Modify bit says whether or not the page has been written It is set when a write to the page occurs The Reference bit says whether the page has been accessed It is set when a read or write to the page occurs The Valid bit says whether or not the PTE can be used It is checked each time the virtual address is used The Protection bits say what operations are allowed on the page Read, write, execute The page frame number (PFN) determines the physical page Note: when you do exercises in exams or homework, we'll often tell you to ignore counting the M, R, V, and Prot bits when calculating the size of structures 15
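One plausible way to pack those 25 meaningful bits into a 32-bit word in C; the field order is an assumption, since real hardware fixes its own layout:

```c
#include <stdint.h>

typedef struct {
    uint32_t pfn    : 20;  /* page frame number: which physical page */
    uint32_t prot   : 2;   /* protection: allowed operations (r/w/x) */
    uint32_t valid  : 1;   /* V: set if this PTE may be used */
    uint32_t ref    : 1;   /* R: set on any read or write of the page */
    uint32_t modify : 1;   /* M: set when the page is written */
    uint32_t unused : 7;   /* pad out to 32 bits */
} pte_t;
```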

Paging Example Revisited [Figure: virtual address 0xBAADF00D is split into page number 0xBAADF and offset 0x00D; the page table entry for page 0xBAADF supplies page frame 0x900DF, giving physical address 0x900DF00D.] 16

Paging [Figure: the same two-process paging picture, now with concrete page table contents; each entry holds a valid bit and a page frame number, e.g., (V=1, frame 0x04), (V=1, frame 0x01), (V=1, frame 0x05), (V=1, frame 0x07).] 17

Example Assume we are using paging and: Memory access = 5us TLB search = 500ns What is the avg. memory access time without the TLB? What is the avg. memory access time with a 50% TLB hit rate? What is the avg. memory access time with a 90% TLB hit rate? [Figure: the same CPU/MMU/TLB/memory diagram as before.] 18
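One way to set up the calculation, assuming the TLB is always searched first and a miss costs one extra memory access for the page table lookup:

Without a TLB, every reference needs a page table access plus the data access: 2 x 5us = 10us.
With a TLB, a hit costs 0.5us + 5us = 5.5us and a miss costs 0.5us + 5us + 5us = 10.5us, so the average is h x 5.5us + (1 - h) x 10.5us: 8.0us at a 50% hit rate and 6.0us at a 90% hit rate.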

Paging Advantages Easy to allocate memory Memory comes from a free list of fixed-size chunks Allocating a page is just removing it from the list External fragmentation is not a problem Easy to swap out chunks of a program All chunks are the same size Pages are a convenient multiple of the disk block size How do we know if a page is in memory or not? 19

Paging Limitations Can still have internal fragmentation A process may not use memory in multiples of pages Memory reference overhead 2 references per address lookup (page table, then memory) Solution: use a hardware cache of lookups (TLB) Memory required to hold the page table can be significant Need one PTE per page 32-bit address space w/4KB pages = up to 2^20 (~1M) PTEs 4 bytes/PTE = 4 MB page table 25 processes = 100 MB just for page tables! Solution: page the page tables (more later) 20
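Spelled out, the arithmetic behind those figures is:

2^32 bytes of virtual address space / 2^12 bytes per page = 2^20 pages, so up to 2^20 (~1M) PTEs per process.
2^20 PTEs x 4 bytes/PTE = 4 MB of page table per process; 25 processes x 4 MB = 100 MB of page tables.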

Paging: Linear Address Space [Figure: one linear address space, laid out from the text segment at the starting address (0x00..), through the data segment and heap, up to the stack at the ending address (0xFFF..).] 21

Segmentation: Multiple Address Spaces [Figure: text, data, heap, and stack each placed in their own segment, i.e., their own address space.] 22

Segmentation [Figure: a virtual address is split into a segment number and an offset; the segment number indexes a segment table of (limit, base) pairs, the offset is compared against the limit (failing the check raises a protection fault) and added to the base to form the physical address.] 23

Segmentation Segmentation is a technique that partitions memory into logically related data units Module, procedure, stack, data, file, etc. Virtual addresses become <segment #, offset> Units of memory from the user's perspective A natural extension of variable-sized partitions Variable-sized partitions = 1 segment/process Segmentation = many segments/process Hardware support Multiple base/limit pairs, one per segment (segment table) Segments named by #, used to index into the table 24
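A sketch of the segment-table lookup that <segment #, offset> addresses imply; the two-field entry and the fault handling are illustrative assumptions:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct segment { uint32_t base, limit; };

/* Hypothetical per-process segment table, indexed by segment number. */
extern struct segment seg_table[];

/* Virtual address = <segment #, offset>. */
uint32_t seg_translate(uint32_t segnum, uint32_t offset)
{
    const struct segment *s = &seg_table[segnum];
    if (offset >= s->limit) {               /* past the end of this segment */
        fprintf(stderr, "protection fault: segment %u, offset 0x%x\n", segnum, offset);
        exit(1);
    }
    return s->base + offset;                /* physical address */
}
```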

Segment Table Extensions Can have one segment table per process Segment #s are then process-relative (why do this?) Can easily share memory Put the same translation into the base/limit pair Can share with different protections (same base/limit, different prot) Why is this different from pure paging? Can define protection by segment Problems Cross-segment addresses Segments need to have the same #s for pointers to them to be shared among processes Large segment tables Keep in main memory, use a hardware cache for speed 25

Segmentation Example External Fragmentation in Segmentation Note: image courtesy of Tanenbaum, MOS 3/e 26

Paging vs Segmentation Consideration (Paging / Segmentation): Need the programmer be aware that this technique is being used? No / Yes. How many linear address spaces are there? 1 / Many. Can the total address space exceed the size of physical memory? Yes / Yes. Can procedures and data be distinguished and separately protected? No / Yes. Is sharing of procedures between users facilitated? No / Yes. Note: table adapted from Tanenbaum, MOS 3/e 27

Segmentation and Paging Can combine segmentation and paging The x86 supports segments and paging Use segments to manage logically related units Module, procedure, stack, file, data, etc. Segments vary in size, but are usually large (multiple pages) Use pages to partition segments into fixed-sized chunks Makes segments easier to manage within physical memory Segments become pageable rather than moving segments into and out of memory, just move the page portions of the segment Need to allocate page table entries only for those pieces of the segments that have themselves been allocated Tends to be complex 28
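A rough sketch of how the two steps compose (x86-style: segmentation first produces a linear address, which is then paged); the flat page_table is again a stand-in for illustration:

```c
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_MASK  0xFFFu
#define PROT_FAULT 0xFFFFFFFFu   /* sentinel for the sketch; real hardware raises a fault */

struct segment { uint32_t base, limit; };
extern struct segment seg_table[];
extern uint32_t page_table[];    /* hypothetical flat table: linear VPN -> PFN */

uint32_t translate(uint32_t segnum, uint32_t offset)
{
    const struct segment *s = &seg_table[segnum];
    if (offset >= s->limit)
        return PROT_FAULT;
    uint32_t linear = s->base + offset;                  /* segmentation step */
    uint32_t pfn    = page_table[linear >> PAGE_SHIFT];  /* paging step       */
    return (pfn << PAGE_SHIFT) | (linear & PAGE_MASK);
}
```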

Virtual Memory Summary Virtual memory Processes use virtual addresses OS + hardware translates virtual addresses into physical addresses Various techniques Fixed partitions easy to use, but internal fragmentation Variable partitions more efficient, but external fragmentation Paging use small, fixed-size chunks, efficient for the OS Segmentation manage in chunks from the user's perspective Combine paging and segmentation to get the benefits of both 29

Managing Page Tables We computed the size of the page table for a 32-bit address space with 4K pages to be 4 MB per process (100 MB across 25 processes) This is far too much overhead for each process How can we reduce this overhead? Observation: Only need to map the portion of the address space actually being used (a tiny fraction of the entire address space) How do we only map what is being used? Can dynamically extend the page table Does not work if the address space is sparse (internal fragmentation) Use another level of indirection: multi-level page tables 31

One-Level Page Table [Figure: the page number field of the virtual address indexes a single page table; the page table entry (PTE) supplies the page frame, which combined with the offset forms the physical address.] 32

Two-Level Page Table [Figure: the virtual address is split into master, secondary, and offset fields; the master field indexes the master page table to select a secondary page table, whose entry (PTE) supplies the page frame that is combined with the offset to form the physical address.] 33

Two-Level Page Tables Originally, virtual addresses (VAs) had two parts A page number (which mapped to a frame) and an offset Now VAs have three parts: Master page number, secondary page number, and offset The master page table maps VAs to a secondary page table We'd like a manageable master page size The secondary table maps the page number to the physical page Determines which physical frame the address resides in The offset indicates which byte within the physical page The final system page/frame size is still the same, so the offset length stays the same 34

Two-Level Page Table Example [Figure: a 32-bit virtual address split 10/10/12 into master index, secondary index, and offset; the master page table and each secondary page table hold 1K entries.] Example: 4KB pages, 4B PTEs (32-bit RAM), with the remaining bits split evenly between the master and secondary indexes 35
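A sketch of the 10/10/12 walk from this example; how the secondary tables are reached (plain pointers here) is a simplifying assumption:

```c
#include <stdint.h>

#define OFFSET_BITS 12
#define INDEX_BITS  10
#define OFFSET_MASK 0xFFFu
#define INDEX_MASK  0x3FFu            /* 10 bits -> 1K entries per table */

/* Master table entries point at secondary tables; secondary entries hold PFNs. */
extern uint32_t *master_table[1 << INDEX_BITS];

uint32_t translate(uint32_t vaddr)
{
    uint32_t master    = (vaddr >> (OFFSET_BITS + INDEX_BITS)) & INDEX_MASK;
    uint32_t secondary = (vaddr >> OFFSET_BITS) & INDEX_MASK;
    uint32_t offset    = vaddr & OFFSET_MASK;

    uint32_t *second_level = master_table[master];     /* first extra memory reference      */
    uint32_t  pfn          = second_level[secondary];  /* second reference fetches the PTE  */
    return (pfn << OFFSET_BITS) | offset;               /* caller's data access is the third */
}
```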

Where do Page Tables Live? Physical memory Easy to address, no translation required But allocated page tables consume memory for the lifetime of the VAS Virtual memory (the OS virtual address space) Cold (unused) page table pages can be paged out to disk But addressing page tables requires translation How do we stop the recursion? Do not page the outer page table (an instance of page pinning or wiring) If we're going to page the page tables, might as well page the entire OS address space, too Need to wire down special code and data (fault, interrupt handlers) 36

Efficient Translations Our original page table scheme already doubled the cost of doing memory lookups One lookup into the page table, another to fetch the data Now two-level page tables triple the cost! Two lookups into the page tables, a third to fetch the data And this assumes the page table is in memory How can we use paging but also have lookups cost about the same as fetching from memory? Cache translations in hardware Translation Lookaside Buffer (TLB) TLB managed by the Memory Management Unit (MMU) 37

TLBs Translation Lookaside Buffers Translate virtual page #s into PTEs (not physical addresses) Can be done in a single machine cycle TLBs are implemented in hardware Fully associative cache (all entries looked up in parallel) Cache tags are virtual page numbers Cache values are PTEs (entries from page tables) With the PTE + offset, can directly calculate the physical address TLBs exploit locality Processes only use a handful of pages at a time 16-48 entries/pages (64-192K) Only need those pages to be mapped Hit rates are therefore very important 38
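Hardware compares all entries in parallel; as a software model of the same fully associative lookup (entry format made up for the sketch):

```c
#include <stdbool.h>
#include <stdint.h>

#define TLB_ENTRIES 48

struct tlb_entry {
    bool     valid;
    uint32_t vpn;   /* cache tag: the virtual page number */
    uint32_t pte;   /* cache value: the PTE (PFN plus protection bits) */
};

static struct tlb_entry tlb[TLB_ENTRIES];

/* Returns true and fills in *pte on a hit; false means a TLB miss
 * that must be serviced from the page tables. */
bool tlb_lookup(uint32_t vpn, uint32_t *pte)
{
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            *pte = tlb[i].pte;
            return true;
        }
    }
    return false;
}
```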

Loading TLBs Most address translations are handled using the TLB > 99% of translations, but there are misses (TLB miss) Who places translations into the TLB (loads the TLB)? Software-loaded TLB (OS) TLB faults to the OS, the OS finds the appropriate PTE and loads it into the TLB Must be fast (but still 20-200 cycles) The CPU ISA has instructions for manipulating the TLB Tables can be in any format convenient for the OS (flexible) Hardware (Memory Management Unit) Must know where page tables are in main memory The OS maintains the tables, HW accesses them directly Tables have to be in a HW-defined format (inflexible) 39
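A sketch of the OS side of a software-loaded TLB; tlb_write, page_fault, and the round-robin replacement are placeholders for illustration, not a real ISA:

```c
#include <stdint.h>

#define PTE_VALID   0x1u
#define TLB_ENTRIES 48

extern uint32_t page_table[];                                 /* OS-chosen format (flexible)          */
extern void tlb_write(int slot, uint32_t vpn, uint32_t pte);  /* stand-in for a TLB-load instruction  */
extern void page_fault(uint32_t vpn);                         /* invoked when the page isn't mapped   */

/* Entered on a TLB-miss fault: find the PTE and load it into the TLB. */
void tlb_miss_handler(uint32_t vpn)
{
    uint32_t pte = page_table[vpn];
    if (!(pte & PTE_VALID)) {
        page_fault(vpn);                     /* no valid mapping: take a page fault instead */
        return;
    }
    static int next;                         /* trivial round-robin replacement for the sketch */
    tlb_write(next, vpn, pte);
    next = (next + 1) % TLB_ENTRIES;
}
```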

Managing TLBs The OS ensures that the TLB and page tables are consistent When it changes the protection bits of a PTE, it needs to invalidate the PTE if it is also in the TLB Reload the TLB on a process context switch Invalidate all entries Why? What is one way to fix it? When the TLB misses and a new PTE has to be loaded, a cached PTE must be evicted Choosing a PTE to evict is called the TLB replacement policy Implemented in hardware, often simple (e.g., Last-Not-Used) 40

Summary Segmentation Observation: User applications group related data Idea: Set the virtual-to-physical mapping to mirror the application's organization. Assign privileges per segment. Page table management Observation: Per-process page tables can be very large Idea: Page the page tables, use multi-level page tables Efficient translations Observation: Multi-level page tables and page faults can make memory lookups slow Idea: Cache lookups in hardware (TLB) 41

Next Time Read Chapter 9 PeerWise questions due tomorrow at midnight. Check the Web site for course announcements http://www.cs.ucsd.edu/classes/su09/cse120 42