Virtual Memory Outline


Virtual Memory Outline
- Background
- Demand Paging
- Copy-on-Write
- Page Replacement
- Allocation of Frames
- Thrashing
- Memory-Mapped Files
- Allocating Kernel Memory
- Other Considerations
- Operating-System Examples

Background
- Virtual memory: the separation of user logical memory from physical memory
  - only part of the program needs to be in memory for execution
  - logical address space can therefore be much larger than physical address space
  - allows address spaces to be shared by several processes
  - allows for more efficient process creation
- Benefits of not requiring the entire program in memory before execution
  - programs can be larger than the physical address space
  - more programs can be resident in memory at the same time
  - faster program startup

Virtual-Address Space
- The space between the heap and the stack is not used unless the heap or stack grows
  - virtual memory does not require physical pages for such holes
- Pages can be easily shared through virtual memory
  - shared libraries can be mapped read-only into the address space of each process
  - shared-memory IPC can be implemented easily
  - parent pages can be shared with the child during fork()

Shared Library Using Virtual Memory

Demand Paging
- Bring a page into memory only when it is needed
  - less I/O needed
  - less memory needed
  - faster response
  - more applications simultaneously resident in memory
- When a page is referenced:
  - invalid reference: abort the process
  - not in memory: bring the page into memory
- Lazy swapper: never swaps a page into memory unless the page will be needed
  - a swapper that deals with pages is a pager

Demand Paging System

Valid-Invalid Bit
- Each page table entry has an associated valid-invalid bit
  (v = in memory, i = not in memory)
- Initially the bit is set to i for all page table entries
- Example page-table snapshot: entries for in-memory pages hold a frame number
  with bit v; entries for pages not in memory are marked i
- During address translation, a valid-invalid bit set to i causes a page fault

Page Table When Some Pages Are Not in Main Memory

Steps in Handling a Page Fault

Steps in Handling a Page Fault (2)
1. First reference to data or instructions on a page occurs
2. The memory access traps to the OS with a page fault
   - the operating system checks another table to decide:
     invalid reference: abort; just not in memory: continue
3. Find a free frame in physical memory
4. Read the page from disk into the free frame
5. Update the page table and set the valid-invalid bit to v
6. Restart the instruction that caused the page fault

Performance of Demand Paging
- Page-fault rate p, with 0 <= p <= 1.0
  - if p = 0: no page faults
  - if p = 1: every reference is a fault
- Effective Access Time (EAT):
  EAT = (1 - p) x memory access time
      + p x (page-fault overhead + swap page out + swap page in + restart overhead)

Demand Paging Example
- Memory access time = 200 nanoseconds
- Average page-fault service time = 8 milliseconds
- EAT = (1 - p) x 200 + p x 8,000,000 = 200 + p x 7,999,800 (in nanoseconds)
- If one access out of 1,000 causes a page fault, then EAT = 8.2 microseconds
  - a slowdown by a factor of 40!
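The arithmetic above can be checked with a short sketch (the function name and default values are illustrative, not from the slides):

```python
def effective_access_time(p, mem_ns=200, fault_ns=8_000_000):
    """EAT = (1 - p) * memory access time + p * page-fault service time."""
    return (1 - p) * mem_ns + p * fault_ns

# One fault per 1,000 accesses:
eat = effective_access_time(1 / 1000)
print(eat)        # 8199.8 ns, i.e. ~8.2 microseconds
print(eat / 200)  # ~41x slower than a plain 200 ns memory access
```

Even a tiny fault rate dominates the average, which is why keeping p very low matters so much.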

Process Creation: Copy-on-Write
- Without COW, the fork() system call duplicates the parent's address space for the child
  - demand paging with copy-on-write allows the address spaces to be shared instead
- Copy-on-Write (COW) lets parent and child processes initially share the same pages in memory
  - shared pages are marked as copy-on-write pages
  - if either process modifies a shared page, only then is the page copied
  - only the modified pages are copied
- COW allows more efficient process creation, since only modified pages are copied
- Free pages are allocated from a pool of zeroed-out pages

Process 1 Modifies Page C: Before and After

Need for Page Replacement
- Review of demand paging
  - separates logical memory from physical memory
  - allows logical address space > physical address space
  - enables a greater degree of multiprogramming
  - higher CPU utilization and throughput
  - allows fast process startup
- Drawbacks of demand paging
  - may increase individual memory access times later on
  - potential for over-allocation of physical memory
- Over-allocation of memory
  - currently active processes may reference more pages than there is room for in physical memory
  - pages from active processes may need to be replaced

Page Replacement
- Page replacement is needed when a page fault occurs and there are no free frames to allocate
  - terminate the user process?
  - swap out an entire process?
  - find some page in memory, hopefully not really in use, and swap it out?
- The same page may be brought into memory several times

Basic Page Replacement
1. Find the location of the desired page on disk
2. Find a free frame:
   - if there is a free frame, use it
   - if there is no free frame, use a page replacement algorithm to select a victim frame
3. Bring the desired page into the (newly) free frame; update the page and frame tables
4. Restart the process
Reducing page-fault overhead:
- keep a modify (dirty) bit with each frame
- write the victim page back to disk only if it has been modified

Basic Page Replacement (2)

Page Replacement Algorithms
- Criterion: the lowest page-fault rate
- Page replacement schemes
  - first-in first-out (FIFO)
  - optimal page replacement
  - least recently used (LRU)
  - LRU approximation algorithms
  - counting-based replacement
- Evaluating a replacement algorithm
  - simulate it on a particular string of memory page references
  - compute the number of page faults on the reference string
- In all our examples, the reference string is:
  7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1

Expected Graph of Page Faults Versus the Number of Frames

First-In-First-Out (FIFO) Algorithm
- The simplest page replacement algorithm
- Each page is associated with the time it was brought into memory
- On replacement, replace the oldest page
- Implemented using a FIFO queue of all pages
  - the replacement page is taken from the head of the queue
  - a new page is added to the tail of the queue

Belady's Anomaly
- For some page replacement algorithms, the page-fault rate may increase
  as the number of frames increases!
  - can happen with the FIFO replacement scheme
- Example reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
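A small FIFO simulator (an illustrative sketch, not from the slides) reproduces the anomaly on exactly this reference string:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement with a fixed frame count."""
    queue, resident, faults = deque(), set(), 0
    for page in refs:
        if page not in resident:
            faults += 1
            if len(resident) == frames:          # no free frame: evict oldest
                resident.discard(queue.popleft())
            queue.append(page)
            resident.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults
print(fifo_faults(refs, 4))   # 10 faults -- more frames, MORE faults
```

Going from 3 to 4 frames increases the fault count from 9 to 10, which is precisely Belady's anomaly.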

Optimal Page Replacement
- The algorithm with the lowest page-fault rate
- Replace the page that will not be used for the longest period of time
- Provably optimal; does not suffer from Belady's anomaly
- Needs future information!

Least Recently Used (LRU) Algorithm
- Approximates the optimal page replacement algorithm
  - detect and record when each page is used
  - replace the page that has not been used for the longest period
- A very good replacement policy
- Requires hardware support to be accurate and fast
- Does not suffer from Belady's anomaly
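These two policies can be compared on the chapter's reference string with a short simulation (an illustrative sketch; function names are mine). On this string with 3 frames, FIFO incurs 15 faults, so LRU sits between FIFO and the optimal algorithm:

```python
def lru_faults(refs, frames):
    """LRU replacement: keep pages ordered least- to most-recently used."""
    resident, faults = [], 0
    for page in refs:
        if page in resident:
            resident.remove(page)        # refresh recency on a hit
        else:
            faults += 1
            if len(resident) == frames:
                resident.pop(0)          # evict the least recently used page
        resident.append(page)
    return faults

def opt_faults(refs, frames):
    """Optimal replacement: evict the page whose next use is farthest away."""
    resident, faults = set(), 0
    for i, page in enumerate(refs):
        if page not in resident:
            faults += 1
            if len(resident) == frames:
                future = refs[i + 1:]
                def next_use(q):
                    return future.index(q) if q in future else len(future)
                resident.discard(max(resident, key=next_use))
        resident.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(refs, 3))   # 9 faults (the provable minimum)
print(lru_faults(refs, 3))   # 12 faults
```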

LRU Implementation
- Counter implementation
  - every page table entry has a time-of-use field
  - copy the clock into the time-of-use field on every page access
  - for replacement, find the page with the smallest time-of-use field
- Drawbacks
  - replacement requires a search of the full page table
  - each memory access needs an additional write to the time-of-use field
  - counter overflow?

LRU Implementation (2)
- Stack implementation
  - keep a stack of page numbers in a doubly linked list
  - move a page to the top of the stack when it is referenced
  - the replacement page is found at the bottom of the stack
- Each update is more expensive due to the stack operations,
  but no search is needed for replacement
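The stack implementation can be sketched with an OrderedDict standing in for the linked list (class and method names are mine, not from the slides):

```python
from collections import OrderedDict

class LRUFrames:
    """Stack-style LRU: pages kept ordered from least to most recently
    used, so the victim is found at the 'bottom' without any search."""
    def __init__(self, frames):
        self.frames = frames
        self.pages = OrderedDict()

    def access(self, page):
        """Return True on a page fault, False on a hit."""
        if page in self.pages:
            self.pages.move_to_end(page)      # move to top of the stack
            return False
        if len(self.pages) == self.frames:
            self.pages.popitem(last=False)    # victim is at the bottom
        self.pages[page] = True
        return True

mem = LRUFrames(3)
faults = sum(mem.access(p) for p in [7, 0, 1, 0, 2, 7])
print(faults)   # 5 faults: only the second reference to 0 hits
```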

LRU Approximation Algorithms
- Reference-bit algorithm
  - associate a reference bit with each page
  - set the bit when the page is referenced; clear all bits occasionally
  - replace a page whose reference bit is 0 (if one exists)
  - cannot store the order of accesses in one bit!
- Record-reference-bit (aging) algorithm
  - each page is associated with an 8-bit field recording its reference history
  - at regular intervals, shift the 8-bit field right and move the
    reference bit into the highest-order bit
  - the page with the lowest value in its 8-bit field is the LRU page
- Example (reference bit, 8-bit field):
  (1, 10101100), (0, 01101110), (0, 11001101), (0, 01100110)
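One aging step on the slide's example values can be sketched as follows (page names A-D are hypothetical labels for the four (bit, field) pairs):

```python
def age(history, ref_bit):
    """One aging step: shift the 8-bit history right and move the
    current reference bit into the high-order position."""
    return (history >> 1) | (ref_bit << 7)

# Hypothetical pages: (current reference bit, 8-bit history field)
pages = {'A': (1, 0b10101100), 'B': (0, 0b01101110),
         'C': (0, 0b11001101), 'D': (0, 0b01100110)}

aged = {name: age(hist, ref) for name, (ref, hist) in pages.items()}
victim = min(aged, key=aged.get)   # lowest value = approximately LRU
print(victim)                      # D
```

Page A's recent reference pushes it to the highest value, while D, unreferenced for the longest stretch, decays to the lowest and becomes the victim.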

LRU Approximation Algorithms (2)
- Second-chance algorithm
  - a single reference bit plus the basic FIFO replacement algorithm
  - pages are kept in a circular queue; a pointer indicates the next replacement candidate
  - to find a replacement page:
    - if the reference bit is 0, replace the page
    - if the reference bit is 1, give the page a second chance:
      clear its reference bit and move on to the next page
  - what if all bits are set? the algorithm degenerates to FIFO
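One victim selection under second chance can be sketched like this (a toy model: the circular queue is a list of hypothetical (page, reference bit) pairs):

```python
def second_chance_victim(frames):
    """Sweep the circular queue like a clock hand: clear reference bits
    of pages given a second chance, return the first page with bit 0.
    Mutates the reference bits in `frames` as it sweeps."""
    i = 0
    while True:
        page, ref = frames[i]
        if ref == 0:
            return page               # no second chance needed: victim
        frames[i] = (page, 0)         # clear bit, grant a second chance
        i = (i + 1) % len(frames)     # advance the clock hand

frames = [('A', 1), ('B', 1), ('C', 0), ('D', 1)]
print(second_chance_victim(frames))   # C: first page found with bit 0
```

If every bit is set, the sweep clears them all and returns to the first page, which is exactly the FIFO degeneration noted above.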

Counting-Based Algorithms
- Keep a counter of the number of references made to each page
- Least frequently used (LFU): replace the page with the smallest count
  - rationale: the most active page should have the largest count
- Most frequently used (MFU): replace the page with the largest count
  - rationale: the page with the smallest count was probably just brought in
    and has yet to be used
- Neither algorithm is very popular

Page Buffering Algorithms
- Optimizations that maintain a pool of victim frames
- Scheme 1
  - bring the new page into one of the victim frame slots first,
    then write the victim page out to disk
  - the memory access to the new page does not wait for the write-out
- Scheme 2
  - write modified pages out to disk whenever the paging device is idle
  - replacement is then faster, since victims are more often clean
- Scheme 3
  - remember which page is in each victim frame
  - if the new page is still sitting in a victim frame, reuse it directly
  - works well when the replacement algorithm evicts a page that is
    needed again soon afterwards

Allocation of Frames
- How many frames of physical memory should the OS allocate to each process?
- Minimum number of frames
  - is there a minimum number of frames a process needs?
  - depends on the maximum number of pages any single instruction in the ISA
    can reference at a time
  - direct addressing: a load/store needs 2 frames
    (one for the instruction, one for the memory reference)
  - one-level indirect addressing needs 3 frames
    (one for the instruction, one for the address, one for the memory reference)
  - the PDP-11 needs 6 frames; the IBM 370 needs 8 frames
  - what can happen if we do not provide this minimum?
    the instruction faults partway through and can never complete
- Should there be a maximum number?

Frame Allocation Algorithms
- Equal allocation
  - divide m frames equally among n processes; each process gets about m/n frames
  - may give some process more frames than it needs!
- Proportional allocation
  - each process may need a different amount of memory
  - allocate memory in proportion to process size
  - with si = size of process pi, S = sum of all si, and m = total number of frames:
    ai = allocation for pi = (si / S) x m
  - Example: m = 64, s1 = 10, s2 = 127, S = 137
    a1 = (10 / 137) x 64 ≈ 5 frames
    a2 = (127 / 137) x 64 ≈ 59 frames
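The proportional-allocation formula above can be checked directly (a minimal sketch; rounding to the nearest frame is my assumption, matching the slide's numbers):

```python
def proportional_allocation(sizes, m):
    """ai = (si / S) * m, rounded to whole frames."""
    S = sum(sizes)
    return [round(s / S * m) for s in sizes]

print(proportional_allocation([10, 127], 64))   # [5, 59]
```

The small process gets 5 frames and the large one 59, together using all 64 frames.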

Other Frame Allocation Issues
- Global vs. local allocation
  - global replacement: a process selects a replacement frame from the set of all frames
    - one process can take a frame from another
    - a process cannot control its own page-fault rate
  - local replacement: each process selects only from its own set of allocated frames
    - may hinder progress by not letting a high-priority process use the
      frames of a low-priority process
- Non-uniform memory access (NUMA)
  - memory may be distributed on multiprocessor systems
  - how should frames be assigned in such situations?
  - assign memory that is close to where the process is running

Thrashing
- A process is thrashing if it spends more time paging than executing
  - the process keeps swapping active pages in and out
  - leads to a very high page-fault rate
- Cause of thrashing: the process does not have enough frames
- The vicious cycle of thrashing
  - not having enough frames causes more page faults
  - page faults lower CPU utilization
  - the OS thinks it needs to increase the degree of multiprogramming
  - another process is added to the system
  - even fewer frames per process, leading to more page faults
  - the cycle continues

Thrashing (2)
- Thrashing becomes more likely as the degree of multiprogramming increases
- Can we prevent thrashing by using a local replacement algorithm?

Preventing Thrashing Behavior
- Give a process as many frames as it needs
  - but how do we know how many frames a process needs?
- Working-set model
  - based on the locality model of process execution
  - each program phase uses a small set of pages; execution moves from one phase to another
  - the set of frames required by a phase is called its working set
- Implementation
  - choose a working-set window Δ (a fixed number of recent references)
  - WSSi (working-set size of process Pi) = number of distinct pages
    referenced in the most recent Δ references
  - D = Σ WSSi = total demand for frames
  - if D > m (the available frames), thrashing occurs: suspend one of the processes
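The working-set computation can be sketched in a few lines (the reference string and the window Δ = 4 are hypothetical values for illustration):

```python
def working_set(refs, t, delta):
    """Distinct pages referenced in the most recent `delta` references
    ending at time t (0-indexed into the reference string)."""
    window = refs[max(0, t - delta + 1): t + 1]
    return set(window)

refs = [1, 2, 1, 5, 2, 1, 6, 6, 6, 1]
print(working_set(refs, 5, 4))   # {1, 2, 5} -> WSS = 3 at t = 5
print(working_set(refs, 9, 4))   # {1, 6}    -> WSS = 2 at t = 9
```

Summing the WSS values of all processes gives D; whenever D exceeds the number of physical frames m, some process must be suspended.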

Working Sets and Page Fault Rates
- Page faults spike at the start of every new working set (program phase)
- The page-fault rate then falls until the next program phase begins
- The behavior repeats for each phase

Preventing Thrashing Behavior (2)
- Page-fault frequency scheme
  - the working-set model needs several assumptions and is complicated
  - instead, measure the page-fault rate routinely and establish an acceptable range
  - if the actual rate is too low, the process loses a frame
    (freed frames can admit more processes)
  - if the actual rate is too high, the process gains a frame;
    if no frames are available, suspend a process

Memory-Mapped Files
- Memory-mapped file I/O allows file I/O to be treated as routine memory access
  by mapping a disk block to a page in memory
  - removes the need for read() and write() system calls
  - converts disk access into memory access
  - simplifies disk access for the user
- Mechanism
  - a file is initially read using demand paging
  - a page-sized portion of the file is read from the file system into a physical frame
  - subsequent reads and writes of the file are treated as ordinary memory accesses

Memory-Mapped Files (2)
- Several processes can map the same file, allowing the pages in memory to be shared

Memory-Mapped Files (3)
- Examples of OSes using memory-mapped files
  - Solaris, UNIX, and Linux provide the mmap() system call to create a memory-mapped file
  - in Solaris, files accessed with open(), read(), and write() are also
    memory-mapped, into the kernel address space
  - Windows NT and XP use memory-mapped files to implement shared memory

Memory-Mapped I/O
- Special I/O instructions may be available to transfer data and control
  messages to an I/O controller
- Memory-mapped I/O
  - I/O device registers are mapped into the logical memory address space
  - convenient and fast
- A control bit may indicate whether data is available in the data register
  - if the CPU polls the control bit, the method of operation is called programmed I/O
  - if the device interrupts the CPU to indicate data availability, the mode
    of operation is interrupt-driven I/O

Allocating Kernel Memory
- Kernel memory is often allocated from a free memory pool, not via demand paging
- Reasons for treating kernel memory allocation differently
  - some kernel memory needs to be physically contiguous
  - the kernel requests memory for structures of varying size, so the allocator
    tries to minimize waste due to internal fragmentation
- Strategies for managing free kernel memory
  - buddy system
  - slab allocator

Buddy System
- A power-of-2 allocator: satisfies requests in units sized as powers of 2
  - a request is rounded up to the next power of 2
- When a smaller allocation is needed than is available, the current chunk is
  split into two buddies of the next-lower power of 2
  - splitting continues until an appropriately sized chunk is available
- Example: 21 KB requested from a 256 KB chunk
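The 21 KB example works out like this (a minimal sketch; the 4 KB minimum chunk size is an assumption, not from the slides):

```python
def buddy_allocate(request_kb, pool_kb=256, min_kb=4):
    """Halve the pool repeatedly until the next split would be smaller
    than the request; return the split sizes and the final chunk size."""
    size = pool_kb
    splits = []
    while size // 2 >= max(request_kb, min_kb):
        size //= 2
        splits.append(size)
    return splits, size

splits, chunk = buddy_allocate(21)
print(splits)   # [128, 64, 32]: 256 -> two 128s -> two 64s -> two 32s
print(chunk)    # a 32 KB chunk satisfies the 21 KB request
```

The unused buddies (one 128 KB, one 64 KB, one 32 KB chunk) stay free and can later be coalesced back when the allocation is released.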

Slab Allocator
- A slab is one or more physically contiguous pages; a cache consists of one or more slabs
- There is a single cache for each unique kernel data structure
  - each cache is filled with instantiations of the data structure (objects)
- Slab allocation algorithm
  - create a cache of objects in contiguous space and mark them free
  - store objects in free slots and mark them used
  - if the current slab is full of used objects, allocate the next object from an empty slab
  - if no empty slab exists, allocate a new slab
- Benefits: no fragmentation; memory requests are satisfied quickly

Slab Allocation (2)

Other Virtual Memory Issues
- Prepaging
  - reduces page faults at process startup
  - prepage all or some of a process's pages before they are referenced
  - I/O and memory are wasted if prepaged pages go unused
- Issues in choosing a page size
  - internal fragmentation: a small page size reduces memory lost on the final page
  - page table size: a large page size reduces the size of the page table
  - I/O overhead: a large page size amortizes disk latency and seek time
  - page faults: a large page size reduces the number of page faults generated

Other Virtual Memory Issues (2)
- TLB reach: the amount of memory accessible from the TLB
  - TLB reach = (TLB size) x (page size)
  - ideally, the working set of each process fits in the TLB
  - improving TLB reach: increase the page size, or provide multiple page sizes
- I/O interlock
  - pages must sometimes be locked into memory
  - example: pages used for copying a file from a device must not be selected
    for eviction by the page replacement algorithm
  - provide a lock bit with every frame
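The TLB-reach formula is simple multiplication; the 64-entry TLB below is a hypothetical configuration chosen to show the effect of page size:

```python
def tlb_reach(entries, page_size):
    """TLB reach = (number of TLB entries) x (page size in bytes)."""
    return entries * page_size

# Hypothetical TLB: 64 entries, 4 KB pages
print(tlb_reach(64, 4096))          # 262144 bytes = 256 KB
# The same TLB with 2 MB large pages covers far more memory:
print(tlb_reach(64, 2 * 1024**2))   # 134217728 bytes = 128 MB
```

This is why providing multiple page sizes lets a fixed-size TLB cover a much larger working set.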

Other Virtual Memory Issues (3)
- Program structure: awareness of the paging structure can help performance
- Example: int data[128][128]; each row is stored in one page
- Program 1: 128 x 128 = 16,384 page faults
  for (j = 0; j < 128; j++)
      for (i = 0; i < 128; i++)
          data[i][j] = 0;
- Program 2: 128 page faults
  for (i = 0; i < 128; i++)
      for (j = 0; j < 128; j++)
          data[i][j] = 0;
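The two fault counts can be reproduced with a toy simulation (my model, matching the slide's assumptions: each row is one page, and the process holds a single frame):

```python
def traversal_faults(n=128, frames=1, column_major=False):
    """Count page faults for a row- or column-major sweep of an n x n
    matrix where page = row index, with FIFO replacement."""
    if column_major:
        order = ((i, j) for j in range(n) for i in range(n))
    else:
        order = ((i, j) for i in range(n) for j in range(n))
    resident, queue, faults = set(), [], 0
    for i, _ in order:
        if i not in resident:              # touching a new row faults
            faults += 1
            if len(resident) == frames:
                resident.discard(queue.pop(0))
            queue.append(i)
            resident.add(i)
    return faults

print(traversal_faults(column_major=True))    # 16384: every access is a fault
print(traversal_faults(column_major=False))   # 128: one fault per row/page
```

With only one frame, the column-major sweep changes rows (and therefore pages) on every single access, while the row-major sweep faults just once per row.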