CS420: Operating Systems

Virtual Memory. James Moscola, Department of Physical Sciences, York College of Pennsylvania. Based on Operating System Concepts, 9th Edition by Silberschatz, Galvin, Gagne

Background Code needs to be in memory to execute, but entire program rarely used - Error code, unusual routines, large data structures Want the ability to execute partially-loaded program - Programs no longer constrained by limits of physical memory - Program data no longer constrained by limits of physical memory 2

Background - Virtual Memory Virtual memory: the separation of user logical memory from physical memory - Only part of the program needs to be in memory for execution - Logical address space can therefore be much larger than physical address space - Allows physical address spaces to be shared by several processes - Allows for more efficient process creation - More programs running concurrently - Less I/O needed to load or swap processes In contrast to dynamic loading, virtual memory does not require the programmer to do anything extra 3

Virtual Memory that is Larger than Physical Memory 4

Virtual Address Space Enables sparse address spaces with holes left for growth, dynamically linked libraries, etc. - e.g. don't waste physical memory with empty space that is intended for the growth of the stack/heap System libraries can be shared by mapping them into virtual address space Can create shared memory by mapping pages into virtual address space Virtual memory allows pages to be shared during fork(), speeding up process creation 5

Demand Paging Could bring entire process into memory at load time Or load a page into memory only when it is needed, called demand paging - Pages that are never used are never loaded Less I/O needed, no unnecessary I/O Less memory needed Faster response - Similar to a paging system with swapping, but demand paging doesn't swap the entire process into memory, instead uses a lazy swapper Lazy swapper never swaps a page into memory unless it will be needed - A swapper that deals with pages is called a pager 6

Page Table Example Not all pages are always resident in physical memory 7

Valid-Invalid Bit Each page table entry includes a valid-invalid bit - If valid ('v'), then the process is allowed to access that page, AND it is in physical memory - If invalid ('i'), the process may not be allowed to access the requested page, or the page may not yet be in physical memory Initially the valid-invalid bit is set to 'i' on all entries (Figure: page table showing frame numbers and valid-invalid bits, some entries 'v' and others 'i') 8

Page Fault If the requested page is memory resident, then the process can operate normally If the requested page is not in physical memory (i.e. the page is marked invalid in the page table), then a page fault occurs When a page fault occurs, first check whether the reference was illegal or the page just isn't in physical memory - If an illegal reference, the process terminates (seg fault, bus error,...) - If simply not in memory: Get a free frame of physical memory from the free-list Swap the page from disk into the frame via a scheduled disk operation Update tables to indicate that the page is now in physical memory (i.e. set the valid-invalid bit to 'v') Restart the instruction that caused the page fault 9
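The same steps written out as a minimal sketch. All the names here (address_is_legal, read_page, restart_faulting_instruction, and so on) are hypothetical placeholders, not a real kernel API:

```python
# Hypothetical, simplified page-fault handler following the steps above.
def handle_page_fault(process, page_number, page_table, free_frames, backing_store):
    if not process.address_is_legal(page_number):
        process.terminate("segmentation fault")        # illegal reference
        return
    frame = free_frames.pop()                          # 1. grab a free frame
    backing_store.read_page(page_number, into=frame)   # 2. scheduled disk read
    page_table[page_number] = (frame, "v")             # 3. mark the entry valid
    process.restart_faulting_instruction()             # 4. re-execute the instruction
```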

Steps in Handling a Page Fault 10

Aspects of Demand Paging Extreme case of demand paging: start a process with no pages in memory - Never bring a page into physical memory until it is needed - A page fault will occur each time a new page is requested, including on the very first page - This scheme is called pure demand paging Demand paging (and pure demand paging) requires hardware support - Page table with valid / invalid bit - Secondary memory (swap device with swap space) - Ability to restart an instruction Performance of the system can be severely impacted if too much page swapping is required (see example in OSC9 section 9.2.2, 40x slowdown!) 11
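To see where a slowdown like that comes from, here is a back-of-the-envelope effective-access-time calculation in the style of the OSC example. The 200 ns memory access time and 8 ms page-fault service time are illustrative assumptions:

```python
# Effective access time (EAT) under demand paging, in nanoseconds.
MEM_ACCESS_NS = 200            # assumed memory access time
FAULT_SERVICE_NS = 8_000_000   # assumed page-fault service time (8 ms)

def eat(p_fault: float) -> float:
    """EAT = (1 - p) * memory access time + p * page-fault service time."""
    return (1 - p_fault) * MEM_ACCESS_NS + p_fault * FAULT_SERVICE_NS

for p in (0.0, 1 / 1000):
    print(f"fault rate {p}: EAT = {eat(p):,.1f} ns "
          f"({eat(p) / MEM_ACCESS_NS:.0f}x a plain memory access)")
```

Even a fault on only one access in a thousand pushes the effective access time from 200 ns to about 8.2 microseconds, roughly the 40x slowdown mentioned above.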

Page Sharing - Copy-on-Write During process creation Copy-On-Write (COW) allows both parent and child processes to initially share the same pages in memory - If either process modifies a shared page, only then is the page copied COW allows more efficient process creation as only modified pages are copied Free pages are typically allocated from a pool of zero-fill-on-demand pages A variation of fork() exists called vfork() (virtual memory fork) - Parent suspends so the child can use its memory address space - If the child modifies the parent's address space, those changes will be visible to the parent (i.e. it does NOT use copy-on-write) - Designed to be used when the child calls exec() immediately after fork() - Very efficient since no pages are copied during creation of the child 12
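A small POSIX-only sketch of the semantics that copy-on-write preserves: after fork() both processes start from the same pages, but a write in the child triggers a private copy, so the parent never sees the change:

```python
import os

data = bytearray(b"original")    # after fork() this page is shared copy-on-write

pid = os.fork()                  # POSIX only (Linux/macOS)
if pid == 0:                     # child process
    data[:] = b"modified"        # first write causes the touched page to be copied
    print("child sees: ", bytes(data))
    os._exit(0)
else:                            # parent process
    os.waitpid(pid, 0)
    print("parent sees:", bytes(data))   # still b'original'
```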

Example of Copy-on-Write Memory before either process tries to write to shared pages Memory after process1 writes to page C 13

What Happens if There is no Free Frame? All frames used up by process pages and I/O buffers - How much memory should be allocated to I/O and how much to processes? Page replacement: find some page in physical memory that is not really in use and page it out to swap space - Once a page is in physical memory, that doesn't mean it will always be in physical memory - The same page may be brought into physical memory several times - For performance reasons, want a page replacement algorithm which results in the minimum number of page faults 14

Page Replacement Must prevent over-allocation of physical memory by modifying the page-fault service routine to include page replacement - If no free frame exists, must select a victim frame and write the page that it contains out to swap space - After writing the page to swap space, must update the page tables accordingly - The required page can then be read into the newly freed frame - Continue the process by restarting the instruction that caused the page fault Use a modify (dirty) bit to reduce the overhead of page transfers - Only modified pages are written back to disk from physical memory - Without the modify bit, unmodified pages may be unnecessarily written back to swap space 15

Need For Page Replacement Must free a frame in physical memory to load page M 16

Page and Frame Replacement Algorithms Frame-allocation algorithm determines - How many frames to give each process Page-replacement algorithm determines - Which frames to replace - Want the lowest page-fault rate on both first access and re-access Evaluate algorithms by running them on a particular sequence of memory references (reference string) and computing the number of page faults on that sequence - A reference string is just page numbers, not full addresses Example reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 - An immediately repeated access to the same page does not cause a page fault - The more physical memory a system has, the more frames it has... resulting in fewer page faults 17

Page Replacement Algorithms Many different page replacement algorithms exist - FIFO Page Replacement - Optimal Page Replacement (theoretical best case) - Least-Recently Used (LRU) Page Replacement - Counting-Based Page Replacement 18

FIFO Page Replacement A very simple page replacement algorithm - When a page is brought into physical memory, insert a reference to that page into a FIFO - When a page must be replaced, replace the oldest page first (i.e. the page at the head of the FIFO) Pros: - Very easy to understand and program Cons: - Performance may not be very good - May result in a high number of page faults 19

FIFO Page Replacement Example With 3 frames and the example reference string, FIFO causes a total of 15 page faults (Figure: page-frame contents and FIFO head over time; no page fault occurs on the later accesses to pages that are already resident, e.g. page 0, pages 3 and 2, and pages 0 and 1) 20
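A short simulation of the FIFO policy (a sketch using Python's deque as the queue); run with 3 frames on the reference string above, it reproduces the 15 faults:

```python
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement."""
    frames, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page in frames:
            continue                       # already resident: no fault
        faults += 1
        if len(frames) == num_frames:      # no free frame: evict the oldest page
            frames.remove(queue.popleft())
        frames.add(page)
        queue.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))   # 15
```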

Optimal Page Replacement An optimal page replacement algorithm can be described very simply: - Replace the page that will not be used for the longest period of time Guarantees the lowest possible page-fault rate for a fixed number of frames Sadly, it is not possible to know which page won't be used for the longest period of time (can't see the future) The optimal algorithm is still very useful - Used for measuring how well other algorithms perform - How close are other algorithms to optimal? 21

Optimal Page Replacement Example On the same reference string with 3 frames, the optimal algorithm causes a total of 9 page faults (this is as good as it gets) (Figure: page-frame contents over time; at each replacement the victim is the page not needed again for the longest time: first page 7, then page 1, then page 4 (never used again), then page 0, then pages 3 and 2 (never used again)) 22
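The same counting experiment for the optimal policy (a sketch; it scans the rest of the reference string to find each page's next use, which is exactly the future knowledge a real system does not have):

```python
def optimal_faults(reference_string, num_frames):
    """Count page faults when evicting the page whose next use is farthest away."""
    frames, faults = [], 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        future = reference_string[i + 1:]
        def next_use(p):
            return future.index(p) if p in future else float("inf")   # never used again
        frames.remove(max(frames, key=next_use))                      # evict farthest-use page
        frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(optimal_faults(refs, 3))   # 9
```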

Least Recently Used (LRU) Algorithm Interestingly, LRU (like the optimal algorithm) produces the same number of page faults whether a reference string is read forwards or backwards - Because of this property, it is possible to use past knowledge of page use rather than future knowledge Least Recently Used (LRU) Replacement Algorithm: - Replace the page that has not been used for the longest period of time (i.e. replace the least recently used page) Associate a time of last use with each page Generally a good algorithm and used frequently 23

Least Recently Used (LRU) Algorithm Example On the same reference string with 3 frames, LRU causes a total of 12 page faults; better than FIFO (15), but not optimal (9) (Figure: page-frame contents over time; at each replacement the victim is the least recently used page, e.g. page 7 first, then page 1, and so on) 24
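An LRU fault-count simulation using the time-of-last-use idea (a sketch; with 3 frames on the reference string above it reproduces the 12 faults):

```python
def lru_faults(reference_string, num_frames):
    """Count page faults under LRU: evict the page unused for the longest time."""
    last_used, frames, faults = {}, set(), 0
    for time, page in enumerate(reference_string):
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                victim = min(frames, key=lambda p: last_used[p])   # least recently used
                frames.remove(victim)
            frames.add(page)
        last_used[page] = time        # the "counter" (time-of-last-use) bookkeeping
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))   # 12
```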

Least Recently Used (LRU) Algorithm How should a least-recently-used algorithm be implemented? - Option #1 - Counter implementation Every page entry has a counter; every time the page is referenced through this entry, copy the clock into the counter When a page needs to be replaced, look at the counters to find the smallest value - Requires a potentially lengthy search through the page table (SLOW) - Option #2 - Stack implementation Keep a stack of page numbers in a doubly linked list If a page is referenced, move it to the top of the stack (not a pure stack implementation) - Most recently used page is always at the top of the stack; the least recently used page is always at the bottom of the stack Finding and moving page numbers in the stack takes time (updates to the stack must be done on every memory reference) (SLOW) 25
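A compact sketch of the stack idea, using Python's OrderedDict in place of the doubly linked list (illustrative only; a real implementation has to update this structure on every memory reference):

```python
from collections import OrderedDict

class LRUStack:
    """Stack implementation of LRU: most recent on top, least recent on the bottom."""
    def __init__(self):
        self.order = OrderedDict()           # insertion order doubles as recency order

    def reference(self, page):
        self.order[page] = True
        self.order.move_to_end(page)         # referenced page moves to the top

    def victim(self):
        page, _ = self.order.popitem(last=False)   # bottom of the stack = LRU page
        return page
```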

Stack Implementation Using a Stack to Record the Most Recent Page References (Figure: stack contents over time, with the most recently used page on top and the least recently used page on the bottom) 26

LRU Approximation Algorithms The LRU algorithm as described previously is too slow, even with specialized hardware Instead of finding exactly which page was least recently used, an approximation will do One possible approach for approximating the least recently used page is to use a reference bit - Associate a reference bit with each page, initially = 0 - When a page is referenced, set the bit to 1 - When it is time to replace a page, replace any page whose reference bit = 0 (if one exists) - Can also use multiple reference bits to maintain a longer history and thus more closely approximate a true LRU algorithm 27
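A minimal sketch of the multiple-reference-bit ("aging") idea: each page keeps an 8-bit history byte that is shifted right at every timer tick, with the current reference bit moved into the high-order position, so the page with the smallest value approximates the LRU victim. The names and data structures here are illustrative:

```python
def timer_tick(history, referenced):
    """Age every page's 8-bit history byte at a timer interrupt.

    history    : dict mapping page -> 8-bit int (larger value = used more recently)
    referenced : set of pages whose hardware reference bit was set this interval
    """
    for page in history:
        history[page] >>= 1                 # shift the history right by one bit
        if page in referenced:
            history[page] |= 0x80           # record this interval's reference bit on top
    referenced.clear()                      # reference bits are cleared for the next interval

def approx_lru_victim(history):
    """Replace the page with the smallest history value."""
    return min(history, key=history.get)
```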

Counting-Based Page Replacement Keep a counter of the number of references that have been made to each page - Not common Least-frequently-used (LFU) algorithm: replaces the page with the smallest count Most-frequently-used (MFU) algorithm: replaces the page with the largest count - Based on the argument that the page with the smallest count was probably just brought in and has yet to be used 28
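A tiny sketch of the counting idea (purely illustrative); LFU takes the minimum count and MFU the maximum:

```python
from collections import defaultdict

counts = defaultdict(int)      # page -> number of references so far

def reference(page):
    counts[page] += 1

def lfu_victim(resident_pages):
    """LFU: replace the resident page with the smallest reference count."""
    return min(resident_pages, key=lambda p: counts[p])

def mfu_victim(resident_pages):
    """MFU: replace the resident page with the largest reference count."""
    return max(resident_pages, key=lambda p: counts[p])
```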

Speeding Up Page Replacement Always maintain a pool of free frames - No need to search for a free frame when a page fault occurs - Load the page into a free frame, and select a victim to evict and add to the free pool - When convenient, evict the victim No longer have to wait for the victim frame to get paged out before the new page can get paged in Possibly, keep a list of modified pages - When the backing store is otherwise idle, write the pages out and mark them non-dirty Possibly, keep free frame contents intact and note what is in them - If referenced again before reused, no need to load contents again from disk - Generally useful to reduce the penalty if the wrong victim frame was selected 29

Allocation of Frames Each process needs some minimum number of frames that it cannot run without - Even a single instruction may require more than a single frame of physical memory Example: the IBM 370 needs up to 6 pages to handle the SS MOVE instruction: - The instruction is 6 bytes, and therefore might span 2 pages - 2 pages to handle the 'from' operand - 2 pages to handle the 'to' operand The maximum number of frames that can be allocated is the total number of frames in the system Two major frame allocation schemes - Fixed allocation Equal allocation Proportional allocation - Priority allocation 30

Types of Fixed Allocation Equal allocation: every process in the system is allocated an equal share of the available frames - If the number of frames is m and the number of processes is n, each process is allocated m/n frames - For example, if there are 100 frames (after allocating frames for the OS) and 5 processes, give each process 100/5 = 20 frames Maybe keep some as a free-frame buffer pool for faster paging - Pros: Easy to implement - Cons: Why allocate 20 frames to a process that might actually only need 5? Can be very wasteful A higher priority process doesn't get any more frames than a lower priority process 31

Types of Fixed Allocation (Cont.) Proportional allocation: allocate available frames according to the size of each process - Larger processes are allocated more frames than smaller processes - Dynamic, as the degree of multiprogramming changes - Pros: Better allocation scheme than equal allocation - Cons: A higher priority process doesn't get any more frames than a lower priority process 32
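The usual rule is to give a process of size s_i an allocation of a_i = (s_i / S) * m frames, where S is the sum of all process sizes and m is the number of available frames. A quick sketch with made-up numbers:

```python
def proportional_allocation(sizes, total_frames):
    """a_i = floor(s_i / S * m): frames in proportion to each process's size."""
    S = sum(sizes)
    return [int(s / S * total_frames) for s in sizes]

# Illustrative numbers: 62 free frames shared by a 10-page and a 127-page process.
print(proportional_allocation([10, 127], 62))   # -> [4, 57]; leftovers can seed a free pool
```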

Priority Allocation Use a proportional allocation scheme using process priorities rather than process size If process Pi generates a page fault, - Select for replacement one of its frames (local replacement) - Select for replacement a frame from a process with lower priority number (global replacement) 33

Global vs. Local Allocation Global replacement: a process selects a replacement frame from the set of ALL frames; one process can take a frame from another - Process execution time can vary greatly due to another process stealing frames - Tends to have greater throughput (the number of processes that complete execution per unit time) - More common than local replacement Local replacement: each process selects a replacement frame from only its own set of allocated frames - Per-process performance is more consistent - But memory may be underutilized 34

Thrashing If a process does not have enough pages, the page-fault rate is very high - Page fault to get a page - Replace an existing frame - But quickly need the replaced page back - This leads to: Low CPU utilization The operating system thinking that it needs to increase the degree of multiprogramming Another process added to the system Thrashing: a process is busy swapping pages in and out (it spends more time paging than executing) 35

Thrashing (Cont.) 36

Demand Paging and Thrashing Why does demand paging work? Locality model - Process migrates from one locality to another - Localities may overlap Why does thrashing occur? Σ size of locality > total memory size - Limit effects by using local or priority page replacement 37

Locality In A Memory-Reference Pattern 38