Principles of Operating Systems


Principles of Operating Systems Lecture 21-23 - Virtual Memory Ardalan Amiri Sani (ardalan@uci.edu) [lecture slides contain some content adapted from previous slides by Prof. Nalini Venkatasubramanian and the course text slides by Silberschatz]

Virtual Memory Background Demand paging Performance of demand paging Page Replacement Page Replacement Algorithms Allocation of Frames Thrashing Demand Segmentation 2

Need for Virtual Memory Virtual memory is the separation of user logical memory from physical memory. Only PART of the program needs to be in memory for execution. Logical address space can therefore be much larger than physical address space. Need to allow pages to be swapped in and out. Virtual memory is typically implemented via Paging or Segmentation 3

Paging/Segmentation Policies Fetch Strategies When should a page or segment be brought into primary memory from secondary (disk) storage? Demand Fetch Anticipatory Fetch Placement Strategies When a page or segment is brought into memory, where is it to be put? Paging - trivial Segmentation - significant problem Replacement Strategies Which page/segment should be replaced if there is not enough room for a required page/segment? 4

Demand Paging Bring a page into memory only when it is needed. Less I/O needed Less memory needed Faster response More users The first reference to an unmapped page will trap to the OS with a page fault. The OS looks at another table to decide: Invalid reference - abort. Valid but not in memory - bring the page in. 5

Valid-Invalid Bit With each page table entry a valid-invalid bit is associated (1 = in memory, 0 = not in memory). Initially, the valid-invalid bit is set to 0 on all entries. During address translation, if the valid-invalid bit in the page table entry is 0, a page fault occurs. [Figure: example page-table snapshot showing a frame # column and a valid-invalid bit column for each entry] 6

Handling a Page Fault Page is needed - reference to page Step 1: Page fault occurs - trap to OS (process suspends). Step 2: Check if the virtual memory address is valid. Kill process if invalid reference. If valid reference, and page not in memory, continue. Step 3: Bring into memory - Find a free page frame, find content on disk and fetch disk content into page frame. When disk read has completed, add virtual memory mapping to indicate that page is in memory. Step 4: Restart instruction interrupted by illegal address trap. The process will continue as if page had always been in memory. 7
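
To make the four steps concrete, here is a minimal user-space sketch of a fault handler, assuming a tiny page table and a fixed set of frames; the names (pte_t, find_free_frame, handle_page_fault) are invented for this illustration and do not correspond to any real kernel API.

    /* Illustrative sketch only: a user-space simulation of the page-fault steps
     * above. The data structures are invented for this example. */
    #include <stdio.h>
    #include <stdlib.h>

    #define NUM_PAGES   8
    #define NUM_FRAMES  4

    typedef struct {
        int frame;   /* frame number if resident */
        int valid;   /* valid-invalid bit: 1 = in memory, 0 = not in memory */
    } pte_t;

    static pte_t page_table[NUM_PAGES];
    static int   free_frames[NUM_FRAMES] = {1, 1, 1, 1};   /* 1 = frame is free */

    /* Step 3 helper: find a free frame. */
    static int find_free_frame(void) {
        for (int f = 0; f < NUM_FRAMES; f++)
            if (free_frames[f]) { free_frames[f] = 0; return f; }
        return -1;   /* no free frame */
    }

    static void handle_page_fault(int page) {
        if (page < 0 || page >= NUM_PAGES) {           /* Step 2: invalid reference */
            fprintf(stderr, "invalid reference: killing process\n");
            exit(EXIT_FAILURE);
        }
        int frame = find_free_frame();                 /* Step 3: get a frame ... */
        if (frame < 0) {
            /* No free frame: a page-replacement algorithm would pick a victim here. */
            fprintf(stderr, "out of frames\n");
            exit(EXIT_FAILURE);
        }
        printf("reading page %d from disk into frame %d\n", page, frame);
        page_table[page].frame = frame;                /* ... and update the mapping */
        page_table[page].valid = 1;
        /* Step 4: the faulting instruction would now be restarted. */
    }

    int main(void) {
        handle_page_fault(3);   /* first reference to page 3: fault, then resident */
        printf("page 3 valid=%d frame=%d\n", page_table[3].valid, page_table[3].frame);
        return 0;
    }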

What happens if there is no free frame? Page replacement - find some page in memory that is not really in use and swap it out. Need page replacement algorithm Performance Issue - need an algorithm which will result in the minimum number of page faults and page replacements. Same page may be brought into memory many times. 8

Performance of Demand Paging Page Fault Rate p: 0 ≤ p ≤ 1.0 If p = 0, no page faults If p = 1, every reference is a page fault Effective Access Time (EAT) EAT = (1 - p) × memory access + p × (page fault overhead + swap page out (only when needed) + swap page in + restart overhead + memory access) 9

Demand Paging Example Memory is always full Need to get rid of a page on every fault Memory access time = 1 microsecond 50% of the time the page that is being replaced has been modified and therefore needs to be swapped out. Page fault and restart overheads are negligible Swap page time (either in or out) = 10 msec = 10,000 microsec Expected swap cost per fault = 0.5 × 10,000 (swap out) + 10,000 (swap in) = 15,000 microsec EAT = (1 - p) × 1 + p × (15,000 + 1) = 1 + 15,000p microsec EAT grows linearly with the page fault rate. 10
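
A small sketch, assuming nothing beyond the numbers on this slide, that evaluates EAT = 1 + 15,000p for a few illustrative fault rates; the chosen rates are arbitrary.

    /* Sketch: plugging the slide's numbers into the EAT formula. The 15,000 us
     * term is the expected swap cost: 0.5 * 10,000 (write-back when dirty)
     * + 10,000 (read in). */
    #include <stdio.h>

    int main(void) {
        double mem_access = 1.0;                     /* microseconds */
        double swap_cost  = 0.5 * 10000 + 10000;     /* 15,000 us expected */
        double rates[]    = {0.0, 0.001, 0.01};      /* arbitrary example fault rates */

        for (int i = 0; i < 3; i++) {
            double p   = rates[i];
            double eat = (1 - p) * mem_access + p * (swap_cost + mem_access);
            printf("p = %.3f  ->  EAT = %.1f us\n", p, eat);   /* 1 + 15,000p */
        }
        return 0;
    }

For example, a fault rate of only 0.001 already raises the effective access time from 1 microsec to about 16 microsec.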

Page Replacement Determines which page needs to be evicted to disk when space is needed in memory. Use the modify (dirty) bit to keep track of modified pages Evicting unmodified pages is cheaper than modified ones since unmodified pages do not need to be written back to disk. With page replacement, a large virtual memory can be provided on a smaller physical memory. 11

Page Replacement Algorithms Want lowest page-fault rate. Evaluate algorithm by running it on a particular string of memory references (reference string) and computing the number of page faults on that string. Assume reference string in examples to follow is 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5. 12

Page Replacement Strategies The Principle of Optimality Replace the page that will not be used again the farthest time into the future. Random Page Replacement Choose a page randomly FIFO - First in First Out Replace the page that has been in memory the longest. LRU - Least Recently Used (and its approximations) Replace the page that has not been used for the longest time. LFU - Least Frequently Used Replace the page that is used least often. MFU - Most Frequently Used Replace the page that is used most often. 13

First-In-First-Out (FIFO) Algorithm Reference String: 1,2,3,4,1,2,5,1,2,3,4,5 Assume x memory frames (x pages can be in memory at a time per process) 3 frames: 9 page faults 4 frames: 10 page faults FIFO Replacement - Belady's Anomaly: more frames does not always mean fewer page faults; here adding a frame produces more. 14
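
The fault counts above can be reproduced with a short simulation. The sketch below (illustrative only; array sizes and function names are made up) runs FIFO on the reference string with 3 and then 4 frames and prints 9 and 10 faults, exhibiting Belady's anomaly.

    /* Sketch: FIFO page replacement on the slide's reference string. */
    #include <stdio.h>

    static int fifo_faults(const int *refs, int n, int nframes) {
        int frames[16], next = 0, used = 0, faults = 0;

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < used; j++)
                if (frames[j] == refs[i]) { hit = 1; break; }
            if (hit) continue;

            faults++;
            if (used < nframes) {
                frames[used++] = refs[i];          /* free frame still available */
            } else {
                frames[next] = refs[i];            /* evict the oldest page */
                next = (next + 1) % nframes;
            }
        }
        return faults;
    }

    int main(void) {
        int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
        int n = sizeof refs / sizeof refs[0];
        printf("3 frames: %d faults\n", fifo_faults(refs, n, 3));   /* 9  */
        printf("4 frames: %d faults\n", fifo_faults(refs, n, 4));   /* 10 */
        return 0;
    }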

Optimal Algorithm Replace the page that will not be used for the longest period of time. Not possible in a practical setting Generally used to measure how well an algorithm performs. Reference String: 1,2,3,4,1,2,5,1,2,3,4,5 4 frames: 6 page faults 15
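
Because OPT needs the entire future reference string, it can only be simulated offline. A minimal sketch (invented helper names, illustrative only) that reproduces the 6 faults above with 4 frames:

    /* Sketch: the optimal (OPT) policy, simulated with full knowledge of the
     * reference string. */
    #include <stdio.h>

    static int next_use(const int *refs, int n, int from, int page) {
        for (int k = from; k < n; k++)
            if (refs[k] == page) return k;
        return n + 1;                      /* never used again */
    }

    static int opt_faults(const int *refs, int n, int nframes) {
        int frames[16], used = 0, faults = 0;
        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < used; j++)
                if (frames[j] == refs[i]) { hit = 1; break; }
            if (hit) continue;

            faults++;
            if (used < nframes) { frames[used++] = refs[i]; continue; }

            int victim = 0, farthest = -1;     /* evict the page whose next use is farthest away */
            for (int j = 0; j < used; j++) {
                int nu = next_use(refs, n, i + 1, frames[j]);
                if (nu > farthest) { farthest = nu; victim = j; }
            }
            frames[victim] = refs[i];
        }
        return faults;
    }

    int main(void) {
        int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
        printf("OPT, 4 frames: %d faults\n", opt_faults(refs, 12, 4));   /* 6 */
        return 0;
    }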

Least Recently Used (LRU) Algorithm Use the recent past as an approximation of the near future. Choose the page that has not been used for the longest period of time. May require hardware assistance to implement. Reference String: 1,2,3,4,1,2,5,1,2,3,4,5 4 frames: 8 page faults 16
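
For comparison, a minimal LRU sketch using per-frame timestamps (essentially the counter implementation described on the next slide); on the same reference string with 4 frames it reports the 8 faults quoted above. Names and sizes are arbitrary.

    /* Sketch: LRU replacement via per-frame "last used" timestamps. */
    #include <stdio.h>

    static int lru_faults(const int *refs, int n, int nframes) {
        int page[16], last_used[16], used = 0, faults = 0;

        for (int t = 0; t < n; t++) {
            int hit = -1;
            for (int j = 0; j < used; j++)
                if (page[j] == refs[t]) { hit = j; break; }

            if (hit >= 0) { last_used[hit] = t; continue; }   /* hit: refresh timestamp */

            faults++;
            if (used < nframes) {
                page[used] = refs[t]; last_used[used] = t; used++;
            } else {
                int victim = 0;                                /* evict least recently used */
                for (int j = 1; j < used; j++)
                    if (last_used[j] < last_used[victim]) victim = j;
                page[victim] = refs[t]; last_used[victim] = t;
            }
        }
        return faults;
    }

    int main(void) {
        int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
        printf("LRU, 4 frames: %d faults\n", lru_faults(refs, 12, 4));   /* 8 */
        return 0;
    }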

Implementation of LRU algorithm Counter Implementation Every page entry has a counter; every time the page is referenced through this entry, copy the clock into the counter. When a page needs to be evicted, look at the counters to determine which page to evict (the page with the smallest time value). Needs to search the page entries, which is slow Needs hardware to update entries; software update will be slow Stack Implementation Keeps a stack of page numbers in a doubly linked form When a page is referenced, move it to the top of the stack No search required for replacement Stack update mainly in software, which is slow 17

LRU Approximation Algorithms Reference Bit With each page, associate a bit, initially = 0. When the page is referenced, the bit is set to 1 (by hardware). Replace a page whose bit is 0 (if one exists). We do not know the order of use, however. 18

LRU Approximation Algorithms Additional Reference Bits Algorithm Record reference bits at regular intervals. Keep 8 bits (say) for each page in a table in memory. Periodically, shift the reference bit into the high-order bit, i.e. shift the other bits to the right, dropping the lowest bit. During page replacement, interpret the 8 bits as an unsigned integer. The page with the lowest number is the LRU page. 19
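
A sketch of this aging scheme, assuming one 8-bit history byte per page and a sampling routine that would be driven by a periodic timer in a real system; the data layout and reference-bit values below are made up for illustration.

    /* Sketch: 8-bit "aging" history per page; the page with the smallest byte
     * is the LRU approximation. */
    #include <stdint.h>
    #include <stdio.h>

    #define NPAGES 4

    static uint8_t history[NPAGES];   /* one aging byte per page */

    /* Called at each sampling interval: shift the hardware reference bit into
     * the high-order position, dropping the lowest bit. */
    static void age_pages(const int *ref_bit) {
        for (int p = 0; p < NPAGES; p++)
            history[p] = (uint8_t)((ref_bit[p] << 7) | (history[p] >> 1));
    }

    static int lru_candidate(void) {
        int victim = 0;
        for (int p = 1; p < NPAGES; p++)
            if (history[p] < history[victim]) victim = p;
        return victim;
    }

    int main(void) {
        /* Reference bits observed over three intervals (rows), pages 0..3. */
        int intervals[3][NPAGES] = { {1, 0, 1, 1}, {0, 0, 1, 0}, {1, 0, 0, 1} };
        for (int i = 0; i < 3; i++) age_pages(intervals[i]);
        for (int p = 0; p < NPAGES; p++) printf("page %d: 0x%02x\n", p, history[p]);
        printf("replace page %d\n", lru_candidate());   /* page 1: never referenced */
        return 0;
    }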

LRU Approximation Algorithms Second Chance Core is a FIFO replacement algorithm Implemented with a circular queue (hence sometimes called the clock algorithm) Need a reference bit. When a page is selected, inspect the reference bit. If the reference bit = 0, replace the page. If the page to be replaced (in clock order) has reference bit = 1, then: set the reference bit to 0, leave the page in memory, and consider the next page (in clock order) subject to the same rules. 20
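
A minimal sketch of clock-order victim selection, assuming the reference bits live in a plain array indexed by frame; real implementations keep them in page-table entries or frame descriptors.

    /* Sketch: clock (second-chance) victim selection. Sweep the circular list
     * of frames, clearing reference bits until a frame with bit 0 is found. */
    #include <stdio.h>

    #define NFRAMES 4

    static int ref_bit[NFRAMES] = {1, 1, 0, 1};   /* example hardware reference bits */
    static int hand = 0;                          /* clock hand position */

    static int pick_victim(void) {
        for (;;) {
            if (ref_bit[hand] == 0) {             /* second chance already used up */
                int victim = hand;
                hand = (hand + 1) % NFRAMES;
                return victim;
            }
            ref_bit[hand] = 0;                    /* give the page a second chance */
            hand = (hand + 1) % NFRAMES;
        }
    }

    int main(void) {
        printf("victim frame: %d\n", pick_victim());   /* frame 2 in this example */
        return 0;
    }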

LRU Approximation Algorithms Enhanced Second Chance Need a reference bit and a modify bit as an ordered pair. 4 situations are possible: (0,0) - neither recently used nor modified - best page to replace. (0,1) - not recently used, but modified - not quite as good, because the page will need to be written to disk before replacement. (1,0) - recently used but not modified - probably will be used again soon. (1,1) - recently used and modified - probably will be used again soon, and will need to be written out before replacement. Used in the Macintosh virtual memory management scheme 21

Counting Algorithms Keep a counter of the number of references that have been made to each page. LFU (least frequently used) algorithm replaces page with smallest count. Based on the argument that the page with the smallest count will not be used frequently in the future either Variation - shift bits right over time to gradually retire the effect of old references. MFU (most frequently used) algorithm replaces page with highest count. Based on the argument that the page with the smallest count was probably just brought in and has yet to be used. 22

Page Buffering Algorithm Keep a pool of free frames When a page fault occurs, choose a victim frame. Desired page is read into a free frame from the pool before the victim is written out. Allows the process to restart sooner; the victim is later written out and its frame added to the free frame pool. Expansion 1 Maintain a list of modified pages. When the paging device is idle, write modified pages to disk and clear the modify bit. Expansion 2 Keep frame contents in the pool of free frames and remember which page was in each frame. If the desired page is in the free frame pool, no need to page in. 23

Allocation of Frames Single user case is simple User is allocated any free frame Problem: Demand paging + multiprogramming Two major allocation schemes: Fixed allocation Priority allocation Each process needs a minimum number of pages, determined by the instruction set architecture. Example IBM 370: 6 pages to handle the MVC (storage to storage move) instruction The instruction is 6 bytes, might span 2 pages. 2 pages to handle the 'from' operand. 2 pages to handle the 'to' operand. Will need 8 pages if MVC is used as an operand of EXECUTE 24

Fixed Allocation Equal Allocation e.g., if 100 frames and 5 processes, give each process 20 frames. Proportional Allocation Allocate according to the size of the process Sj = size of process Pj S = Σ Sj m = total number of frames aj = allocation for Pj = (Sj / S) × m If m = 64, S1 = 10, S2 = 127 then a1 = 10/137 × 64 ≈ 5 a2 = 127/137 × 64 ≈ 59 25
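
Spelling out the arithmetic: 10/137 × 64 ≈ 4.67 and 127/137 × 64 ≈ 59.33, i.e. roughly 5 and 59 frames. A tiny sketch that computes these values (rounding is a choice; a real allocator might floor and hand out leftover frames separately):

    /* Sketch: proportional frame allocation for the numbers on this slide. */
    #include <stdio.h>

    int main(void) {
        int m = 64;
        int size[] = {10, 127};
        int total = size[0] + size[1];            /* S = 137 */

        for (int i = 0; i < 2; i++) {
            double a = (double)size[i] / total * m;
            printf("a%d = %.2f  (about %d frames)\n", i + 1, a, (int)(a + 0.5));
        }
        return 0;   /* prints roughly 4.67 -> 5 and 59.33 -> 59 */
    }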

Priority Allocation May want to give high priority process more memory than low priority process. Use a proportional allocation scheme using priorities instead of size 26

Global vs. Local Replacement Global Replacement Selects a replacement frame from the set of all frames. One process can take a frame from another. A process may not be able to control its own page fault rate. Local Replacement Selects from the process's own set of allocated frames. A process may be slowed down even if other, less-used pages of memory are available. Global replacement has better throughput Hence more commonly used. 27

Priority Allocation (cont.) If process Pi generates a page fault select for replacement one of its frames (local allocation) select for replacement a frame from a process with a lower priority number (global allocation) 28

Thrashing If a process does not have enough frames, the page-fault rate is very high. This leads to: low CPU utilization. OS thinks that it needs to increase the degree of multiprogramming Another process is added to the system. System throughput plunges... Thrashing A process is busy swapping pages in and out. In other words, a process is spending more time paging than executing. 29

Thrashing (cont.) 30

Thrashing (cont.) Locality: set of pages that are actively used together. Computations have locality! Process migrates from one locality to another. Localities may overlap. 31

Thrashing (cont.) Why does thrashing occur? Σ (size of locality) > total memory size 32

Working Set Model The working set is an approximation of the program's current locality Δ ≡ working-set window ≡ a fixed number of page references, e.g. 10,000 instructions WSSj (working set size of process Pj) = total number of pages referenced in the most recent Δ (varies in time) If Δ too small, will not encompass entire locality. If Δ too large, will encompass several localities. If Δ = ∞, will encompass entire program. 33

Use working set model to avoid thrashing D = Σ WSSj = total demand for frames If D > m (the number of available frames), thrashing occurs Policy: If D > m, then suspend one of the processes. 34
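
A trivial sketch of this policy with made-up working-set sizes: compute D as the sum of the WSSi values and compare it against the available frames m.

    /* Sketch: the working-set admission test above. The wss[] values and m are
     * hypothetical. */
    #include <stdio.h>

    int main(void) {
        int wss[] = {12, 30, 25, 18};     /* hypothetical working-set sizes */
        int m = 80;                       /* available frames */
        int D = 0;
        for (int i = 0; i < 4; i++) D += wss[i];    /* D = sum of WSS_i = 85 */

        if (D > m)
            printf("D = %d > m = %d: suspend a process to avoid thrashing\n", D, m);
        else
            printf("D = %d <= m = %d: no thrashing expected\n", D, m);
        return 0;
    }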

Measure working set in practice Use interval timer + the reference bit Example: Δ = 10,000 references Timer interrupts after every 5,000 references. Whenever the timer interrupts, copy each page's reference bit and then reset it to 0. Keep in memory 2 bits for each page (indicating whether the page was used within the last 10,000 to 15,000 references). If one of the bits in memory = 1, the page is in the working set. Not completely accurate - cannot tell where within the interval the reference occurred. Improvement - 10 bits and interrupt every 1,000 time units. 35

Page Fault Frequency Scheme Control thrashing by establishing an acceptable page-fault rate (an upper bound and a lower bound). If the page fault rate is too low, the process loses a frame. If the page fault rate is too high, the process needs and gains a frame. 36

Demand Paging Issues Prepaging Tries to prevent a high level of initial paging. E.g., if a process is suspended, keep a list of the pages in its working set and bring the entire working set back before restarting the process. Tradeoff - page faults vs. prepaging cost - depends on how many of the pages brought back are reused. Page Size Selection considerations: fragmentation, table size, I/O overhead, locality 37

Demand Paging Issues Program Structure Array A[1024,1024] of integers Assume each row is stored on one page Assume only one frame in memory Program 1 for j := 1 to 1024 do for i := 1 to 1024 do A[i,j] := 0; 1024 * 1024 page faults Program 2 for i := 1 to 1024 do for j:= 1 to 1024 do A[i,j] := 0; 1024 page faults 38
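
The same two loop orders written in C, which (like the slide's example) stores arrays in row-major order: the column-major loop touches a different row, and therefore potentially a different page, on every assignment, while the row-major loop finishes one row (one page, under the slide's assumptions) before moving on. An illustrative sketch only.

    /* Sketch: Program 1 vs. Program 2 from the slide, in C. */
    #define N 1024

    static int A[N][N];

    void column_major_zero(void) {        /* "Program 1": 1024 * 1024 page faults */
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                A[i][j] = 0;
    }

    void row_major_zero(void) {           /* "Program 2": 1024 page faults */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                A[i][j] = 0;
    }

    int main(void) {
        column_major_zero();
        row_major_zero();
        return 0;
    }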

Demand Paging Issues I/O considerations Say I/O is done to/from virtual memory. I/O is implemented by an I/O controller. Process A issues an I/O request CPU is given to other processes Page faults occur - process A's pages are paged out. I/O now tries to occur - but the frame is being used by another process. Solution 1: never execute I/O to process memory - I/O takes place into system memory. Copying Overhead!! Solution 2: Lock pages in memory - locked pages cannot be selected for replacement. 39
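
As a user-level analogue of Solution 2, POSIX provides mlock(2)/munlock(2) for pinning a buffer so its pages cannot be selected for replacement while I/O is in flight. A minimal sketch (buffer size is arbitrary; error handling kept short):

    /* Sketch: pinning an I/O buffer in memory with POSIX mlock/munlock. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = 4096 * 16;                 /* a 16-page buffer (assuming 4 KiB pages) */
        char *buf = malloc(len);
        if (buf == NULL) return 1;

        if (mlock(buf, len) != 0) {             /* pin the pages: not eligible for replacement */
            perror("mlock");
            free(buf);
            return 1;
        }

        /* ... issue the I/O into buf here; its frames stay resident ... */

        munlock(buf, len);                      /* unpin once the I/O has completed */
        free(buf);
        return 0;
    }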

Demand Segmentation Used when segmentation is used. OS allocates memory in segments, which it keeps track of through segment descriptors. A segment descriptor contains a valid bit to indicate whether the segment is currently in memory. If the segment is in main memory, access continues. If not in memory, a segment fault is triggered. The segment is then brought into memory. 40