Chapter 8: Virtual Memory. Operating System Concepts
1 Chapter 8: Virtual Memory Silberschatz, Galvin and Gagne 2009
2 Chapter 8: Virtual Memory Background Demand Paging Copy-on-Write Page Replacement Allocation of Frames Thrashing Memory-Mapped Files Allocating Kernel Memory Other Considerations Operating-System Examples 8.2 Silberschatz, Galvin and Gagne 2009
3 Objectives To describe the benefits of a virtual memory system To explain the concepts of demand paging, page-replacement algorithms, and allocation of page frames To discuss the principle of the working-set model 8.3 Silberschatz, Galvin and Gagne 2009
4 Background Code needs to be in memory to execute, but entire program rarely used Error code, unusual routines, large data structures Entire program code not needed at same time Consider ability to execute partially-loaded program Program no longer constrained by limits of physical memory Programs, individually and together, could be larger than physical memory 8.4 Silberschatz, Galvin and Gagne 2009
5 Background Virtual memory - separation of user logical memory from physical memory Only part of the program needs to be in memory for execution Logical address space can therefore be much larger than physical address space Allows address spaces to be shared by several processes Allows for more efficient process creation More programs running concurrently Less I/O needed to load or swap processes Virtual memory can be implemented via: Demand paging Demand segmentation 8.5 Silberschatz, Galvin and Gagne 2009
6 Virtual Memory That is Larger Than Physical Memory 8.6 Silberschatz, Galvin and Gagne 2009
7 Virtual-address Space 8.7 Silberschatz, Galvin and Gagne 2009
8 Virtual Address Space Enables sparse address spaces with holes left for growth, dynamically linked libraries, etc. System libraries shared via mapping into virtual address space Shared memory by mapping pages read-write into virtual address space Pages can be shared during fork(), speeding process creation 8.8 Silberschatz, Galvin and Gagne 2009
9 Shared Library Using Virtual Memory 8.9 Silberschatz, Galvin and Gagne 2009
10 Demand Paging Could bring entire process into memory at load time Or bring a page into memory only when it is needed Less I/O needed, no unnecessary I/O Less memory needed Faster response More users Page is needed -> reference to it: invalid reference -> abort; not-in-memory -> bring to memory Lazy swapper - never swaps a page into memory unless page will be needed Swapper that deals with pages is a pager 8.10 Silberschatz, Galvin and Gagne 2009
11 Transfer of a Paged Memory to Contiguous Disk Space 8.11 Silberschatz, Galvin and Gagne 2009
12 Valid-Invalid Bit With each page table entry a valid-invalid bit is associated (v -> in-memory / memory resident, i -> not-in-memory) Initially valid-invalid bit is set to i on all entries Example of a page table snapshot: [figure: page table listing frame # and valid-invalid bit, first few entries v, the rest i] During address translation, if valid-invalid bit in page table entry is i -> page fault 8.12 Silberschatz, Galvin and Gagne 2009
13 Page Table When Some Pages Are Not in Main Memory 8.13 Silberschatz, Galvin and Gagne 2009
14 Page Fault If there is a reference to a page, first reference to that page will trap to operating system: page fault 1. Operating system looks at another table to decide: Invalid reference -> abort Just not in memory 2. Get empty frame 3. Swap page into frame via scheduled disk operation 4. Reset tables to indicate page now in memory Set validation bit = v 5. Restart the instruction that caused the page fault 8.14 Silberschatz, Galvin and Gagne 2009
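The following is a minimal user-space sketch of the fault-handling steps above, not kernel code: it simulates a page table with a valid-invalid bit and "services" a fault by claiming a free frame. Page and frame counts, the trace, and all names are illustrative assumptions; page replacement is deliberately left out until the later slides.

```c
/* Sketch only: simulates the valid-invalid bit check and steps 1-5 above. */
#include <stdio.h>
#include <stdbool.h>

#define NPAGES  8                 /* pages in the logical address space (assumed) */
#define NFRAMES 4                 /* physical frames available (assumed)          */

typedef struct { bool valid; int frame; } pte_t;   /* valid-invalid bit + frame #  */

static pte_t page_table[NPAGES];
static int   next_free_frame = 0;

static int access_page(int page)
{
    if (page < 0 || page >= NPAGES) {              /* 1. invalid reference -> abort */
        puts("invalid reference -> abort");
        return -1;
    }
    if (!page_table[page].valid) {                 /* trap: page fault              */
        int frame = next_free_frame++;             /* 2. get empty frame (assumes one exists;
                                                         replacement comes later)   */
        printf("page fault on page %d: reading it into frame %d\n", page, frame);
                                                   /* 3. scheduled disk read        */
        page_table[page].frame = frame;            /* 4. reset tables,              */
        page_table[page].valid = true;             /*    set validation bit = v     */
    }                                              /* 5. restart the instruction    */
    return page_table[page].frame;
}

int main(void)
{
    int trace[] = {0, 1, 0, 2, 3, 1};
    for (size_t i = 0; i < sizeof trace / sizeof *trace; i++)
        printf("page %d -> frame %d\n", trace[i], access_page(trace[i]));
    return 0;
}
```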
15 Aspects of Demand Paging Extreme case - start process with no pages in memory OS sets instruction pointer to first instruction of process, non-memory-resident -> page fault And the same for every other page on its first access Pure demand paging Actually, a given instruction could access multiple pages -> multiple page faults Pain decreased because of locality of reference Hardware support needed for demand paging Page table with valid / invalid bit Secondary memory (swap device with swap space) Instruction restart 8.15 Silberschatz, Galvin and Gagne 2009
16 Instruction Restart Consider an instruction that could access several different locations: block move, auto increment/decrement location Restart the whole operation? What if source and destination overlap? 8.16 Silberschatz, Galvin and Gagne 2009
17 Steps in Handling a Page Fault 8.17 Silberschatz, Galvin and Gagne 2009
18 Performance of Demand Paging Stages in Demand Paging 1. Trap to the operating system 2. Save the user registers and process state 3. Determine that the interrupt was a page fault 4. Check that the page reference was legal and determine the location of the page on the disk 5. Issue a read from the disk to a free frame: 1. Wait in a queue for this device until the read request is serviced 2. Wait for the device seek and/or latency time 3. Begin the transfer of the page to a free frame 6. While waiting, allocate the CPU to some other user 7. Receive an interrupt from the disk I/O subsystem (I/O completed) 8. Save the registers and process state for the other user 9. Determine that the interrupt was from the disk 10. Correct the page table and other tables to show page is now in memory 11. Wait for the CPU to be allocated to this process again 12. Restore the user registers, process state, and new page table, and then resume the interrupted instruction 8.18 Silberschatz, Galvin and Gagne 2009
19 Performance of Demand Paging (Cont.) Page Fault Rate 0 <= p <= 1 if p = 0, no page faults if p = 1, every reference is a fault Effective Access Time (EAT) EAT = (1 - p) x memory access + p x (page fault overhead + swap page out + swap page in + restart overhead) 8.19 Silberschatz, Galvin and Gagne 2009
20 Demand Paging Example Memory access time = 200 nanoseconds Average page-fault service time = 8 milliseconds EAT = (1 - p) x 200 + p x 8,000,000 = 200 + p x 7,999,800 (in nanoseconds) If one access out of 1,000 causes a page fault, then EAT = 8.2 microseconds. This is a slowdown by a factor of 40!! If we want performance degradation < 10 percent: 220 > 200 + 7,999,800 x p, so 20 > 7,999,800 x p, so p < 0.0000025, i.e. less than one page fault in every 400,000 memory accesses 8.20 Silberschatz, Galvin and Gagne 2009
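A small program that plugs the slide's numbers into the EAT formula; the two probabilities checked are the 1-in-1,000 fault rate and the bound derived for a 10 percent degradation.

```c
/* Effective access time for the example above: 200 ns memory access,
 * 8 ms (8,000,000 ns) page-fault service time. */
#include <stdio.h>

int main(void)
{
    const double mem_ns = 200.0, fault_ns = 8000000.0;
    double probs[] = {0.001, 20.0 / 7999800.0};   /* 1-in-1000 faults; <10% slowdown bound */

    for (int i = 0; i < 2; i++) {
        double p = probs[i];
        double eat = (1.0 - p) * mem_ns + p * fault_ns;   /* EAT formula */
        printf("p = %.9f  ->  EAT = %.1f ns  (slowdown x%.1f)\n", p, eat, eat / mem_ns);
    }
    return 0;
}
/* p = 0.001 gives EAT of about 8200 ns (8.2 microseconds), roughly a 40x slowdown;
 * keeping degradation under 10% needs p < 20/7,999,800, about 0.0000025. */
```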
21 Demand Paging Optimizations Copy entire process image to swap space at process load time Then page in and out of swap space Used in older BSD Unix Demand page in from program binary on disk, but discard rather than paging out when freeing frame Used in Solaris and current BSD 8.21 Silberschatz, Galvin and Gagne 2009
22 Copy-on-Write Copy-on-Write (COW) allows both parent and child processes to initially share the same pages in memory If either process modifies a shared page, only then is the page copied COW allows more efficient process creation as only modified pages are copied In general, free pages are allocated from a pool of zero-fill-on-demand pages Why zero-out a page before allocating it? vfork() variation on fork() system call has parent suspend and child using copy-on-write address space of parent Designed to have child call exec() Very efficient 8.22 Silberschatz, Galvin and Gagne 2009
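A hedged POSIX sketch of why copy-on-write matters for fork(): the large buffer is not physically duplicated at fork time, yet the first write in the child copies only the touched page and the parent's data stays intact. The buffer size is an arbitrary assumption; the COW mechanism itself is invisible here, only its semantics are observed.

```c
/* fork() with copy-on-write semantics: child's write does not affect the parent. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    size_t n = 16 * 1024 * 1024;            /* large buffer: COW makes fork() cheap anyway */
    char *buf = malloc(n);
    if (!buf) return 1;
    memset(buf, 'A', n);

    pid_t pid = fork();
    if (pid == 0) {                          /* child: first write to a shared page */
        buf[0] = 'B';                        /* triggers a private copy of that page only */
        printf("child sees:  %c\n", buf[0]);
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("parent sees: %c\n", buf[0]);     /* still 'A': the modification was not shared */
    free(buf);
    return 0;
}
```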
23 Before Process 1 Modifies Page C 8.23 Silberschatz, Galvin and Gagne 2009
24 After Process 1 Modifies Page C 8.24 Silberschatz, Galvin and Gagne 2009
25 What Happens if There is no Free Frame? Used up by process pages Also in demand from the kernel, I/O buffers, etc. How much to allocate to each? Page replacement - find some page in memory, but not really in use, page it out Algorithm - terminate? swap out? replace the page? Performance - want an algorithm which will result in minimum number of page faults Same page may be brought into memory several times 8.25 Silberschatz, Galvin and Gagne 2009
26 Page Replacement Prevent over-allocation of memory by modifying page-fault service routine to include page replacement Use modify (dirty) bit to reduce overhead of page transfers - only modified pages are written to disk Page replacement completes separation between logical memory and physical memory - large virtual memory can be provided on a smaller physical memory 8.26 Silberschatz, Galvin and Gagne 2009
27 Need For Page Replacement 8.27 Silberschatz, Galvin and Gagne 2009
28 Basic Page Replacement 1. Find the location of the desired page on disk 2. Find a free frame: - If there is a free frame, use it - If there is no free frame, use a page replacement algorithm to select a victim frame - Write victim frame to disk if dirty 3. Bring the desired page into the (newly) free frame; update the page and frame tables 4. Continue the process by restarting the instruction that caused the trap Note now potentially 2 page transfers per page fault - increasing EAT 8.28 Silberschatz, Galvin and Gagne 2009
29 Page Replacement 8.29 Silberschatz, Galvin and Gagne 2009
30 Page and Frame Replacement Algorithms Frame-allocation algorithm determines How many frames to give each process Which frames to replace Page-replacement algorithm Want lowest page-fault rate on both first access and re-access Evaluate algorithm by running it on a particular string of memory references (reference string) and computing the number of page faults on that string String is just page numbers, not full addresses Repeated access to the same page does not cause a page fault In all our examples, the reference string is 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1 8.30 Silberschatz, Galvin and Gagne 2009
31 Graph of Page Faults Versus The Number of Frames 8.31 Silberschatz, Galvin and Gagne 2009
32 First-In-First-Out (FIFO) Algorithm Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1 3 frames (3 pages can be in memory at a time per process): 15 page faults Can vary by reference string: consider 1,2,3,4,1,2,5,1,2,3,4,5 Adding more frames can cause more page faults! Belady's Anomaly How to track ages of pages? Just use a FIFO queue 8.32 Silberschatz, Galvin and Gagne 2009
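A small simulator for FIFO replacement on the slide's reference string with 3 frames; it replays the example and reports the fault count. This is a sketch written for this transcription, not code from the book.

```c
/* FIFO page replacement on the reference string above, 3 frames. */
#include <stdio.h>

#define NFRAMES 3

int main(void)
{
    int ref[] = {7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1};
    int n = sizeof ref / sizeof *ref;
    int frames[NFRAMES];
    int used = 0, next_victim = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == ref[i]) { hit = 1; break; }
        if (!hit) {
            faults++;
            if (used < NFRAMES) {
                frames[used++] = ref[i];            /* free frame available        */
            } else {
                frames[next_victim] = ref[i];       /* evict oldest resident page  */
                next_victim = (next_victim + 1) % NFRAMES;
            }
        }
    }
    printf("FIFO, %d frames: %d page faults\n", NFRAMES, faults);
    return 0;
}
```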
33 FIFO Page Replacement 8.33 Silberschatz, Galvin and Gagne 2009
34 FIFO Illustrating Belady's Anomaly 8.34 Silberschatz, Galvin and Gagne 2009
35 Optimal Algorithm Replace page that will not be used for the longest period of time 9 page faults is optimal for the example on the next slide How do you know this? Can't read the future Used for measuring how well your algorithm performs 8.35 Silberschatz, Galvin and Gagne 2009
36 Optimal Page Replacement 8.36 Silberschatz, Galvin and Gagne 2009
37 Least Recently Used (LRU) Algorithm Use past knowledge rather than future Replace page that has not been used for the longest period of time Associate time of last use with each page 12 faults - better than FIFO but worse than OPT Generally good algorithm and frequently used But how to implement? 8.37 Silberschatz, Galvin and Gagne 2009
38 LRU Algorithm (Cont.) Counter implementation Every page entry has a counter; every time page is referenced through this entry, copy the clock into the counter When a page needs to be changed, look at the counters to find smallest value - search through table needed Stack implementation Keep a stack of page numbers in a doubly linked form: Page referenced: move it to the top - requires 6 pointers to be changed But each update more expensive No search for replacement LRU and OPT are cases of stack algorithms that don't have Belady's Anomaly 8.38 Silberschatz, Galvin and Gagne 2009
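A sketch of the counter implementation described above: each resident page remembers the "clock" value of its last use, and the victim is the page with the smallest counter. Run on the same reference string and 3 frames as the FIFO example, it reproduces the 12 faults quoted on the previous slide.

```c
/* Counter-based LRU on the example reference string, 3 frames. */
#include <stdio.h>

#define NFRAMES 3

int main(void)
{
    int ref[] = {7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1};
    int n = sizeof ref / sizeof *ref;
    int page[NFRAMES], last_use[NFRAMES];
    int used = 0, faults = 0;

    for (int clock = 0; clock < n; clock++) {
        int r = ref[clock], hit = -1;
        for (int j = 0; j < used; j++)
            if (page[j] == r) { hit = j; break; }
        if (hit >= 0) { last_use[hit] = clock; continue; }  /* reference: copy clock into counter */

        faults++;
        int slot;
        if (used < NFRAMES) {
            slot = used++;                                  /* still a free frame            */
        } else {
            slot = 0;                                       /* victim = smallest counter,    */
            for (int j = 1; j < NFRAMES; j++)               /* found by searching the table  */
                if (last_use[j] < last_use[slot]) slot = j;
        }
        page[slot] = r;
        last_use[slot] = clock;
    }
    printf("LRU, %d frames: %d page faults\n", NFRAMES, faults);
    return 0;
}
```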
39 Use Of A Stack to Record The Most Recent Page References 8.39 Silberschatz, Galvin and Gagne 2009
40 LRU Approximation Algorithms LRU needs special hardware and is still slow Reference bit With each page associate a bit, initially = 0 When page is referenced, bit set to 1 Replace any with reference bit = 0 (if one exists) We do not know the order, however Second-chance algorithm Generally FIFO, plus hardware-provided reference bit Clock replacement If page to be replaced has reference bit = 0 -> replace it; if reference bit = 1 then: set reference bit 0, leave page in memory, replace next page, subject to same rules 8.40 Silberschatz, Galvin and Gagne 2009
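A minimal sketch of the second-chance (clock) policy just described: the hand sweeps the frames circularly, clears reference bits that are set, and evicts the first frame whose bit is already 0. Frame count and trace are made-up illustration values.

```c
/* Second-chance (clock) replacement sketch. */
#include <stdio.h>

#define NFRAMES 4

typedef struct { int page; int refbit; int used; } frame_t;

static frame_t frames[NFRAMES];
static int hand = 0;                          /* the clock hand */

static int choose_victim(void)
{
    for (;;) {
        if (frames[hand].refbit == 0) {       /* second chance already spent: evict */
            int victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        frames[hand].refbit = 0;              /* give the page a second chance */
        hand = (hand + 1) % NFRAMES;
    }
}

static void reference(int page)
{
    for (int i = 0; i < NFRAMES; i++)         /* hit: hardware would set the reference bit */
        if (frames[i].used && frames[i].page == page) { frames[i].refbit = 1; return; }
    for (int i = 0; i < NFRAMES; i++)         /* fault: use a free frame if any */
        if (!frames[i].used) { frames[i] = (frame_t){page, 1, 1}; return; }
    int v = choose_victim();                  /* otherwise run the clock */
    printf("evict page %d for page %d\n", frames[v].page, page);
    frames[v] = (frame_t){page, 1, 1};
}

int main(void)
{
    int trace[] = {1, 2, 3, 4, 1, 5, 2, 6};
    for (size_t i = 0; i < sizeof trace / sizeof *trace; i++)
        reference(trace[i]);
    return 0;
}
```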
41 Second-Chance (clock) Page-Replacement Algorithm 8.41 Silberschatz, Galvin and Gagne 2009
42 Counting Algorithms Keep a counter of the number of references that have been made to each page Not common LFU Algorithm: replaces page with smallest count MFU Algorithm: based on the argument that the page with the smallest count was probably just brought in and has yet to be used 8.42 Silberschatz, Galvin and Gagne 2009
43 Page-Buffering Algorithms Keep a pool of free frames, always Then frame available when needed, not found at fault time Read page into free frame and select victim to evict and add to free pool When convenient, evict victim Possibly, keep list of modified pages When backing store otherwise idle, write pages there and set to non-dirty Possibly, keep free frame contents intact and note what is in them If referenced again before reused, no need to load contents again from disk Generally useful to reduce penalty if wrong victim frame selected 8.43 Silberschatz, Galvin and Gagne 2009
44 Applications and Page Replacement All of these algorithms have OS guessing about future page access Some applications have better knowledge, e.g. databases Memory-intensive applications can cause double buffering OS keeps copy of page in memory as I/O buffer Application keeps page in memory for its own work Operating system can give direct access to the disk, getting out of the way of the applications Raw disk mode Bypasses buffering, locking, etc. 8.44 Silberschatz, Galvin and Gagne 2009
45 Allocation of Frames Each process needs minimum number of frames Example: IBM 370 - 6 pages to handle SS MOVE instruction: instruction is 6 bytes, might span 2 pages 2 pages to handle from 2 pages to handle to Maximum of course is total frames in the system Two major allocation schemes fixed allocation priority allocation Many variations 8.45 Silberschatz, Galvin and Gagne 2009
46 Fixed Allocation Equal allocation For example, if there are 100 frames (after allocating frames for the OS) and 5 processes, give each process 20 frames Keep some as free frame buffer pool Proportional allocation Allocate according to the size of process Dynamic as degree of multiprogramming, process sizes change s_i = size of process p_i, S = Σ s_i, m = total number of frames, allocation for p_i is a_i = (s_i / S) x m Example: m = 64, s_1 = 10, s_2 = 127, a_1 = 10/137 x 64 ≈ 5, a_2 = 127/137 x 64 ≈ 59 8.46 Silberschatz, Galvin and Gagne 2009
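The proportional-allocation arithmetic from the slide, as a tiny program; the process sizes and frame count are the slide's example values, and rounding to whole frames is left to the caller.

```c
/* Proportional frame allocation: a_i = (s_i / S) * m, with m = 64 frames. */
#include <stdio.h>

int main(void)
{
    int m = 64;
    int s[] = {10, 127};                        /* process sizes in pages */
    int nproc = sizeof s / sizeof *s, S = 0;

    for (int i = 0; i < nproc; i++) S += s[i];  /* S = sum of all sizes   */
    for (int i = 0; i < nproc; i++)
        printf("process %d: a_%d = %d/%d * %d = %.1f frames\n",
               i + 1, i + 1, s[i], S, m, (double)s[i] / S * m);
    return 0;
}
/* Prints about 4.7 frames for the 10-page process and 59.3 for the 127-page one;
 * in practice these are rounded to whole frames (roughly 5 and 59). */
```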
47 Priority Allocation Use a proportional allocation scheme using priorities rather than size If process P_i generates a page fault, select for replacement one of its frames, or select for replacement a frame from a process with lower priority number 8.47 Silberschatz, Galvin and Gagne 2009
48 Global vs. Local Allocation Global replacement - process selects a replacement frame from the set of all frames; one process can take a frame from another But then process execution time can vary greatly But greater throughput so more common Local replacement - each process selects from only its own set of allocated frames More consistent per-process performance But possibly underutilized memory 8.48 Silberschatz, Galvin and Gagne 2009
49 Non-Uniform Memory Access So far all memory accessed equally Many systems are NUMA - speed of access to memory varies Consider system boards containing CPUs and memory, interconnected over a system bus Optimal performance comes from allocating memory close to the CPU on which the thread is scheduled And modifying the scheduler to schedule the thread on the same system board when possible Solved by Solaris by creating lgroups Structure to track CPU / memory low-latency groups Used by scheduler and pager When possible schedule all threads of a process and allocate all memory for that process within the lgroup 8.49 Silberschatz, Galvin and Gagne 2009
50 Thrashing If a process does not have enough pages, the page-fault rate is very high Page fault to get page Replace existing frame But quickly need replaced frame back This leads to: Low CPU utilization Operating system thinking that it needs to increase the degree of multiprogramming Another process added to the system Thrashing - a process is busy swapping pages in and out 8.50 Silberschatz, Galvin and Gagne 2009
51 Thrashing (Cont.) 8.51 Silberschatz, Galvin and Gagne 2009
52 Demand Paging and Thrashing Why does demand paging work? Locality model Process migrates from one locality to another Localities may overlap Why does thrashing occur? Σ size of locality > total memory size Limit effects by using local or priority page replacement 8.52 Silberschatz, Galvin and Gagne 2009
53 Locality In A Memory-Reference Pattern 8.53 Silberschatz, Galvin and Gagne 2009
54 Working-Set Model Δ ≡ working-set window ≡ a fixed number of page references Example: 10,000 instructions WSS_i (working set of Process P_i) = total number of pages referenced in the most recent Δ (varies in time) if Δ too small will not encompass entire locality if Δ too large will encompass several localities if Δ = ∞ will encompass entire program D = Σ WSS_i ≡ total demand frames Approximation of locality if D > m -> Thrashing Policy if D > m, then suspend or swap out one of the processes 8.54 Silberschatz, Galvin and Gagne 2009
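A sketch of the working-set computation: WSS at time t is the number of distinct pages referenced in the last Δ references. The trace and window size are illustrative assumptions, chosen small so the output can be checked by hand.

```c
/* Working-set size over a reference trace, window DELTA. */
#include <stdio.h>

#define DELTA   5
#define MAXPAGE 16

int main(void)
{
    int trace[] = {1,2,1,3,2,1,4,4,4,4,7,7,7,5,5,7};
    int n = sizeof trace / sizeof *trace;

    for (int t = DELTA - 1; t < n; t++) {
        int seen[MAXPAGE] = {0}, wss = 0;
        for (int k = t - DELTA + 1; k <= t; k++)      /* window of the last DELTA references */
            if (!seen[trace[k]]) { seen[trace[k]] = 1; wss++; }
        printf("t=%2d  WSS=%d\n", t, wss);
    }
    return 0;
}
/* Summing WSS_i over all processes approximates the total demand D;
 * when D exceeds the available frames m, the system is heading for thrashing. */
```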
55 Working-set model 8.55 Silberschatz, Galvin and Gagne 2009
56 Keeping Track of the Working Set Approximate with interval timer + a reference bit Example: Δ = 10,000 Timer interrupts after every 5,000 time units Keep in memory 2 bits for each page Whenever a timer interrupts, copy and set the values of all reference bits to 0 If one of the bits in memory = 1 -> page in working set Why is this not completely accurate? Improvement = 10 bits and interrupt every 1,000 time units 8.56 Silberschatz, Galvin and Gagne 2009
57 Page-Fault Frequency More direct approach than WSS Establish acceptable page-fault frequency rate and use local replacement policy If actual rate too low, process loses frame If actual rate too high, process gains frame 8.57 Silberschatz, Galvin and Gagne 2009
58 Working Sets and Page Fault Rates 8.58 Silberschatz, Galvin and Gagne 2009
59 Memory-Mapped Files Memory-mapped file I/O allows file I/O to be treated as routine memory access by mapping a disk block to a page in memory A file is initially read using demand paging A page-sized portion of the file is read from the file system into a physical page Subsequent reads/writes to/from the file are treated as ordinary memory accesses Simplifies and speeds file access by driving file I/O through memory rather than read() and write() system calls Also allows several processes to map the same file, allowing the pages in memory to be shared But when does written data make it to disk? Periodically and / or at file close() time For example, when the pager scans for dirty pages 8.59 Silberschatz, Galvin and Gagne 2009
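A short POSIX example of the idea: the file's contents are mapped with mmap() and then read as ordinary memory, with pages demand-paged in from the file on first touch. The line-counting loop is just an arbitrary way to touch the mapping.

```c
/* Memory-mapped file I/O: map a file read-only and scan it as memory.
 * Usage: pass any readable file path as argv[1]. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); return 1; }

    /* Map the whole file into the process address space. */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    long newlines = 0;                        /* reading p[i] is just a memory access */
    for (off_t i = 0; i < st.st_size; i++)
        if (p[i] == '\n') newlines++;
    printf("%ld lines in %lld bytes\n", newlines, (long long)st.st_size);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```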
60 Memory-Mapped File Technique for all I/O Some OSes use memory-mapped files for standard I/O Process can explicitly request memory mapping a file via mmap() system call Now file mapped into process address space For standard I/O (open(), read(), write(), close()), mmap anyway But map file into kernel address space Process still does read() and write() Copies data to and from kernel space and user space Uses efficient memory management subsystem Avoids needing separate subsystem COW can be used for read/write non-shared pages Memory-mapped files can be used for shared memory (although again via separate system calls) 8.60 Silberschatz, Galvin and Gagne 2009
61 Memory Mapped Files 8.61 Silberschatz, Galvin and Gagne 2009
62 Memory-Mapped Shared Memory in Windows 8.62 Silberschatz, Galvin and Gagne 2009
63 Allocating Kernel Memory Treated differently from user memory Often allocated from a free-memory pool Kernel requests memory for structures of varying sizes Some kernel memory needs to be contiguous, e.g. for device I/O 8.63 Silberschatz, Galvin and Gagne 2009
64 Buddy System Allocates memory from fixed-size segment consisting of physically-contiguous pages Memory allocated using power-of-2 allocator Satisfies requests in units sized as power of 2 Request rounded up to next highest power of 2 When smaller allocation needed than is available, current chunk split into two buddies of next-lower power of 2 Continue until appropriate sized chunk available For example, assume 256KB chunk available, kernel requests 21KB Split into A_L and A_R of 128KB each One further divided into B_L and B_R of 64KB One further into C_L and C_R of 32KB each - one used to satisfy request Advantage - quickly coalesce unused chunks into larger chunk Disadvantage - fragmentation 8.64 Silberschatz, Galvin and Gagne 2009
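The splitting arithmetic from the example, as code: a 21 KB request is rounded up to 32 KB, and the 256 KB chunk is halved (256 -> 128 -> 64 -> 32) until a block of that size exists. This only prints the splits; it is not an allocator.

```c
/* Buddy-system splitting for the 21 KB request against a 256 KB chunk. */
#include <stdio.h>

static unsigned round_up_pow2(unsigned kb)
{
    unsigned p = 1;
    while (p < kb) p <<= 1;          /* round request up to the next power of 2 */
    return p;
}

int main(void)
{
    unsigned chunk = 256, request = 21;          /* sizes in KB */
    unsigned need = round_up_pow2(request);

    printf("request %u KB -> allocate a %u KB block\n", request, need);
    for (unsigned size = chunk; size > need; size /= 2)
        printf("split %u KB into two %u KB buddies\n", size, size / 2);
    printf("hand out one %u KB buddy; its twin stays free for later coalescing\n", need);
    return 0;
}
```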
65 Buddy System Allocator 8.65 Silberschatz, Galvin and Gagne 2009
66 Slab Allocator Alternate strategy Slab is one or more physically contiguous pages Cache consists of one or more slabs Single cache for each unique kernel data structure Each cache filled with objects - instantiations of the data structure When cache created, filled with objects marked as free When structures stored, objects marked as used If slab is full of used objects, next object allocated from empty slab If no empty slabs, new slab allocated Benefits include no fragmentation, fast memory request satisfaction 8.66 Silberschatz, Galvin and Gagne 2009
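A toy version of the slab idea, not the kernel's implementation: one simulated slab of preconstructed objects for a single data structure, handed out and returned through a free list, so allocation is constant-time and causes no fragmentation within the slab. The object type and slab size are made up for illustration.

```c
/* Toy object cache in the spirit of a slab allocator. */
#include <stdio.h>
#include <stddef.h>

typedef struct obj { int field_a, field_b; struct obj *next_free; } obj_t;

#define SLAB_OBJS 8

static obj_t  slab[SLAB_OBJS];      /* one physically contiguous slab (simulated)   */
static obj_t *free_list;

static void cache_init(void)        /* on cache creation, every object starts free  */
{
    for (int i = 0; i < SLAB_OBJS; i++)
        slab[i].next_free = (i + 1 < SLAB_OBJS) ? &slab[i + 1] : NULL;
    free_list = &slab[0];
}

static obj_t *cache_alloc(void)     /* pop a free object; NULL means "need a new slab" */
{
    obj_t *o = free_list;
    if (o) free_list = o->next_free;
    return o;
}

static void cache_free(obj_t *o)    /* mark the object free again; it stays constructed */
{
    o->next_free = free_list;
    free_list = o;
}

int main(void)
{
    cache_init();
    obj_t *a = cache_alloc(), *b = cache_alloc();
    printf("allocated objects at %p and %p\n", (void *)a, (void *)b);
    cache_free(a);
    cache_free(b);
    return 0;
}
```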
67 Slab Allocation 8.67 Silberschatz, Galvin and Gagne 2009
68 Other Considerations -- Prepaging Prepaging To reduce the large number of page faults that occur at process startup Prepage all or some of the pages a process will need, before they are referenced But if prepaged pages are unused, I/O and memory were wasted Assume s pages are prepaged and α of the pages is used Is cost of s * α saved page faults > or < than the cost of prepaging s * (1 - α) unnecessary pages? α near zero -> prepaging loses 8.68 Silberschatz, Galvin and Gagne 2009
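The prepaging trade-off in numbers, under the simplifying assumption (mine, not the slide's) that a wasted prepage costs about the same I/O as one avoided fault: prepaging s pages pays off only when α·s saved faults outweigh (1-α)·s wasted page-ins.

```c
/* Prepaging trade-off: alpha*s faults saved vs (1-alpha)*s pages wasted. */
#include <stdio.h>

int main(void)
{
    double s = 100.0;                        /* pages prepaged (illustrative)      */
    double alphas[] = {0.1, 0.5, 0.9};       /* fraction of prepaged pages used    */

    for (int i = 0; i < 3; i++) {
        double a = alphas[i];
        double saved = a * s, wasted = (1.0 - a) * s;
        printf("alpha=%.1f: %.0f faults saved vs %.0f pages wasted -> prepaging %s\n",
               a, saved, wasted, saved > wasted ? "wins" : "loses");
    }
    return 0;
}
```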
69 Other Issues - Page Size Sometimes OS designers have a choice Especially if running on custom-built CPU Page size selection must take into consideration: Fragmentation Page table size Resolution I/O overhead Number of page faults Locality TLB size and effectiveness Always power of 2, usually in the range 2^12 (4,096 bytes) to 2^22 (4,194,304 bytes) On average, growing over time 8.69 Silberschatz, Galvin and Gagne 2009
70 Other Issues - TLB Reach TLB Reach - the amount of memory accessible from the TLB TLB Reach = (TLB Size) x (Page Size) Ideally, the working set of each process is stored in the TLB Otherwise there is a high degree of page faults Increase the Page Size This may lead to an increase in fragmentation as not all applications require a large page size Provide Multiple Page Sizes This allows applications that require larger page sizes the opportunity to use them without an increase in fragmentation 8.70 Silberschatz, Galvin and Gagne 2009
71 Other Issues - Program Structure Program structure int data[128][128]; Each row is stored in one page Program 1: for (j = 0; j < 128; j++) for (i = 0; i < 128; i++) data[i][j] = 0; strides down columns, touching a different page on each access: 128 x 128 = 16,384 page faults Program 2: for (i = 0; i < 128; i++) for (j = 0; j < 128; j++) data[i][j] = 0; finishes each row (one page) before moving on: 128 page faults 8.71 Silberschatz, Galvin and Gagne 2009
72 Other Issues - I/O Interlock I/O Interlock - pages must sometimes be locked into memory Consider I/O - pages that are used for copying a file from a device must be locked from being selected for eviction by a page replacement algorithm 8.72 Silberschatz, Galvin and Gagne 2009
73 Reason Why Frames Used For I/O Must Be In Memory 8.73 Silberschatz, Galvin and Gagne 2009
74 Operating System Examples Windows XP Solaris 8.74 Silberschatz, Galvin and Gagne 2009
75 Windows XP Uses demand paging with clustering. Clustering brings in pages surrounding the faulting page Processes are assigned working set minimum and working set maximum Working set minimum is the minimum number of pages the process is guaranteed to have in memory A process may be assigned pages up to its working set maximum When the amount of free memory in the system falls below a threshold, automatic working set trimming is performed to restore the amount of free memory Working set trimming removes pages from processes that have pages in excess of their working set minimum 8.75 Silberschatz, Galvin and Gagne 2009
76 Solaris Maintains a list of free pages to assign faulting processes Lotsfree - threshold parameter (amount of free memory) to begin paging Desfree - threshold parameter to increase paging Minfree - threshold parameter to begin swapping Paging is performed by pageout process Pageout scans pages using modified clock algorithm Scanrate is the rate at which pages are scanned. This ranges from slowscan to fastscan Pageout is called more frequently depending upon the amount of free memory available Priority paging gives priority to process code pages 8.76 Silberschatz, Galvin and Gagne 2009
77 Solaris 2 Page Scanner 8.77 Silberschatz, Galvin and Gagne 2009
78 End of Chapter 8 Silberschatz, Galvin and Gagne 2009