
Hardware and Control Structures Two characteristics are key to paging and segmentation: 1. All memory references within a process are logical addresses that are dynamically translated into physical addresses at run time. This allows a process to be swapped in and out of main memory, occupying different regions of memory at different times. 2. A process may be broken into a number of small chunks that need not be contiguously located in main memory. The biggest advantage of these two characteristics is that it is not necessary for the whole process to be in memory while it executes: chunks not yet loaded can be brought in whenever required during execution, and chunks no longer required can be swapped out. The chunks of a process that are in main memory are called the resident set of the process. Advantages: More processes can be maintained in main memory. Process size is independent of the amount of main memory present in the system. Fall 2010

Paging and Segmentation Characteristics

Simple Paging:
- Main memory partitioned into small fixed-size chunks called frames
- Program broken into pages by the compiler or memory management system
- Internal fragmentation within frames; no external fragmentation
- Operating system must maintain a page table for each process showing which frame each page occupies, plus a free frame list
- Processor uses page number and offset to calculate the absolute address
- All the pages of a process must be in main memory for the process to run, unless overlays are used

Virtual Memory Paging: as simple paging, except:
- Not all pages of a process need be in main memory frames for the process to run; pages may be read in as needed
- Reading a page into main memory may require writing a page out to disk

Simple Segmentation:
- Main memory not partitioned
- Program segments specified by the programmer to the compiler (i.e., the decision is made by the programmer)
- No internal fragmentation; external fragmentation
- Operating system must maintain a segment table for each process showing the load address and length of each segment, plus a list of free holes in main memory
- Processor uses segment number and offset to calculate the absolute address
- All the segments of a process must be in main memory for the process to run, unless overlays are used

Virtual Memory Segmentation: as simple segmentation, except:
- Not all segments of a process need be in main memory for the process to run; segments may be read in as needed
- Reading a segment into main memory may require writing one or more segments out to disk

Locality & Virtual Memory The virtual memory scheme looks attractive, but is it worth the effort? Consider a process with large arrays of data: 1. Its execution may be confined to a specific area of an array, so there is no point in loading all the chunks. 2. Occasionally, if a reference is made to data that is not in main memory, it can be brought in at that point.

Locality & Virtual Memory Therefore, not only can more processes be loaded into memory, it also saves time because less swapping is required. The operating system employs a swapping strategy to decide which pieces to keep and which to swap out. If for some reason the processor spends most of its time swapping these chunks in and out rather than executing instructions, a condition called thrashing, the system becomes very inefficient. So the operating system has to employ a scheme that avoids thrashing, which has given birth to the principle of locality: program and data references within a process tend to cluster. Hence the hypothesis that, over a short period of time, only a few chunks will be needed is a valid one.

Paging [Figure: a virtual address consists of a page number and an offset; the page table entry contains the present (P) and modified (M) bits, other control bits, and the frame number.]

Segmentation [Figure: a virtual address consists of a segment number and an offset; the segment table entry contains the P and M bits, other control bits, and the segment's length and base address.]

Combined Scheme [Figure: a virtual address consists of a segment number, a page number, and an offset; the segment table entry holds control bits, length, and segment base, while the page table entry holds the P and M bits, other control bits, and the frame number.]

Page Table Structure 1. The basic task in reading a word involves the translation of a logical address into a physical one. 2. The page table size depends on the process size. 3. The page table has to be in main memory. 4. Hardware support is required for the translation.

[Figure: paging mechanism. A register holds the page table pointer; the page number from the virtual address indexes the page table to obtain a frame number, which is combined with the offset to address the page frame in main memory.]
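The translation step above can be sketched in a few lines. This is a minimal illustration, not any particular architecture: the 4 KB page size and the page table contents are assumptions chosen for the example.

```python
PAGE_SIZE = 4096          # assumed 4 KB pages -> 12-bit offset
OFFSET_BITS = 12

# Hypothetical page table: page number -> (present bit, frame number)
page_table = {0: (1, 7), 1: (1, 3), 2: (0, None)}

def translate(vaddr):
    """Split a virtual address, look up the frame, rebuild the physical address."""
    page = vaddr >> OFFSET_BITS
    offset = vaddr & (PAGE_SIZE - 1)
    present, frame = page_table.get(page, (0, None))
    if not present:
        # In a real system this would trap to the OS as a page fault
        raise RuntimeError("page fault: page %d not resident" % page)
    return (frame << OFFSET_BITS) | offset

# Page 1 maps to frame 3, so virtual 0x1ABC translates to physical 0x3ABC
print(hex(translate(0x1ABC)))
```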

Page Table Structure Typically, systems have one page table per process, and the virtual space occupied by a process can be large. For the VAX, with a page size of 512 bytes (2^9) and a possible process size of up to 2 GB (2^31), there could be 2^22 page table entries per process. This is a huge amount of memory devoted to page tables alone and is unacceptable. So, in the same way that pages are kept in virtual memory, page tables are also kept in virtual memory and brought in when needed. When a process is running, only the part of its page table that includes the entries for the currently executing pages resides in main memory. A two-level scheme is used for this mechanism.

Two-Level Paging Structure With a page size of 4 KB (2^12) and a virtual address space of 4 GB (2^32), the total space is 2^20 pages. Each page table entry (PTE) needs 4 bytes, so 2^20 page table entries need 4 MB, which is itself 2^10 pages. This page table is therefore kept in virtual memory and mapped by a root page table with 2^10 PTEs, occupying only 4 KB.

Two-Level Paging Structure [Figure: a 4 KB root page table maps the 4 MB user page table, which in turn maps the 4 GB address space.]

Two-Level Address Translation [Figure: the 32-bit virtual address is split into 10 bits to index the root page table (1024 PTEs, located via a register holding the page table pointer), 10 bits to index the selected 4 KB page table (1024 PTEs), and a 12-bit offset within the page frame.]
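The arithmetic behind the two-level structure can be checked directly. The sketch below reproduces the numbers from the text (4 KB pages, 32-bit address space, 4-byte PTEs) and shows the 10/10/12-bit split of a virtual address:

```python
PAGE_SIZE = 2**12           # 4 KB pages
VAS = 2**32                 # 4 GB virtual address space
PTE_SIZE = 4                # bytes per page table entry

num_pages = VAS // PAGE_SIZE              # 2**20 pages in the address space
page_table_bytes = num_pages * PTE_SIZE   # 4 MB of PTEs for a full table
pt_pages = page_table_bytes // PAGE_SIZE  # the table itself spans 2**10 pages
root_bytes = pt_pages * PTE_SIZE          # root table: 1024 PTEs = 4 KB

def split(vaddr):
    """Decompose a 32-bit virtual address into root index, table index, offset."""
    return (vaddr >> 22) & 0x3FF, (vaddr >> 12) & 0x3FF, vaddr & 0xFFF

print(num_pages, page_table_bytes, pt_pages, root_bytes)
print(split(0xFFFFFFFF))
```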

Translation Lookaside Buffer 1. Every virtual memory reference can cause two physical memory accesses: one to fetch the page table entry and one to fetch the actual data. 2. Memory access time is therefore, in principle, doubled. 3. To overcome this, a high-speed cache for page table entries is used, known as the translation lookaside buffer (TLB). 4. The cache functions in the usual way: it holds the entries for the most recently used pages. 5. Given a virtual address, the processor first examines the TLB; if the entry is present, a TLB hit results, the frame number is retrieved, and the physical address is calculated. 6. On a TLB miss, the processor uses the page number to index the process page table and examines the PTE. If the present bit is set, the page is in main memory, so the processor retrieves the frame number from the table and forms the physical address. The processor also updates the TLB with this entry. 7. If the page is not in main memory, a page fault occurs and the operating system brings in the desired page.

Translation Lookaside Buffer [Figure: on a TLB hit, the frame number comes directly from the TLB; on a miss, the page table is consulted; on a page fault, the page is loaded from secondary memory into main memory.]

TLB Flow Diagram [Figure: flowchart of a memory reference. The CPU checks the TLB; on a hit, the physical address is formed. On a miss, the page table is accessed; if the page is in main memory, the TLB is updated and the physical address formed. If the page is not in main memory, a page fault routine reads the page from disk (replacing a page first if memory is full), the page table is updated, and control returns to the faulted instruction.]

Associative Mapping [Figure: a page table is searched by direct mapping, indexed by page number, while the TLB is searched by associative mapping, with all entries compared against the page number in parallel.]
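The hit/miss flow in the diagram above can be sketched with a small software TLB. This is an illustration only: the page table contents are hypothetical, the TLB holds two entries, and least-recently-used eviction is assumed (real TLBs implement this in hardware).

```python
from collections import OrderedDict

PAGE_TABLE = {0: 5, 1: 9, 2: 1, 3: 7}   # hypothetical page -> frame map
TLB_SIZE = 2

tlb = OrderedDict()   # page -> frame, ordered by recency of use

def lookup(page):
    """Return (frame, 'hit' or 'miss') for a page reference."""
    if page in tlb:
        tlb.move_to_end(page)            # refresh recency on a hit
        return tlb[page], "hit"
    frame = PAGE_TABLE[page]             # the extra memory access on a miss
    tlb[page] = frame                    # update the TLB with this entry
    if len(tlb) > TLB_SIZE:
        tlb.popitem(last=False)          # evict the least recently used entry
    return frame, "miss"

for p in [0, 1, 0, 2, 0]:
    print(p, lookup(p))
```

Note how the repeated references to page 0 hit in the TLB and skip the page table access entirely; this is the locality effect the TLB exploits.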

TLB & Cache Operation [Figure: the virtual address is first translated, by the TLB or, on a miss, the page table, into a real address; the real address, split into tag and remainder, is then presented to the cache, with main memory accessed on a cache miss.]

Page Size Page size is a very important hardware design decision:
- A smaller page size reduces internal fragmentation.
- A smaller page size, on the other hand, means more pages per process, and more pages means a larger page table.
- For active processes, part of the page tables must themselves be in virtual memory; this can sometimes cause a double page fault for a single reference, one fault to bring in the needed portion of the page table and a second to bring in the process page.
- The physical characteristics of a disk (because of rotation) favor a larger page size for efficient block transfer of data.
- The page fault rate is affected by the page size. With a smaller page size there are more pages in memory, and after a while the pages in memory will all contain portions of the process near recent references, so the page fault rate is low. As page size increases, each individual page contains locations further and further from any particular recent reference, the principle of locality fades, and the page fault rate increases. Eventually the page fault rate falls again as the page size approaches the size of the whole process.

Page Size
- The page fault rate is also determined by the number of frames allocated to a process; software policy in terms of memory allocation thus affects the hardware design decision about page size.
- Page size is also dependent on the size of main memory and on program size itself. The trend is toward larger main memories, but applications are becoming more complex, and contemporary programming techniques are making the principle of locality less effective: the object-oriented approach favors small program and data modules, which tends to scatter references over a large number of objects, and multithreaded applications may sometimes cause abrupt changes in the instruction stream.
- For a given TLB size, as the memory used by processes grows, locality decreases and the hit ratio on TLB accesses drops; in such circumstances the TLB can become a bottleneck. Increasing the TLB size to hold more entries is not a simple choice, because the TLB interacts with other hardware aspects of the system.
- Supporting multiple page sizes improves performance under different circumstances.

Typical Paging Behavior [Figure: page fault rate as a function of page size and of the number of allocated frames.]

Page Sizes [Table: example page sizes used by various architectures.]

Address Translation in Segmentation [Figure: the segment number from the virtual address indexes the segment table, located via the segment table pointer register; the base from the segment table entry is added to the offset to form the physical address, with the length field used for bounds checking.]

There are a number of advantages to segmentation:
- Simple handling of growing data structures
- Independent recompilation of programs
- Sharing
- Protection

Paging & Segmentation Combined 1. Paging is transparent to the programmer. 2. In contrast to paging, segmentation is visible to the programmer. 3. In the combined approach, the user address space is divided into a number of segments at the discretion of the programmer. 4. Each segment is subdivided into a number of pages, and the page size is equal to the frame size in main memory. 5. If a segment is smaller than a page, it occupies just one page.

Combined Address Translation [Figure: the segment number indexes the segment table, located via the segment table pointer register, to find the page table for that segment; the page number then indexes the page table to obtain the frame number, which is combined with the offset to address the page frame.]

Protection & Sharing Each segment table entry has a length and a base address, so programs cannot access memory beyond the limits of a segment. Sharing can be achieved by including references to a segment in the segment tables of more than one process. A similar mechanism is available with paging, but there it is transparent to the programmer. A more sophisticated mechanism, called ring protection, can also be used, based on two rules: 1. A program can access data only on the same ring or a less privileged ring. 2. A program can call services residing on the same or a more privileged ring.

Protection & Sharing [Figure: protection relationships, e.g. a dispatcher and processes A, B, and C, where particular accesses are disallowed: no access at all, branch instructions disallowed but data references allowed, or data references disallowed.]

Operating System Software The design of the memory management portion of an operating system depends on three areas of choice: 1. Virtual memory support or not. 2. Paging, segmentation, or both. 3. The algorithms employed for the various aspects of memory management. The choices in areas 1 and 2 are hardware dependent; earlier UNIX systems ran without virtual memory support because the address translation mechanism was not supported by the hardware of those systems. Today virtually every system supports virtual memory using paging or segmentation; in the combined approach, most of the issues relate to paging. The choices in area 3 are the domain of operating system software and are mostly related to performance: for example, minimizing page faults to avoid the overhead of switching to other processes during page I/O, and minimizing the probability of referencing a word on a missing page. There is no definite policy that guarantees good performance; it really depends on the size of main memory, the relative speeds of main and secondary memory, the number of processes competing for resources, and the execution behavior of the applications.

Virtual Memory Policies [Table: operating system policies for virtual memory — fetch, placement, replacement, resident set management, cleaning, and load control.]

Fetch Policy The fetch policy determines when a page should be brought into main memory: 1. Demand Paging: a page is brought into main memory only when a reference to that page is actually made. Typically, at the start, the page fault rate is high; as more pages are brought in, the principle of locality starts working and the rate drops. 2. Prepaging: pages other than the one demanded by a page fault are also brought in. Prepaging exploits the characteristics of rotational devices such as disks: if the pages of a process are stored contiguously, it is likely to be efficient to bring a bunch of them in at one time.

Placement Policy The placement policy determines where in main memory the pieces of a process are to reside. In a pure segmentation scheme this can be first-fit, best-fit, and so on. For paging or combined systems, placement is irrelevant, because the address translation hardware and the main memory access hardware perform their functions with equal efficiency for any frame. For a NUMA (nonuniform memory access) multiprocessor, where the distributed shared memory can be accessed by any processor, placement does matter: performance depends on the physical distance between the memory and the processor.

Replacement Policy Probably the most important aspect of memory management, the replacement policy determines which page or piece of data is to be replaced. The objective is to remove the piece least likely to be accessed in the near future. Because the principle of locality implies a high correlation between recent and near-future references, most policies predict future behavior on the basis of past behavior. Note also that the more sophisticated and elaborate the policy, the greater its hardware and software overhead. Three interrelated concepts are involved: 1. How many page frames are to be allocated to an active process. 2. Whether the set of pages considered for replacement should be confined to that particular process or include all page frames in main memory. 3. Among the selected set of pages considered for replacement, which particular page should be replaced. The first two are referred to as resident set management; the last deals with the actual replacement policy. One restriction on the replacement policy is that some pages cannot be replaced because they are locked, for example by the kernel.

Replacement Policy The following are the typical replacement algorithms:
- Optimal
- Least Recently Used (LRU)
- First In First Out (FIFO)
- Clock

Optimal This policy replaces the page for which the next reference is furthest in the future. Is it practical? It is not, because future references cannot be known to the operating system. However, it can be used as a standard against which other algorithms are judged.

Least Recently Used This policy selects for replacement the page that has not been referenced for the longest time. By the principle of locality, this should be the page least likely to be referenced in the near future, and LRU performs nearly as well as the optimal policy. The problem is its implementation: each reference, to both instructions and data, would have to be tagged with a time, and the overhead of doing so is much too large.
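An LRU policy can be sketched in software to see how it behaves on a reference string. This is an illustration of the policy, not the tagging hardware the text says is impractical; the reference string and frame count below are example values.

```python
from collections import OrderedDict

def lru_faults(refs, frames):
    """Count page faults for an LRU policy with the given number of frames."""
    resident = OrderedDict()   # resident pages, least recently used first
    faults = 0
    for page in refs:
        if page in resident:
            resident.move_to_end(page)        # referenced: now most recent
        else:
            faults += 1                       # page fault
            if len(resident) == frames:
                resident.popitem(last=False)  # evict least recently used page
            resident[page] = None
    return faults

# Example reference string with 3 frames: 7 faults (3 cold-start + 4 replacements)
print(lru_faults([2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2], 3))
```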

First In First Out (FIFO) This policy treats the page frames allocated to a process as a circular buffer and removes pages in round-robin style. The main idea is that the page that has been in memory the longest may no longer be of use, but this assumption does not always hold. A pointer circles through the buffer and selects the page for replacement. FIFO is very simple to implement.

Clock This policy uses a bit called the use bit to select a page for replacement:
- The use bit is set to 1 when a page is first loaded into memory and whenever the page is referenced.
- The set of frames is structured as a circular buffer.
- When a page is replaced, the pointer is set to the next frame in the buffer.
- To choose a victim, the operating system scans the buffer looking for a frame with use bit 0; if it finds one, that frame is replaced.
- Each time it encounters a frame with use bit 1, it resets that bit to 0 and proceeds to the next frame.
- If after a full circle it has found no frame with use bit 0, all bits have been cleared, and it replaces the frame at its original starting position.
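The clock steps above can be sketched directly: a circular buffer of frames, each with a use bit, and a hand that advances past cleared bits. The reference string and frame count are example values.

```python
def clock_faults(refs, nframes):
    """Count page faults for the clock policy with nframes frames."""
    frames = [None] * nframes   # resident pages (circular buffer)
    use = [0] * nframes         # use bits, one per frame
    hand = 0                    # the clock pointer
    faults = 0
    for page in refs:
        if page in frames:
            use[frames.index(page)] = 1      # referenced: set use bit
            continue
        faults += 1                          # page fault: find a victim
        while True:
            if frames[hand] is None or use[hand] == 0:
                frames[hand] = page          # replace this frame
                use[hand] = 1
                hand = (hand + 1) % nframes  # pointer moves past the new page
                break
            use[hand] = 0                    # second chance: clear and move on
            hand = (hand + 1) % nframes
    return faults

# Same example string as for LRU, 3 frames: 8 faults
print(clock_faults([2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2], 3))
```

On this string the clock policy takes one more fault than LRU (8 vs 7), illustrating the text's point that clock approximates LRU at a fraction of the bookkeeping cost.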

Clocking Mechanism [Figure: the circular buffer of frames with use bits, showing the pointer position before and after a replacement.]

Comparison of Policies [Figure: behavior of the optimal, LRU, FIFO, and clock policies on the same reference string with a fixed frame allocation.]

Page Replacement Behavior [Figure: page fault behavior of the replacement policies as a function of the number of frames allocated.]

Improving Clock The clock and LRU policies behave somewhat similarly, and in fact the clock policy can be improved by using additional bits, for example incorporating the modified bit along with the use bit. Each frame then falls into one of four categories: 1. Not accessed recently, not modified (u = 0, m = 0) 2. Accessed recently, not modified (u = 1, m = 0) 3. Not accessed recently, modified (u = 0, m = 1) 4. Accessed recently, modified (u = 1, m = 1)

Modified Clock Scheme a. Beginning at the current pointer position, scan for a frame in category 1 (u = 0, m = 0). If one is found, replace it; make no changes to any use bits while scanning. b. If step a fails, scan again looking for a frame in category 3 (u = 0, m = 1), setting the use bit to 0 for each frame passed over. If such a frame is found, replace it. c. If step b fails, the pointer is back at its original position; repeat step a and, if necessary, step b. This time a frame will be found.
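Steps a through c can be sketched as a victim-selection function over (use, modify) bit pairs. The frame contents and bit settings below are illustrative, not from any real system.

```python
def choose_victim(frames, hand):
    """frames: list of dicts with 'use' and 'mod' bits. Returns the victim index."""
    n = len(frames)
    while True:
        # Step a: look for (u=0, m=0), changing no bits while scanning.
        for i in range(n):
            j = (hand + i) % n
            if frames[j]["use"] == 0 and frames[j]["mod"] == 0:
                return j
        # Step b: look for (u=0, m=1), clearing use bits as we pass over frames.
        for i in range(n):
            j = (hand + i) % n
            if frames[j]["use"] == 0 and frames[j]["mod"] == 1:
                return j
            frames[j]["use"] = 0
        # Step c: all use bits are now 0; loop back and repeat steps a and b.

# No (0,0) frame exists, so step b picks the unaccessed-but-modified frame 2
frames = [{"use": 1, "mod": 0}, {"use": 1, "mod": 1}, {"use": 0, "mod": 1}]
print(choose_victim(frames, 0))
```

The scheme prefers an unmodified victim because replacing it needs no write-back, which is the same cost argument made for page buffering below.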

Improved Clock Scheme [Figure: example operation of the improved clock scheme on frames labeled with their use and modify bits.]

Page Buffering LRU and clock schemes perform better than FIFO-like schemes, but obviously at a cost in overhead and complexity. In addition, replacing a modified page has a greater impact on performance than replacing an unmodified one. Things can be improved using a page buffering scheme:
- The page replacement policy itself is simple FIFO.
- Two lists are used: a free page list for unmodified pages and a modified page list for modified pages.
- A replaced page is in fact not moved about in main memory; instead, its entry is removed from the page table and attached to one of these lists.
- These lists behave like a cache: a page on either list that is referenced again can be returned to the resident set at little cost.
- The modified page list also allows modified pages to be written out in clusters, which reduces the number of I/O operations and improves performance.
- Another factor that affects performance is the cache size.

Resident Set Management 1. Resident Set Size The operating system has to decide how many frames to allocate to a particular process:
- The smaller the memory allocated to each process, the more processes can be present in main memory, giving a high probability of finding at least one ready process.
- Too few frames for a process will, despite the principle of locality, generate more page faults.
- Beyond a certain size, additional frames produce no noticeable improvement in the page fault rate.
Two strategies deal with these factors:
- Fixed Allocation: a fixed number of frames is allocated to a process at creation time, possibly based on the process type (application, interactive, or batch); on a page fault, one of the pages of that same process is replaced.
- Variable Allocation: the number of frames allocated is not fixed but can be increased or decreased based on the process's behavior. This is an attractive scheme, but it needs operating system support to assess the behavior of each process, which has overhead.

Resident Set Management 2. Replacement Scope The scope can be global or local; both are activated by a page fault:
- Local Replacement Scope: selects the page only from the process that caused the page fault.
- Global Replacement Scope: considers all unlocked pages in main memory as candidates for replacement, regardless of which process caused the page fault.
A local replacement policy seems simpler to implement, but there is no evidence that local policies behave better than global ones. There is a relationship between replacement scope and resident set management: fixed allocation implies a local replacement policy, while variable allocation can be combined with either scope.
- Fixed Allocation, Local Replacement: a fixed number of frames per process; the candidate page for replacement is chosen from that same process.
- Variable Allocation, Local Replacement: flexible allocation of frames; the candidate page for replacement is still chosen from that same process.
- Variable Allocation, Global Replacement: the candidate page may be any unlocked frame in main memory, which results in a change in the size of some process's resident set.
- Fixed Allocation, Global Replacement: not possible.

Resident Set Management Fixed Allocation, Local Scope This policy has two shortcomings:
- If the allocation tends to be too small, the page fault rate is high, creating a performance bottleneck.
- If a process is given more frames than it really needs, fewer processes fit in main memory and the processor may remain idle.
Variable Allocation, Global Scope This combination is simple and has been adopted in many systems. Typically the operating system keeps a list of free frames, and when a page fault occurs a free frame is allocated and attached to the faulting process. This means a process having more page faults grows with time. But when all the free frames are used, a replacement policy is required, and since the selection is global, any unlocked frame is a candidate for replacement. It is very difficult to devise such a policy that gives an optimal outcome.
Variable Allocation, Local Scope This policy remedies the above case:
- Frames are allocated based on the type of application or some other criteria, with demand paging or prepaging used to fill the allocation.
- On a page fault, the replacement frame is selected from the very process that caused the fault.
- Periodically, the number of frames allocated to a process is reevaluated and changed; this is not simple, but it can yield good performance.

Cleaning Policy A cleaning policy is the opposite of a fetch policy: it determines when a page should be written out once it has been modified.
- Demand Cleaning: a modified page is written out only once it has been selected for replacement. This gives the minimum number of page writes, but a faulting process must wait for a double page transfer (one to write the old page back and one to read the new page in).
- Precleaning: modified pages are written out before their frames are needed; a page is written out but still remains in main memory. The drawback is that time is spent writing these pages while, in the meantime, a majority of them may be modified again.
A better approach uses the page buffering technique with its two lists of modified and unmodified pages. Replaced pages are linked onto these lists; periodically, pages on the modified list are written back in batches and moved to the unmodified list. A page on the unmodified list is either reclaimed if it is referenced again or lost when its frame is assigned to another page.

Load Control This is concerned with the number of processes resident in main memory, referred to as the multiprogramming level. Load control is important because it determines the effectiveness of the memory management system:
- Too few processes means a greater probability that all of them are suspended at once, leaving the processor underutilized.
- Too many processes means small resident sets and more page faults.
A plot of processor utilization against multiprogramming level shows that, as the level increases from a small value, utilization rises, but beyond a certain level it starts decreasing, indicating more and more page faults.

Process Suspension The multiprogramming level can be reduced by suspending (swapping out) one or more of the currently resident processes. The possible choices are:
- Lowest-Priority Process: implements a scheduling policy decision.
- Faulting Process: the faulting process evidently does not have its working set resident, so suspending it saves a page replacement and an I/O operation.
- Last Process Activated: the process least likely to have its working set resident.
- Process with the Smallest Resident Set: requires the least future effort to reload, but penalizes programs with strong locality.
- Largest Process: obtains the most free frames in an overcommitted memory, making additional deactivations unlikely soon.
- Process with the Largest Remaining Execution Window: approximates a shortest-processing-time-first scheduling policy.
It should be noted that which policy is best really depends on the design of the rest of the operating system and on the characteristics of the programs being executed.

UNIX & SOLARIS MM The paging system uses the following data structures:
- Page Table: one per process, with one entry for each page of the process's virtual memory.
- Disk Block Descriptor: associated with each page; describes the disk copy of the virtual page.
- Page Frame Data Table: describes each frame of real memory, indexed by frame number.
- Swap-Use Table: one per swap device, with one entry for each page on the device.

UNIX & SOLARIS MM [Figure: formats of the data structures. The page table entry holds the page frame number and the age, copy-on-write, modify, referenced, valid, and protect fields; the disk block descriptor holds the swap device number, device block number, and type of storage; the page frame data table entry holds the page state, reference count, logical device, block number, and a pointer into the table; the swap-use table entry holds a reference count and the page/storage unit number.]


Page Replacement UNIX uses a refined clock policy algorithm, the two-handed clock, in which two pointers sweep through the frames. Its operation is based on two parameters, both initially set to default values:
- Scan rate: the rate at which the two hands move through the page list.
- Handspread: the gap between the front hand and the back hand.

LINUX MM Linux shares many characteristics of UNIX, but it has its own features for memory management. It uses a three-level page table structure consisting of the following types of tables:
- Page Directory: an active process has a single page directory, one page in size; each entry points to one page of the page middle directory. The page directory must be in main memory for an active process.
- Page Middle Directory: may span multiple pages; each entry points to one page table.
- Page Table: may also span multiple pages; each entry refers to one virtual page of the process.
The Linux page table structure is platform independent. For page allocation Linux uses the buddy system, and for page replacement it uses a clock scheme in which the single-bit use field is replaced by an 8-bit age field.

Windows 2000 MM Each user process sees a 4 GB address space, but by default it is divided so that each user process has 2 GB and the remaining 2 GB is reserved for the operating system.
W2K Paging: when a process is created, it can in principle make use of the whole user space, which is divided into pages, any of which can be brought into main memory. A page can be in any of the following states:
- Available: pages not currently used by this process.
- Reserved: a set of contiguous pages that the virtual memory manager sets aside for a process but that does not count against the process's quota of allocation until it is used. When the process needs to write to memory, some of the reserved memory can be committed to it.
- Committed: pages for which the virtual memory manager has set aside space in its paging file.
The reserved and committed memory concept is useful because, first, it minimizes the amount of disk space set aside for a particular process, keeping that disk space free for other uses, and second, it enables a thread or process to declare an amount of memory that can quickly be allocated as desired.
The resident set management scheme of W2K is variable allocation, local scope. Working sets of processes are adjusted as follows:
- When available memory is plentiful, the virtual memory manager allows the resident sets to grow: when a page fault occurs, the new page is brought in and no page is swapped out.
- When memory is hard to find, it swaps less recently used pages out of the working sets.