Virtual Memory. Chapter 8

Virtual Memory: Chapter 8

Characteristics of Paging and Segmentation
Memory references are dynamically translated into physical addresses at run time; e.g., a process may be swapped in and out of main memory such that it occupies different regions. A process may be broken up into pieces (pages or segments) that do not need to be located contiguously in main memory. Insight: not all pieces of a process need to be loaded in main memory during execution. Computation may proceed for some time in a piece located in main memory (think about locality and caching!). This is what the technology of virtual memory enables.

Process Execution
The OS can bring into main memory only a few pieces (not all) of the program. Each page/segment table entry now has a present bit that is set only if the corresponding piece is in main memory. The resident set is the portion of the process that is in main memory. What happens when a referenced piece is not in main memory? An interrupt (memory fault) is generated, and the OS goes to secondary memory to get that piece.

Process Execution (cont.)
The OS places the process in the Blocked state and issues a disk I/O read request to bring the referenced piece into main memory. Another process is dispatched to run while the disk I/O takes place. An interrupt is issued when the disk I/O completes; this causes the OS to place the affected process in the Ready state. Recall: what is this procedure called?

Advantages of Partial Loading
More processes can be maintained in main memory, since we load only some of the pieces of each process. With more processes in main memory, it is more likely that some process will be in the Ready state at any given time. A process can now execute even if it is larger than main memory! It is even possible to use more bits for logical addresses than are needed for addressing the physical memory.

Virtual Memory: as large as you wish!
Example: 16 bits are needed to address a physical memory of 64 KB. Let's use a page size of 1 KB, so that 10 bits are needed for offsets within a page. For the page number part of a logical address, we may use a number of bits larger than 6, say 16. The memory referenced by a logical address is called virtual memory and is maintained on secondary memory (e.g., disk): a program needs to be completely loaded into the virtual memory on disk before execution, and pieces are brought into main memory only when needed.
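The page-number/offset split from the example can be sketched in a few lines (a minimal illustration assuming the 1 KB pages above; the function name is ours):

```python
def split_address(vaddr, offset_bits=10):
    """Split a logical address into (page number, offset).

    With 1 KB pages (offset_bits = 10), the low 10 bits are the offset
    within the page; the remaining high bits form the page number, which
    may use more bits than are needed to address physical frames.
    """
    offset = vaddr & ((1 << offset_bits) - 1)
    page = vaddr >> offset_bits
    return page, offset

# Logical address 5123 = 5 * 1024 + 3, i.e., page 5, offset 3
print(split_address(5123))  # -> (5, 3)
```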

Virtual Memory: as large as you wish! (cont.)
For a 64-bit machine, theoretically: 2^64 bytes, about 16.8 million terabytes!!! In practice: your computer case is a little too small to fit all that RAM.

Support Needed for VM
For better performance, the file system is often bypassed and virtual memory is stored in a special area of the disk called the swap space. Memory-management hardware must support paging and/or segmentation, and the OS must be able to manage the movement of pages and/or segments between secondary memory and main memory.

Possibility of Thrashing
To accommodate as many processes as possible, only a few pieces of each process are maintained in main memory. But main memory may be full: when the OS brings one piece in, it must swap another piece out. The OS must not swap out a piece of a process just before that piece is needed. If it does this too often, the result is thrashing: the processor spends most of its time swapping pieces rather than executing user instructions.

Locality and Virtual Memory
Principle of locality of reference: memory references within a process tend to cluster. Hence, only a few pieces of a process will be needed over a short period of time, and it is possible to make intelligent guesses about which pieces will be needed in the future. This suggests that virtual memory can work efficiently (i.e., thrashing should not occur too often).

Memory Management
Memory management depends on whether the hardware supports paging, segmentation, or both. Pure segmentation systems are rare; segments are usually paged, so memory-management issues reduce to those of paging. We thus concentrate on issues associated with paging. To achieve good performance, the fundamental design consideration is to decrease the page fault rate. Page fault: the referenced page is not in main memory.

Paging (No Virtual Memory)

Paging (with Virtual Memory)
Each page table entry (PTE) contains a present bit to indicate whether the page is in main memory or not. If it is in main memory, the entry contains the frame number of the corresponding page; if it is not, the entry may contain the address of that page on disk.

Paging (with Virtual Memory) (cont.)
A modified bit indicates whether the page has been altered since it was last loaded into main memory. If no change has been made, the page does not have to be written back to disk when it needs to be swapped out. Other control bits may be present if protection is managed at the page level: a read-only/read-write bit, and a protection-level bit distinguishing kernel pages from user pages (more bits are used when the processor supports more than two protection levels).
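These control bits can be pictured as flags packed into a PTE alongside the frame number. The following sketch uses our own toy encoding (not any real processor's layout), with the frame number in the high bits and the control bits in the low bits:

```python
# Hypothetical PTE layout (illustration only): low 4 bits are control
# flags; the remaining high bits hold the frame number.
PRESENT   = 1 << 0   # page is in main memory
MODIFIED  = 1 << 1   # page altered since it was last loaded
READ_ONLY = 1 << 2   # writes to this page should fault
KERNEL    = 1 << 3   # kernel page vs. user page
FLAG_BITS = 4

def make_pte(frame, *flags):
    """Pack a frame number and any number of flag bits into one PTE."""
    pte = frame << FLAG_BITS
    for flag in flags:
        pte |= flag
    return pte

def pte_frame(pte):
    """Extract the frame number from a PTE."""
    return pte >> FLAG_BITS

pte = make_pte(42, PRESENT, MODIFIED)
print(pte_frame(pte), bool(pte & PRESENT), bool(pte & READ_ONLY))
# -> 42 True False
```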

Page Table Structure
Page tables vary in length from process to process, so a single register holds the starting physical address of the page table of the currently running process. The page number field is typically larger than the frame number field. Why?

Page Table Structure (cont.)
Let's look at the size of a page table. For example, take a 4 GB (2^32-byte) program with pages of size 1 KB (2^10 bytes). How many entries do we need in the page table? 2^(32-10) = 2^22, about 4 million. Is this result acceptable? Sharing pages can alleviate this problem a bit.

Page Table Structure (cont.)
Most computer systems support a very large virtual address space: 32 to 64 bits are used for logical addresses. Another example: if (only) 32 bits are used with 4 KB (2^12) pages, a page table may have 2^20 entries. The entire page table may take up too much main memory; hence, page tables are often themselves stored in virtual memory. When a process is running, part of its page table must be in main memory (including the page table entry of the currently executing page).

Translation Lookaside Buffer
Because (and if) the page table is in main memory, each virtual memory reference causes at least two physical memory accesses: one to fetch the page table entry and one to fetch the data. To overcome this overhead, a special buffer of page table entries is set up, called the TLB (Translation Lookaside Buffer). It contains the page table entries that have been most recently used, and it works much like a main-memory cache.

Translation Lookaside Buffer (cont.)
Given a logical address, the processor first examines the TLB. If the page table entry is present (a hit), the frame number is retrieved and the real (physical) address is computed. If the page table entry is not in the TLB (a miss), the page number is used to index the process's page table (in main or virtual memory): if the present bit is set, the corresponding frame is accessed; if not, a page fault is issued to bring the referenced page into main memory from virtual memory. The TLB is then updated to include the new page table entry.
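This lookup sequence can be sketched in software (a simplified model: Python dicts stand in for the hardware TLB and the page table, and `PageFault` is our own illustrative exception):

```python
PAGE_SIZE = 1024

class PageFault(Exception):
    """Raised when the referenced page is not in main memory."""

def translate(vaddr, tlb, page_table):
    """Translate a logical address to a physical address via the TLB."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn in tlb:                       # TLB hit: frame number retrieved directly
        frame = tlb[vpn]
    else:                                # TLB miss: index the page table
        entry = page_table.get(vpn)
        if entry is None or not entry["present"]:
            raise PageFault(vpn)         # bring the page in from virtual memory
        frame = entry["frame"]
        tlb[vpn] = frame                 # update the TLB with the new entry
    return frame * PAGE_SIZE + offset

tlb = {}
page_table = {0: {"present": True, "frame": 7}}
print(translate(3, tlb, page_table))    # miss, then table walk: 7 * 1024 + 3 = 7171
print(translate(5, tlb, page_table))    # TLB hit this time: 7 * 1024 + 5 = 7173
```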


TLB: further comments
The TLB uses associative mapping hardware to search all TLB entries simultaneously for a match on the page number. The pages in the TLB are guaranteed to be in main memory. The CPU consults two types of caches on each virtual memory reference: first the TLB, to convert the logical address to the physical address; once the physical address is formed, the CPU then looks in the (memory) cache for the referenced word.

Multilevel Page Tables
A page table will generally itself require several pages to be stored. One solution is to organize page tables into a multilevel hierarchy. When two levels are used, the page number is split into two numbers, p1 and p2. p1 indexes the outer page table (directory or root page table) in main memory, whose entries point to a page containing page table entries; that page is indexed by p2.
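A two-level walk can be sketched as follows. The field widths are assumptions for illustration (a 32-bit address with 12 offset bits and 10 bits for each of p1 and p2), and the dict-based tables are our own stand-ins:

```python
OFFSET_BITS = 12   # 4 KB pages
P2_BITS = 10       # each inner page table holds 2**10 entries

def split_two_level(vaddr):
    """Split a virtual address into (p1, p2, offset)."""
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    p2 = (vaddr >> OFFSET_BITS) & ((1 << P2_BITS) - 1)
    p1 = vaddr >> (OFFSET_BITS + P2_BITS)
    return p1, p2, offset

def walk(vaddr, root_table):
    """Two-level walk: p1 indexes the root table, p2 the inner table."""
    p1, p2, offset = split_two_level(vaddr)
    inner = root_table[p1]      # a page full of page table entries
    frame = inner[p2]
    return frame * (1 << OFFSET_BITS) + offset

root = {3: {5: 9}}              # p1=3 -> inner table mapping p2=5 -> frame 9
vaddr = (3 << 22) | (5 << 12) | 7
print(walk(vaddr, root))        # -> 9 * 4096 + 7 = 36871
```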


Inverted Page Table
Another solution to the problem of maintaining large page tables is to use an Inverted Page Table (IPT). There is only one IPT entry per physical frame (rather than one per virtual page), which greatly reduces the amount of memory needed for page tables, and we generally have only one IPT for the whole system. The 1st entry of the IPT is for frame #1, ..., the n-th entry is for frame #n, and each entry contains the virtual page number mapped to that frame. Thus the table is "inverted".

Inverted Page Table (cont.)
For better performance, hashing is used to obtain a hash table entry which points to an IPT entry. A page fault occurs if no match is found; chaining is used to manage hashing collisions/overflow.
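A hashed IPT lookup with chaining might look like the following sketch (our own simplified structures: one entry per frame holding the resident virtual page number, plus a hash table whose buckets are chains of frame indices):

```python
NUM_FRAMES = 8

# ipt[frame] holds the virtual page number resident in that frame
ipt = [None] * NUM_FRAMES
# buckets[h] is a chain of frame indices whose pages hash to h
buckets = [[] for _ in range(NUM_FRAMES)]

def insert(vpn, frame):
    """Record that virtual page vpn now occupies the given frame."""
    ipt[frame] = vpn
    buckets[hash(vpn) % NUM_FRAMES].append(frame)

def lookup(vpn):
    """Return the frame holding vpn, or None (a page fault)."""
    for frame in buckets[hash(vpn) % NUM_FRAMES]:  # follow the chain
        if ipt[frame] == vpn:
            return frame
    return None  # no match found in the chain: page fault

insert(0x1234, 3)
insert(0x1234 + NUM_FRAMES, 5)         # hashes to the same bucket: chained
print(lookup(0x1234), lookup(0x9999))  # -> 3 None
```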


Fetch Policy
Determines when a page should be brought into main memory. Two common policies: Demand paging brings a page into main memory only when a reference is made to a location on that page (i.e., paging on demand only); there are many page faults when a process first starts, but the rate should decrease as more pages are brought in. Prepaging brings in more pages than needed; locality of reference suggests that it is more efficient to bring in pages that reside contiguously on the (slow) disk. Its effectiveness is not definitely established: the extra pages are often not referenced.

Placement Policy (Recall)
Determines where in real memory a process piece resides. For dynamic-partitioning systems, placement is a critical design consideration: first-fit, next-fit, and best-fit are possible choices. For paging, the chosen frame location is irrelevant, since all memory frames are equivalent and of the same size.

Replacement Policy
What happens if more pages are needed and there aren't any free frames available? There are several possibilities: Adjust the memory used by I/O buffering, etc., to free up some frames for user processes. Put the process requesting more pages into a wait queue until some free frames become available. Swap some (entire) processes out of memory completely, freeing up their page frames. Or find some page in memory that isn't being used right now, and swap only that page out to disk, freeing up a frame that can be allocated to the requesting process.

Replacement Policy (cont.)
Replacement policy deals with the selection of a page in main memory to be replaced when a new page is brought in. This occurs whenever main memory is full (no free frame available), and sometimes during memory clean-up. It occurs often, since the OS tries to bring into main memory as many processes as it can to increase the multiprogramming level. You may roughly view replacement as the removal of a page from main memory.

Replacement Policy (cont.)
Not all pages in main memory can be selected for replacement. Some frames are locked and thus cannot be paged out: the majority of the kernel is held in locked frames, as are key control structures and I/O buffers. The OS might decide that the set of pages considered for replacement should be drawn from the set of pages in unlocked frames, or limited to the pages of a process that is less likely to cause a future page fault.

Exams
[Histogram of everybody's midterm grades omitted.] Course website is up-to-date now. Final exam: Wednesday 1/1, 7:00 pm - 9:00 pm (evening), Berthoud Hall 41 (classroom).

Replacement Policy (cont.)
The decision about which pages to consider for replacement (i.e., excluding locked pages) is related to the resident set management strategy: how many page frames are to be allocated to each process? (We will discuss this later.) Whatever the set of pages considered for replacement, the replacement policy consists of the algorithms that choose a page within that set. The page to be replaced is often called the victim page.

Replacement Algorithms: The Optimal Policy (OPT)
The optimal policy selects for replacement the page for which the time to the next reference is the longest. It produces the fewest number of page faults, but it is impossible to implement (it requires knowing the future); it is useful as a yardstick against which practical algorithms are measured.
Worked example (tables omitted): assume a fixed allocation of three frames for a process whose execution references five distinct pages. At each page fault we check the time of the next reference for each resident page and replace the one referenced furthest in the future; if there is a tie, we just replace the page at the lower address.
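Although OPT cannot be implemented online, it is easy to simulate offline once the full reference stream is known. A minimal sketch (function name ours; it counts all faults, including the initial loads):

```python
def opt_faults(stream, num_frames):
    """Simulate OPT: on a fault with full frames, evict the resident
    page whose next reference lies furthest in the future."""
    frames, faults = [], 0
    for i, page in enumerate(stream):
        if page in frames:
            continue                     # hit: nothing to do
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)          # free frame available
        else:
            def next_use(p):
                future = stream[i + 1:]
                # pages never referenced again sort furthest away
                return future.index(p) if p in future else len(stream)
            victim = max(frames, key=next_use)
            frames[frames.index(victim)] = page
    return faults

# A 12-reference stream over 5 distinct pages, with 3 frames:
print(opt_faults([2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2], 3))  # -> 6
```

Six faults total: three initial loads plus three replacements, which no other policy on this stream can beat.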

Note on counting page faults
When main memory is empty, each new page we bring in is the result of a page fault. For the purpose of comparing the different algorithms, it is not necessary to count these initial page faults, because their number is the same for all algorithms. We assume demand paging as the fetch policy.

Least Recently Used (LRU) Policy
Replaces the page that has not been referenced for the longest time. By the principle of locality, this should be the page least likely to be referenced in the near future, and LRU performs nearly as well as the optimal policy.
Worked example (tables omitted): at each page fault we check the time of the previous reference for each resident page and replace the one referenced longest ago.

Implementation of LRU
Each page could be tagged (in the page table entry) with the time of its most recent reference. The LRU page for replacement is then the one with the smallest tagged time value, which must be searched for at each page fault. This would require expensive hardware and a great deal of overhead; consequently, very few computer systems provide sufficient hardware support for a true LRU replacement policy.
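The time-tag scheme just described is straightforward to simulate in software (sketch; function name ours, counting all faults including the initial loads):

```python
def lru_faults(stream, num_frames):
    """Simulate LRU: tag each resident page with the time of its most
    recent reference and evict the page with the smallest tag."""
    last_use, faults = {}, 0          # page -> time of last reference
    for t, page in enumerate(stream):
        if page not in last_use:
            faults += 1
            if len(last_use) == num_frames:
                victim = min(last_use, key=last_use.get)  # smallest tag
                del last_use[victim]
        last_use[page] = t            # the tag is updated on every reference
    return faults

# Same 12-reference stream over 5 distinct pages, with 3 frames:
print(lru_faults([2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2], 3))  # -> 7
```

Seven faults total: one more than OPT on this stream, consistent with "nearly as well as the optimal policy".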

First-In-First-Out (FIFO) Policy
Treats the page frames allocated to a process as a circular buffer. When the buffer is full, the oldest page is replaced: hence first-in, first-out. It is simple to implement, requiring only a pointer that circles through the page frames of the process. The oldest page is NOT the same as the LRU page: a frequently used page is often among the oldest, so it will be repeatedly paged out by FIFO.
Worked example (tables omitted): at each page fault we replace the page at the head of the FIFO queue.

First-In-First-Out (FIFO) Policy (cont.)
FIFO performs the worst of these policies (and has a further issue, discussed next). LRU recognizes that some pages are referenced more frequently than others, but FIFO does not.

What is the relationship between the page fault ratio and the number of frames? In other words, when the number of frames increases, how does the page fault ratio change?

Belady's Anomaly
Definition: increasing the number of frames available can increase the number of page faults that occur! Also called the FIFO anomaly, as among the policies considered here only FIFO exhibits it. Why: because the most frequently used pages can (and often do) become the oldest pages in memory, and thus have a higher probability of being swapped out; FIFO cannot recognize page-use frequency.

Belady's Anomaly (example, tables omitted)
With the smaller number of frames: number of page faults = 9. With the larger number of frames: number of page faults = 10.
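The anomaly is easy to reproduce with a small FIFO simulator (sketch; function name ours). On the classic reference string 1 2 3 4 1 2 5 1 2 3 4 5, FIFO incurs 9 faults with three frames but 10 with four:

```python
from collections import deque

def fifo_faults(stream, num_frames):
    """Simulate FIFO: evict the page that has been resident longest."""
    queue, faults = deque(), 0
    for page in stream:
        if page in queue:
            continue              # hit: FIFO order is NOT updated
        faults += 1
        if len(queue) == num_frames:
            queue.popleft()       # oldest page out
        queue.append(page)
    return faults

stream = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(stream, 3))  # -> 9
print(fifo_faults(stream, 4))  # -> 10: more frames, more faults!
```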

Resident Set Size
The OS must decide how many page frames to allocate to a process: a large page fault rate results if too few frames are allocated, and a low multiprogramming level if too many frames are allocated. Fixed-allocation policy: allocates a fixed number of frames that remains constant over time; the number is determined at load time. Variable-allocation policy: the number of frames allocated to a process varies over time; it may increase if the page fault rate is high and decrease if the page fault rate is very low; this requires more OS overhead to assess the behavior of active processes.

Replacement Scope
The replacement scope is the set of frames to be considered for replacement when a page fault occurs. A local replacement policy chooses only among the frames that are allocated to the process that issued the page fault. A global replacement policy treats any unlocked frame as a candidate for replacement.

Allocation and Replacement
Consider the possible combinations of replacement scope and resident set size. [Table omitted.]

The Working Set Strategy
The variable-allocation method with local scope is based on the assumption of locality of reference. The working set of a process at time t, W(D, t), is the set of pages that have been referenced in the last D virtual time units. Virtual time = time elapsed while the process was executing (e.g., the number of instructions executed); D is a window of time. At any t, W(D, t) is non-decreasing in D. W(D, t) is an approximation of the program's locality.
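Under these definitions, W(D, t) over a reference stream is just the set of distinct pages in the most recent window of D references. A minimal sketch (function name ours, measuring virtual time in references):

```python
def working_set(refs, D, t):
    """W(D, t): the set of pages referenced in the last D virtual time
    units, where virtual time is the index t into the reference list."""
    window = refs[max(0, t - D + 1): t + 1]
    return set(window)

refs = [1, 2, 1, 3, 5, 3, 3, 3]
print(working_set(refs, 3, 4))  # refs[2:5] = [1, 3, 5] -> {1, 3, 5}
print(working_set(refs, 3, 7))  # refs[5:8] = [3, 3, 3] -> {3}
```

Note how the second call shows the working set shrinking once the process settles into a tight locality, and how widening D can only grow the set (non-decreasing in D).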

The Working Set Strategy (cont.)
The working set of a process first grows when the process starts executing, then stabilizes by the principle of locality. It grows again when the process enters a new locality (transition period), up to a point where the working set contains pages from two localities; it then shrinks after sufficient time has been spent in the new locality.

The Working Set Strategy (cont.)
The working set concept suggests the following strategy to determine the resident set size: monitor the working set of each process, and when the resident set of a process is smaller than its working set, allocate more frames to it. Practical problems with this strategy: measuring the working set of each process is impractical, and the optimal value of D is unknown and time-varying. Solution: rather than monitoring the working set, monitor the page fault rate!

The Page-Fault Frequency (PFF) Strategy
Define an upper bound U and a lower bound L for the page fault rate (PFR). Allocate more frames to a process if its PFR is higher than U; allocate fewer frames if its PFR is lower than L. The resident set size should then stay close to the working set size W. We suspend the process if its PFR > U and no more free frames are available.
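The PFF rule can be sketched as a tiny controller (our own function; U and L are the bounds from the text, and adjusting by one frame per decision is an assumption for illustration):

```python
def adjust_resident_set(pfr, frames, upper, lower):
    """Page-fault-frequency strategy: grow the allocation when the fault
    rate exceeds the upper bound, shrink it when the rate falls below the
    lower bound, and leave it unchanged in between."""
    if pfr > upper:
        return frames + 1           # too many faults: add a frame
    if pfr < lower:
        return max(1, frames - 1)   # very few faults: release a frame
    return frames                   # within bounds: leave allocation alone

print(adjust_resident_set(0.30, 8, upper=0.20, lower=0.05))  # -> 9
print(adjust_resident_set(0.01, 8, upper=0.20, lower=0.05))  # -> 7
print(adjust_resident_set(0.10, 8, upper=0.20, lower=0.05))  # -> 8
```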

Cleaning Policy
When should a modified page be written out to disk? Demand cleaning: a page is written out only when its frame has been selected for replacement. Precleaning: modified pages are written out before their frames are needed, so that they can be written in batches; but it makes little sense to write out many pages if the majority of them will be modified again before they are replaced.

Combined Segmentation and Paging
To combine their advantages, some processors and OSes page the segments. Several combinations exist; here is a simple one. Each process has one segment table and several page tables: one page table per segment. The virtual address consists of: a segment number, used to index the segment table, whose entry gives the starting address of the page table for that segment; a page number, used to index that page table to obtain the corresponding frame number; and an offset, used to locate the word within the frame.

Combined Segmentation and Paging (cont.)
The segment base is the physical address of the page table of that segment. Present and modified bits are needed only in page table entries; protection and sharing information most naturally resides in segment table entries.
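The three-part translation described above can be sketched as a toy model (the field widths and dict-based tables are our own assumptions for illustration):

```python
OFF_BITS = 12    # offset within a frame (4 KB)
PAGE_BITS = 8    # page number within a segment
# remaining high bits form the segment number

def seg_translate(vaddr, seg_table):
    """Segment number -> that segment's page table; page number ->
    frame number; then add the offset within the frame."""
    offset = vaddr & ((1 << OFF_BITS) - 1)
    page = (vaddr >> OFF_BITS) & ((1 << PAGE_BITS) - 1)
    seg = vaddr >> (OFF_BITS + PAGE_BITS)
    page_table = seg_table[seg]        # segment base: per-segment page table
    frame = page_table[page]
    return frame * (1 << OFF_BITS) + offset

seg_table = {1: {2: 6}}                # segment 1, page 2 -> frame 6
vaddr = (1 << 20) | (2 << 12) | 34
print(seg_translate(vaddr, seg_table)) # -> 6 * 4096 + 34 = 24610
```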


Homework Assignment 6
Chapter 8 Problems 8.1, 8.6, 8.10, and 8.17. Details are presented on the course website: http://inside.mines.edu/~hzhang/courses/csci44/assignment.html Deadline: 11/1/017 (Tuesday). Write down your full name clearly. Note: these are Problems, NOT Review Questions, in the textbook! Turn in hard-copy solutions before the beginning of class.