Chapter 3 Memory Management: Virtual Memory


Memory Management: Where we're going

Chapter 3 Memory Management: Virtual Memory. Understanding Operating Systems, Fourth Edition.

Disadvantages of early schemes:
- Required storing the entire program in memory
- Fragmentation
- Overhead due to relocation

The evolution of virtual memory helps to:
- Remove the restriction of storing programs contiguously
- Eliminate the need for the entire program to reside in memory during execution

First we have to cover some background material.

Virtual Memory Objectives. You will be able to describe:
- The basic functionality of the memory allocation methods covered in this chapter: paged, demand paging, segmented, and segmented/demand paged memory allocation
- The influence that these page allocation methods have had on virtual memory
- The difference between a first-in first-out page replacement policy, a least-recently-used page replacement policy, and a clock page replacement policy
- The mechanics of paging and how a memory allocation scheme determines which pages should be swapped out of memory
- The concept of the working set and how it is used in memory allocation schemes
- The impact that virtual memory had on multiprogramming
- Cache memory and its role in improving system response time

Paged Memory Allocation

(Figure: a paged memory allocation scheme for a job of 350 lines. The compiler divides the job into pages, the job is stored on six disk sectors, and the Memory Manager loads the pages into page frames of RAM, e.g. page frames 99, 100, and 123. Notice that pages are not required to be in adjacent blocks. This scheme works well when sectors, pages, and page frames are the same size.)

In this scheme the compiler divides the job into a number of pages. In this example the job can be broken into 6 pages. Even if the job only required 5-and-a-bit pages, it's allocated 6 pages (i.e. rounded up to the nearest whole number). It's subsequently stored on six sectors of the hard disk (assuming that the disk sector size matches the job's page size). Later the OS Memory Manager loads the job into 6 page frames of RAM. The OS will probably not be able to load the job into 6 contiguous page frames. Equally, it doesn't worry about them being in order; Page 5 could be loaded into memory before Page 1 if it was read off the hard disk first. This scheme works well if the page size, memory block size (page frame), and size of a disk section (sector, block) are all equal.

Before executing a program, the Memory Manager:
- Determines the number of pages in the program
- Locates enough empty page frames in main memory
- Loads all of the program's pages into them

Benefits and Problems

Memory usage is more efficient: an empty page frame can be used by any page of any job, and no compaction is required. However, we need to keep track of where each job's pages are in memory, and we still have the entire job in memory.

(Figure: the Memory Manager's tables for Job1, Job2, and Job3. The Job Table holds the address where Job1's PMT is in memory; the PMT at address 0x03096 cross-references the job's page numbers to page frame numbers; the Memory Map Table shows, e.g., Job1's page 2 held in page frame 0 at actual address 0x074.)

The Memory Manager requires three tables to keep track of a job's pages:

Job Table (JT) contains information about:
- The size of the job
- The memory location where its PMT is stored
Typically the JT entry is placed into a special register in the CPU by the OS when the job is selected to run. The memory-manager hardware then has the address of the PMT (held in RAM) for this job.

Page Map Table (PMT): the job assumes its pages are all in memory, stored from page 0 to page XX in an orderly manner. In reality the pages are stored in real RAM page frames, so the PMT is really a cross-reference between a page number and its corresponding page frame number. This is essentially an index into the MMT, which will tell us where the page really is.

Memory Map Table (MMT) contains:
- The location of each page frame; this could be stored in the MMT or simply calculated in hardware. The math is: memory address = page frame number * bytes per page frame
- Mapped/unmapped status (whether this page is in RAM or still on the hard disk)

Memory Map Table: Hardware Aside

The page mapping tables live in RAM, are accessed by the MMU, and are updated/modified by the OS. They describe the virtual-to-physical address mappings, and they can be extremely large. Consider 32-bit machines, which can address 4 GB of memory: a 2^32 = 4 GB address space with a 4-KB page size means about 1 million pages, i.e. a table with a million entries. Note: on the newer 64-bit machines the (e.g. Linux) kernel supports a vastly larger address space, on the order of millions of GB. Large, fast page mapping is a constraint on modern computers.
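The table-size arithmetic on this slide can be checked directly; a quick sketch using the 32-bit, 4-KB-page figures above:

```python
address_space = 2 ** 32          # 4 GB of addressable memory on a 32-bit machine
page_size = 4 * 1024             # 4-KB pages
entries = address_space // page_size
print(entries)                   # 1048576 pages, i.e. about 1 million table entries
```

With even a few bytes per entry, that is megabytes of table per process, which is why large, fast page mapping is such a hardware constraint.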

A brief look at the hardware: MMU and TLB (Tanenbaum section 4.3.3)

(Figure: internals of the MMU. An incoming virtual address is split into a page number and an offset within that page; 12 offset bits allow access to all 4096 bytes within a page. The page table entry holds a present/absent bit and the number of the page frame, which replaces the page number to form the physical address.)

Paged Memory Allocation (continued)

(Example: page size = 100; the first page contains the first 100 lines.)

The displacement (offset) of a line describes how far away the line is from the start of its page, and is used to locate that line within its page frame.

How to determine the page number and displacement of a line:
- Page number = the integer quotient from dividing the job space address by the page size
- Displacement = the remainder from that same division
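The quotient/remainder rule above is a single integer division; a minimal sketch, using the page size of 100 and the line-204 example from these slides:

```python
page_size = 100
line_number = 204                     # job space address of the line we want
page_number, displacement = divmod(line_number, page_size)
print(page_number, displacement)      # 2 4: line 204 is at offset 4 within page 2
```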

Paged Memory Allocation (continued)

The CPU executes line 203; how does it now find line 204 in RAM to execute it? But you said earlier that the pages could be anywhere in memory, so where is Page 2? Consult the Page Map Table for this job. (Worked division: page size 100, required line number 204 gives page number 2 and displacement 4.)

Steps to determine the exact location of a line in memory:
1. Determine the page number and displacement of the line
2. Refer to the job's PMT and find out which page frame contains the required page
3. Get the address of the beginning of the page frame by multiplying the page frame number by the page frame size
4. Add the displacement (calculated in step 1) to the starting address of the page frame

(Equivalently: refer to the job's PMT to find out where in memory that page frame starts, then add the displacement to that starting address.) The math here is actually carried out in hardware; the OS memory manager is responsible for maintaining the tables.

Exercise

A system has 1 MB of RAM with a page frame size of 512 bytes. Job 1 has 680 lines of code, one byte per instruction (or data value), and a 512-byte page size. The current line being executed by the CPU in Job 1 is line 25: LOAD R1, 518 (load the value at memory location 518 into register 1).

Job 1's Page Map Table:
Page no. 0 -> Page frame 3
Page no. 1 -> Page frame 5

Question: What address in RAM is memory location 518 mapped to?

Answer: Location 518 must be in the second page (512 bytes per page): page number = 518 / 512 = 1, and 518 is displaced by 6 from the start of page 1 (i.e. 518 - 512 = 6). Page number 1 is stored in RAM page frame 5. Page frame 5 starts at location 5 * 512 = 2560 (remember, page frame number 5 is the 6th page frame, spanning addresses 2560 to 3071). We must displace by 6, so it's at memory location 2566. Therefore memory location 2566 holds the value that must be stored in register 1.
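The exercise can be checked against the document's own formula (memory address = page frame number * bytes per page frame, plus the displacement). The PMT values below follow the worked answer, which places page 1 in frame 5; treat them as an assumption of this sketch:

```python
PAGE_SIZE = 512
pmt = {0: 3, 1: 5}            # page number -> page frame (assumed from the answer)

def to_physical(job_address):
    page, offset = divmod(job_address, PAGE_SIZE)
    return pmt[page] * PAGE_SIZE + offset   # frame start plus displacement

print(to_physical(518))       # 2566: frame 5 starts at 5 * 512 = 2560, plus offset 6
```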

Paged Memory Allocation (continued)

Advantages:
- Allows jobs to be allocated in non-contiguous memory locations
- Memory is used more efficiently; more jobs can fit

Disadvantages:
- Address resolution causes increased overhead
- Internal fragmentation still exists, though only in the last page
- Still requires the entire job to be stored in memory
- The size of the page is crucial (not too small, not too large)

Next: Demand Paging

Next step on the way to VM: Demand Paging

Demand paging: pages are brought into memory only as they are needed, allowing jobs to be run with less main memory. It takes advantage of the fact that programs are written sequentially, so not all pages are necessary at once. For example:
- User-written error handling modules are processed only when a specific error is detected (and the error might never occur)
- Mutually exclusive modules: a module can't do input and do CPU work at the same time
- Certain program options are not always accessible: you can't open a file and edit its contents at the same time

Demand Paging (continued)

Demand paging made virtual memory widely available (more later). It can give the appearance of an almost infinite amount of physical memory, allowing more jobs to run with less main memory than required in paged memory allocation. It needs a high-speed direct access storage device that can work directly with the CPU. How and when the pages are passed (or "swapped") depends on predefined policies that determine when to make room for needed pages and how to do so.

OS tables revamped

What Tanenbaum says is in the PMT (aka Page Table Entries, PTEs): 3 new fields! (Figure 3.5: a typical demand paging scheme. The job has 15 pages in total, and the number of available page frames is 12, of which the OS takes 4.)

The OS depends on the following tables:
- Job Table
- Page Map Table, with 3 new fields:
  - Status: tells if the page is already in memory
  - Modified: whether the page contents have been modified since last saved
  - Referenced: whether the page has been referenced recently; used to determine which pages should remain in main memory and which should be swapped out (it should make sense that you swap out the least-used page)
- Memory Map Table

Demand Paging (continued)

What if there is not enough main memory to hold all the currently active processes? The swapping process: to move in a new page, a resident page may have to be swapped back into secondary storage. Swapping involves:
- Copying the resident page to the disk (only if it was modified; no need to save it if it wasn't)
- Writing the new page into the (now) empty page frame

This requires close interaction between hardware components, software algorithms, and policy schemes.

Paging Example 1

The program thinks it has more address space than the machine actually has. Note: (virtual) pages and (real) page frames are the same size; here it's 4 KB (as in the Pentium), but in real systems the size ranges from 512 bytes to 64 KB. We have 16 virtual pages and 8 page frames. Transfers from RAM to disk are always in units of pages. An X means that the page is not (currently) mapped to a frame. Assume pages (and frames) are counted from 0.

Paging Example 2

Assume a program tries to access address 8192:
- This address is sent to the MMU
- The MMU recognizes that this address falls in virtual page 2 (pages start at zero)
- The MMU looks at its page mapping and sees that page 2 maps to physical page frame 6
- The MMU translates 8192 to the corresponding address in page frame 6 (this being 24576)
- This address is output by the MMU, and the memory board simply sees a request for address 24576. It does not know that the MMU has intervened; it simply sees a request for a particular location, which it honours.

Paging Example 3

We have eight virtual pages which do not map to a physical page. Each virtual page has a present/absent bit which indicates whether the virtual page is mapped to a physical page.
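The MMU steps above amount to a split-and-lookup; a minimal sketch using the example's mapping (virtual page 2 -> page frame 6, 4-KB pages):

```python
PAGE_SIZE = 4096
page_table = {2: 6}                     # virtual page -> page frame, from the example

def mmu_translate(virtual_address):
    page, offset = divmod(virtual_address, PAGE_SIZE)
    frame = page_table[page]            # a missing entry would mean a page fault
    return frame * PAGE_SIZE + offset   # the address the memory board actually sees

print(mmu_translate(8192))              # 24576
```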

Paging Example 4

What happens if we try to use an unmapped page? For example, the program tries to access address 24576 (i.e. 24K):
- The MMU will notice that the page is unmapped and will cause the CPU to trap to the OS. This trap is called a page fault.
- The operating system will decide to evict one of the currently mapped pages and use its frame for the page that has just been referenced
- The page that has just been referenced is copied (from disk) into the page frame that has just been freed, the page table entries are updated, and the trapped instruction is restarted

Page faults

There are two reasons why we could get a page fault: the reference may be to an invalid address, or the page may be legitimate but not currently in memory. The page fault handler is the section of the operating system that determines:
- Whether there are empty page frames in memory; if so, the requested page is copied in from secondary storage
- Which page will be swapped out if all page frames are busy; this decision is directly dependent on the predefined policy for page removal

Paging Example 5

Example (trying to access address 24576):
(1) The MMU causes a trap to the operating system, as the virtual page is not mapped to a physical location (shown by an X)
(2) A virtual page that is mapped is elected for eviction (we'll assume that page 11 is nominated)
(2a) Virtual page 11 is marked as unmapped (i.e. its present/absent bit is changed)
(3) Physical page frame 7 is written to disk (we'll assume for now that this needs to be done); that is the frame that virtual page 11 maps onto
(4) Virtual page 6 is loaded into physical address 28672 (28K)
(5) The entry for virtual page 6 is changed so that its present/absent bit is set
(5a) Also the X is replaced by a 7, so that it points to the correct physical page frame

When the trapped instruction is re-executed, it will now work correctly.

Demand Paging (continued)

Advantages:
- The job is no longer constrained by the size of physical memory (the concept of virtual memory)
- Memory is utilized more efficiently than in the previous schemes

Disadvantages:
- Increased overhead caused by the tables and the page interrupts
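Steps (1) to (5a) can be sketched as a toy page fault handler. The page and frame numbers follow the example (page 6 unmapped, victim page 11 in frame 7); the disk write is only indicated by a comment, since there is no real disk here:

```python
page_table = {6: None, 11: 7}     # None plays the role of the X (unmapped)
modified = {11: True}             # frame contents changed since last saved

def handle_page_fault(faulting_page, victim_page):
    frame = page_table[victim_page]
    page_table[victim_page] = None      # (2a) mark the victim unmapped
    if modified.get(victim_page):
        pass                            # (3) write the frame back to disk first
    page_table[faulting_page] = frame   # (4)+(5a) load the page and remap it;
                                        # the trapped instruction is then restarted

handle_page_fault(6, 11)
print(page_table)                 # {6: 7, 11: None}: page 6 now points to frame 7
```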

Demand Paging Problems: Thrashing

Thrashing: an excessive amount of page swapping between main memory and secondary storage, which makes operation inefficient. It is caused when a page is removed from memory but is called back shortly thereafter. It can occur across jobs, when a large number of jobs are vying for a relatively small number of free pages, and it can happen within a job (e.g., in loops that cross page boundaries).

How would the OS spot that it's going on? A large number of page faults occur (a page fault: a failure to find a page in memory). How would the OS eliminate it? It may well remove other jobs' pages from memory to allocate space for the job that is thrashing.

Page Replacement Policies

Page Replacement Policies and Concepts

The page replacement policy selects the page to be removed, and it is crucial to system efficiency. Types include:
- First-in first-out (FIFO) policy: removes the page that has been in memory the longest
- Least-recently-used (LRU) policy: removes the page that has been least recently accessed
- Also: most-recently-used (MRU) and least-frequently-used (LFU) policies

This is an area that enjoys a lot of research activity. A nice analogy is the new jumper: if you buy a new jumper and wish to put it into your jumper drawer, but find that the drawer is full, you could decide to remove your oldest jumper to make room for the newest one (FIFO), or you could decide to remove your least-worn jumper to make room for the newest one (LRU).

FIFO Page Replacement Policy

(Figure: Job A's sequence of requests for pages A, B, C, and D on a system with only two page frames. A was first into memory, and someone has to make room for C, so A has to be first out.) When page C is requested, page A is removed from RAM because it was the first page into memory. The concept is analogous to a queue: the first person into the queue is at its head and is the first one out of it. An interrupt (*) is generated each time a new page needs to be brought into memory; here we have 9 interrupts in 11 requests (poor). FIFO isn't necessarily bad, but this example shows it can have poor performance: 9/11 = 82% failure rate (or 18% success rate). FIFO anomaly: there is no guarantee that buying more memory will always result in better performance.

LRU Page Replacement Policy

(Figure: the same request sequence under LRU with two page frames. B has only been used twice recently, so it has to leave to make room for C.) An interrupt (*) is generated when a new page is brought into memory; here we have 8 interrupts in 11 requests.

Page Replacement Policies and Concepts (continued)

Efficiency (the ratio of page interrupts to page requests) is slightly better for LRU than for FIFO. In the LRU case, increasing main memory will cause either a decrease in, or the same number of, interrupts. Here the principle of locality is used: if we recently used a page, it is likely we will use it again in the near future.

Implementing the LRU policy (next slide): one mechanism uses an 8-bit reference byte and a bit-shifting technique to track the usage of each page currently in memory.
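Both interrupt counts can be reproduced with a small simulator. The exact request sequence is not spelled out in the transcript; the sequence below is an assumption, chosen to match the stated 9-of-11 (FIFO) and 8-of-11 (LRU) counts with two page frames:

```python
def simulate(requests, n_frames, policy):
    memory, loaded_at, used_at = [], {}, {}
    faults = 0
    for t, page in enumerate(requests):
        used_at[page] = t                  # record the most recent reference
        if page in memory:
            continue                       # page already resident: no interrupt
        faults += 1
        if len(memory) == n_frames:        # all frames busy: evict per policy
            key = loaded_at if policy == "FIFO" else used_at
            victim = min(memory, key=lambda p: key[p])
            memory.remove(victim)
        memory.append(page)
        loaded_at[page] = t                # record when the page entered memory
    return faults

requests = list("ABACABDBACD")             # assumed request sequence for Job A
print(simulate(requests, 2, "FIFO"))       # 9 interrupts in 11 requests
print(simulate(requests, 2, "LRU"))        # 8 interrupts in 11 requests
```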

LRU Implementation

Initially, at time 0, each page has the leftmost bit of its reference byte set to 1 and all other bits set to zero. As time moves on, at each time interval every page's reference byte is right-shifted, and the leftmost bit is set to 1 for pages that were referenced in the previous interval (0 for the others). After 4 time intervals, the page whose reference byte has the smallest value is the least recently used. (Figure 3.11: the bit-shifting technique in the LRU policy.)

Mechanisms of Paging
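The bit-shifting technique above can be sketched directly; the reference pattern here is made up for illustration:

```python
def age(ref_bytes, referenced):
    # one time interval: right-shift every reference byte, then set the
    # leftmost bit for the pages referenced during the interval
    for page in ref_bytes:
        ref_bytes[page] >>= 1
        if page in referenced:
            ref_bytes[page] |= 0b1000_0000

ref_bytes = {p: 0b1000_0000 for p in "ABCD"}              # time 0: leftmost bit set
for interval in [{"A", "C"}, {"A"}, {"A", "B"}, {"C"}]:   # assumed references
    age(ref_bytes, interval)

# After 4 intervals, the smallest reference byte marks the least recently used page
print(min(ref_bytes, key=ref_bytes.get))                  # D (never referenced)
```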

The Mechanics of Paging

The Memory Manager needs information to help it decide which page to swap out:
- Status bit: indicates if the page is currently in memory
- Referenced bit: indicates if the page has been referenced recently
- Modified bit: indicates if the page contents have been altered; used to determine if the page must be rewritten to secondary storage when it's swapped out
These bits are consulted by the FIFO and LRU algorithms when choosing a victim.

Q: Which page will LRU swap out?
A: Either Page 1 or Page 2, as neither has been referenced or modified (so they won't require saving to secondary storage).

The Working Set

This is an innovation that improved the performance of demand paging schemes. (Example: this program needs 120 ms to run but 900 ms to load its pages into memory. Very wasteful!) Wouldn't it be great if we could load the useful pages as soon as we picked the program to run? That would give much quicker execution.

Working Set

Working set: the set of pages residing in memory that can be accessed directly without incurring a page fault. Using it improves the performance of demand paging schemes, and it relies on the concept of locality of reference.

To load a working set into memory, the system must decide the maximum number of pages the operating system will allow for a working set. If this is the first time the program runs, how can the system know its working set? Perhaps it could be observed as the program is first loaded into memory, and that information used for the program's next turn on the CPU.

Segmented Memory Allocation
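One common way to make the working set concrete is as the set of pages touched by the last few references. A minimal sketch; the window size and reference string below are made up for illustration:

```python
def working_set(references, t, window):
    # pages touched by the last `window` references up to and including time t
    return set(references[max(0, t - window + 1): t + 1])

refs = list("AABACBDAA")
print(working_set(refs, 8, 4))   # pages A, B and D: those touched in refs[5:9]
```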

Segmented Memory Allocation

Programmers normally decompose their programs into modules. With segmented memory allocation, each job is divided into several segments of different sizes, one for each module that contains pieces performing related functions. Main memory is no longer divided into page frames. Segments are set up according to the program's structural modules when the program is compiled or assembled, each segment is numbered, and a Segment Map Table (SMT) is generated.

Segmented Memory Allocation (continued)

(Figure 3.13: segmented memory allocation. Job 1 includes a main program, Subroutine A, and Subroutine B: one job divided into three segments.) What we want is for the system to conform to the programmer's model of how the program is decomposed into functions/modules. We would like function blocks to be loaded as required, enabling the different functional parts of the program as needed.

Segmented Memory Allocation (continued)

(Figure 3.14: the Segment Map Table tracks each segment for Job 1.) The Memory Manager tracks segments in memory using the following three tables:
- Job Table: lists every job in process (one for the whole system; not shown on the slide)
- Segment Map Table: lists details about each segment (one for each job)
- Memory Map Table: monitors the allocation of main memory (one for the whole system)

Segments don't need to be stored contiguously. The addressing scheme requires a segment number and a displacement.

Advantages:
- Internal fragmentation is removed

Disadvantages:
- Difficulty managing variable-length segments in secondary storage
- External fragmentation
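The segment-number-plus-displacement addressing can be sketched with base/limit pairs from the SMT; the table values here are invented for illustration:

```python
smt = {0: (4096, 350),        # segment -> (base address, segment size)
       1: (8192, 200),
       2: (10240, 500)}

def translate(segment, displacement):
    base, size = smt[segment]
    if displacement >= size:                              # protection check:
        raise ValueError("displacement outside segment")  # beyond segment end
    return base + displacement

print(translate(1, 10))   # 8202: segment 1 starts at 8192
```

Because segments are variable-length, the limit check is essential; with fixed-size pages no such per-entry bound is needed.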

Segmented/Demand Paged Memory Allocation

This scheme subdivides segments into pages of equal size, smaller than most segments and more easily manipulated than whole segments. It offers:
- The logical benefits of segmentation
- The physical benefits of paging

It removes the problems of compaction, external fragmentation, and secondary storage handling. The addressing scheme requires a segment number, a page number within that segment, and a displacement within that page.

Segmented/Demand Paged Memory Allocation (continued)

This scheme requires the following four tables:
- Job Table: lists every job in process (one for the whole system)
- Segment Map Table: lists details about each segment (one for each job)
- Page Map Table: lists details about every page (one for each segment)
- Memory Map Table: monitors the allocation of the page frames in main memory (one for the whole system)
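Address resolution in this combined scheme walks SMT -> PMT -> page frame; a sketch with invented table values and 512-byte pages:

```python
PAGE_SIZE = 512
# segment -> that segment's page map table (page -> page frame); values invented
smt = {0: {0: 5, 1: 2},
       1: {0: 9}}

def translate(segment, page, displacement):
    frame = smt[segment][page]            # two table lookups instead of one
    return frame * PAGE_SIZE + displacement

print(translate(0, 1, 10))   # 1034: frame 2 starts at 2 * 512 = 1024
```

The extra lookup per reference is exactly the overhead the slide mentions, and it is why associative memory (a TLB) is used to short-circuit the table walk.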

Segmented/Demand Paged Memory Allocation (continued)

(Figure 3.16: interaction of the JT, SMT, PMT, and main memory in a segment/paging scheme.)

Advantages:
- Large virtual memory
- Segments are loaded on demand

Disadvantages:
- Table handling overhead
- Memory needed for the page and segment tables

To minimize the number of references, many systems use associative memory to speed up the process. Its disadvantage is the high cost of the complex hardware required to perform the parallel searches.

Virtual Memory: Summary

Demand paging allows programs to be executed even though they are not stored entirely in memory. It requires cooperation between the Memory Manager and the processor hardware.

Advantages of virtual memory management:
- Job size is not restricted to the size of main memory
- Memory is used more efficiently
- Allows an unlimited amount of multiprogramming

Virtual Memory (continued)

Advantages (continued):
- Eliminates external fragmentation and minimizes internal fragmentation
- Allows the sharing of code and data
- Facilitates dynamic linking of program segments

Disadvantages:
- Increased processor hardware costs
- Increased overhead for handling paging interrupts
- Increased software complexity to prevent thrashing

(Table 3.6: comparison of virtual memory with paging and segmentation.)

Summary (continued)

- Paged memory allocation allows efficient use of memory by allocating jobs to noncontiguous memory locations
- Increased overhead and internal fragmentation are problems in paged memory allocation
- In a demand paging scheme, a job is no longer constrained by the size of physical memory
- The LRU scheme results in slightly better efficiency than the FIFO scheme
- The segmented memory allocation scheme solves the internal fragmentation problem
- Segmented/demand paged memory allocation removes the problems of compaction, external fragmentation, and secondary storage handling
- Associative memory can be used to speed up the process
- Virtual memory allows programs to be executed even though they are not stored entirely in memory; a job's size is no longer restricted to the size of main memory
- The CPU can execute instructions faster with the use of cache memory