Memory Management and Protection


Part IV: Memory Management and Protection
Sadeghi, Cubaleska (RUB, 2008-09), Course: Operating System Security

Roadmap of Chapter 4
Main Memory: Background; Swapping; Contiguous memory allocation; Paging; Segmentation; Example: The Intel Pentium
Virtual Memory: Background; Demand paging; Copy-on-Write; Page replacement; Allocation of frames; Thrashing; Memory-mapped files; Allocating kernel memory; Other issues and examples

Background
A program must be brought from disk into main memory and placed within a process for it to be run. Main memory and registers are the only storage the CPU can access directly. Register access takes one CPU clock cycle (or less), while main memory can take many cycles; a cache sits between main memory and the CPU registers. Protection of memory is required to ensure correct operation, and memory must be allocated so that a reasonable supply of ready processes is available to consume the available processor time.

Process in Memory
A program becomes a process when an executable file is loaded into memory. A program is a passive entity; a process is an active entity. The terms job and process are used almost interchangeably.
[Diagram: process address space from address 0 (text/code segment) upward through data (global variables), uninitialized global variables, heap, and free virtual address space, with the stack growing down from the maximum address; a stack frame lies between the Base Pointer Register (BPR) and the Stack Pointer Register (SPR).]

Representation of a Process in Memory
- Text section: the program code of the process
- Data section: contains the global variables
- Uninitialized global variables: not part of the executable file; their initial value is set to zero
- Heap: memory that is dynamically allocated during the process run time
- Stack: contains temporary data, such as function parameters, return addresses, and local variables. Calling a method or function pushes a new stack frame onto the stack; the stack frame is destroyed when the function returns.
- Stack frame: the region between the Base Pointer Register (BPR) and the Stack Pointer Register (SPR); usually the stack for one function
- Stack: used by all functions in the program (i.e., the stack consists of one or more stack frames)

Allocating Memory for a Process
The heap is allowed to grow upward in memory (as it is used for dynamic memory allocation). The stack is allowed to grow downward in memory through successive function calls, though not every architecture grows the stack downward. The space (or hole) between the heap and the stack is part of the process address space.

Binding of Instructions and Data to Memory
A user program is processed in several steps (see diagram). Address binding of instructions and data to memory addresses can happen at three different stages:
- Compile time: If the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes.
- Load time: Relocatable code must be generated if the memory location is not known at compile time.
- Execution time: Binding is delayed until run time if the process can be moved during its execution from one memory segment to another. This needs hardware support for address maps (e.g., base and limit registers).
[Diagram: source program -> compiler or assembler -> object module -> linkage editor (with other object modules) -> load module -> loader (with system library) -> in-memory binary memory image (with dynamically loaded system library, dynamic linking); the stages correspond to compile time, load time, and execution time (run time).]

Advantages of Dynamic Loading
- Better memory-space utilization: an unused routine is never loaded; a routine is not loaded until it is called
- Particularly useful when large amounts of code are needed to handle infrequently occurring cases, such as error routines
- No special support from the operating system is required; it is implemented through program design

Dynamic Linking
Dynamic linking: linking is postponed until execution time. Dynamic linking is particularly useful for libraries: without it, programs would need to be relinked to gain access to a new library version, and programs linked before the new library version was installed would keep on using the old one. This scheme is also known as shared libraries.

Memory Layout for a Multiprogramming System
The OS keeps several processes in memory simultaneously. In general, main memory is too small to accommodate all processes, so processes are initially kept on disk in the process pool. The process pool (or job pool) consists of all processes residing on disk and awaiting allocation of main memory; the set of jobs in memory can be a subset of the jobs kept in the job pool. The OS picks and begins to execute one of the jobs in memory, then switches from one job to another (multiprogramming system).
[Diagram: main memory holding the operating system and application processes 1-4.]

Logical vs. Physical Address Space
The concept of a logical address space that is bound to a separate physical address space is central to proper memory management. The user program deals with logical addresses; it never sees the real physical addresses. Logical addresses must be mapped to physical addresses before they are used; the hardware device that maps virtual to physical addresses is called the Memory Management Unit (MMU).
- Logical address (also referred to as virtual address): an address generated by the CPU
- Physical address: an address seen by the memory unit
Logical (virtual) and physical addresses are the same in compile-time and load-time address-binding schemes, but differ in the execution-time address-binding scheme.

Separate Memory Space for Each Process
The OS needs to make sure that each process has a separate memory space. To do this, the OS needs the ability to determine the range of legal addresses that the process may access, and to ensure that the process can access only these legal addresses. This protection is provided by using two registers:
- Base register: holds the smallest legal physical memory address
- Limit register: specifies the size of the range
A pair of base and limit registers defines the logical address space of a process.
[Diagram: main memory from 0 to 1024000 holding the operating system (0-256000), process 1 (256000-300040), process 2 (300040-420940, with base register 300040 and limit register 120900), and process 3 (420940-880000).]

HW Address Protection with Base and Limit Registers
The CPU hardware compares every address generated by process P in user mode with the base and limit registers of the process. Any attempt by a program to access OS memory or the memory areas of other processes results in a trap, which treats the attempt as a fatal error. This scheme prevents a user program from (accidentally or deliberately) modifying the code or data structures of either the OS or other users.
[Diagram: the CPU-generated address is checked against the base register and base+limit; if it falls outside that range, a trap (addressing error) is raised, otherwise the access proceeds to memory.]
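The base/limit check above can be sketched in a few lines. This is a minimal Python model, not real hardware or any OS API; the function names are illustrative, and the register values are the ones from the slide's diagram (base 300040, limit 120900).

```python
def legal_address(addr, base, limit):
    """Return True if addr falls inside [base, base + limit)."""
    return base <= addr < base + limit

def access(addr, base, limit):
    """Model one memory access: trap on any address outside the range."""
    if not legal_address(addr, base, limit):
        raise MemoryError("trap: addressing error")
    return addr  # the address passes through to memory unchanged

# Process from the slide: base = 300040, limit = 120900
assert access(300040, 300040, 120900) == 300040   # first legal address
assert legal_address(420939, 300040, 120900)      # last legal address
assert not legal_address(420940, 300040, 120900)  # one past the end traps
```

Note that the legal range is half-open: base + limit itself is already outside the process's space, which matches the hardware comparison on the slide.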

Memory-Management Unit (MMU)
The MMU is the hardware device that maps logical to physical addresses. The address mapping can be done by many different methods; one method uses a relocation register. The value in the relocation register is added to every address generated by a user process at the time it is sent to memory.
[Diagram: the CPU issues logical address 346; the MMU adds the relocation register value 14000, yielding physical address 14346 in main memory.]
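The relocation-register mapping is just an addition; a one-function Python sketch (illustrative names, values taken from the slide's example) makes that concrete:

```python
RELOCATION = 14000   # value in the relocation register (slide example)

def mmu_map(logical):
    """Model the MMU: add the relocation register to every logical address."""
    return RELOCATION + logical

assert mmu_map(346) == 14346   # the slide's example translation
```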


Contiguous Allocation
Main memory is usually divided into two partitions: the resident operating system, usually held in low memory together with the interrupt vector, and user processes, held in higher memory locations. In contiguous memory allocation, each process is contained in a single contiguous section of memory. Relocation registers are used to protect user processes from each other, and to keep them from changing operating-system code and data. The MMU maps logical addresses dynamically.

Hardware Support for Relocation and Limit Registers
The MMU maps logical addresses dynamically. A possible mapping scheme uses a limit register and a relocation register: each logical address must be less than the limit register (protecting the OS and other data areas), and the MMU then maps the logical address by adding the value in the relocation register.
[Diagram: the CPU's logical address is compared against the limit register; if it is smaller, the relocation register value is added to form the physical address, otherwise a trap (addressing error) is raised.]

Swapping
A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution. The backing store is a fast disk, large enough to accommodate copies of all memory images for all users, and it must provide direct access to these memory images.
[Diagram: schematic view of swapping: process P1 is swapped out of user space in main memory to the backing store (e.g., a disk), while process P2 is swapped in.]

Processes for Swapping
The system maintains a ready queue of ready-to-run processes which have memory images on disk. "Roll out, roll in" is the name of a swapping variant used for priority-based scheduling algorithms: a lower-priority process is swapped out so that a higher-priority process can be loaded and executed. The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped. Modified versions of swapping are found on many systems (e.g., UNIX, Linux, and Windows).

Changing of Memory Allocation through Swapping
Memory allocation changes over time as processes come into memory and leave memory.
[Diagram: a sequence of main-memory snapshots in which processes 1-4 are swapped in and out around the resident operating system, leaving unused holes.]

Multiple-Partition Allocation
The operating system maintains information about allocated partitions and free partitions (holes). When a process arrives, it is allocated memory from a hole large enough to accommodate it.
Dynamic storage allocation problem: how to satisfy a request of size n from a list of free holes.
- First-fit algorithm: allocate the first hole that is big enough.
- Best-fit algorithm: allocate the smallest hole that is big enough; the entire list must be searched, unless it is ordered by size. Produces the smallest leftover hole.
- Worst-fit algorithm: allocate the largest hole; the entire list must also be searched. Produces the largest leftover hole.
First-fit and best-fit algorithms are better than worst-fit in terms of speed and storage utilization.
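The three placement strategies can be sketched over a list of free holes. This is an illustrative Python model (hole representation and helper name `_take` are my own, not from the slides): each hole is a `(start, size)` pair, and a successful allocation returns the start address while shrinking or removing the chosen hole.

```python
def _take(holes, i, n):
    """Carve n units from hole i, returning the allocated start address."""
    start, size = holes[i]
    if size == n:
        del holes[i]                      # hole exactly consumed
    else:
        holes[i] = (start + n, size - n)  # leftover hole remains
    return start

def first_fit(holes, n):
    for i, (_, size) in enumerate(holes):
        if size >= n:
            return _take(holes, i, n)     # first hole that is big enough
    return None

def best_fit(holes, n):
    fits = [i for i, (_, size) in enumerate(holes) if size >= n]
    # smallest adequate hole -> smallest leftover
    return _take(holes, min(fits, key=lambda i: holes[i][1]), n) if fits else None

def worst_fit(holes, n):
    fits = [i for i, (_, size) in enumerate(holes) if size >= n]
    # largest hole -> largest leftover
    return _take(holes, max(fits, key=lambda i: holes[i][1]), n) if fits else None

# Free list: holes at address 0 (100 units), 200 (500 units), 800 (300 units)
assert first_fit([(0, 100), (200, 500), (800, 300)], 150) == 200
assert best_fit([(0, 100), (200, 500), (800, 300)], 150) == 800
assert worst_fit([(0, 100), (200, 500), (800, 300)], 150) == 200
```

For the same request of 150 units, best fit picks the 300-unit hole (smallest adequate), while first fit and worst fit both pick the 500-unit hole, for different reasons.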

Fragmentation
As processes are loaded and removed from memory, the free memory space is broken into little pieces; this is known as fragmentation.
- External fragmentation: enough total memory space exists to satisfy a request, but it is not contiguous.
- Internal fragmentation: allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used.
External fragmentation can be reduced by compaction: shuffle memory contents to place all free memory together in one large block. Compaction is possible only if relocation is dynamic and is done at execution time.


Paging
Paging is a memory-management scheme that permits the physical address space of a process to be noncontiguous. The idea of paging:
- Divide physical memory into fixed-sized blocks called frames (the size is a power of 2, between 512 bytes and 8,192 bytes).
- Divide logical memory into blocks of the same size, called pages.
- Keep track of all free frames. To run a program of size n pages, find n free frames and load the program.
- Set up a page table to translate logical to physical addresses.
Paging avoids external fragmentation but still suffers internal fragmentation (in the last frame of a process).

Address Translation Scheme for Paging
An address generated by the CPU is divided into:
- Page number (p): used as an index into a page table, which contains the base address of each page in physical memory
- Page offset (d): the offset within the page; it is combined with the base address to define the physical memory address that is sent to the memory unit
For a logical address space of size 2^m and a page size of 2^n, the page number occupies the upper m - n bits and the page offset the lower n bits.
Each entry in the page table contains the base address of a page in physical memory AND a bit indicating whether the page is valid (explained later).

Paging Hardware
[Diagram: the CPU issues a logical address (p, d); the MMU looks up frame number f for page p in the page table and combines it with offset d to form the physical address (f, d) in physical memory.]
The base address (the entry in the page table) is combined with the page offset to determine the physical memory access.

Paging Model of Logical and Physical Memory
[Diagram: a 4-page logical memory mapped through a page table (page 0 -> frame 1, page 1 -> frame 4, page 2 -> frame 3, page 3 -> frame 7) into an 8-frame physical memory.]
Example: 32-byte memory and 4-byte pages. The 16 logical bytes a-p occupy pages 0-3, and the page table maps these pages to frames 5, 6, 1, 2. Thus a-d land at physical addresses 20-23, e-h at 24-27, i-l at 4-7, and m-p at 8-11.
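The 32-byte example above can be replayed directly. This is a small Python sketch of the translation (the page table values are the slide's; the function name is illustrative):

```python
PAGE_SIZE = 4
page_table = [5, 6, 1, 2]   # page -> frame, as in the slide's example

def translate(logical):
    p, d = divmod(logical, PAGE_SIZE)     # page number, page offset
    return page_table[p] * PAGE_SIZE + d  # frame base + offset

assert translate(0) == 20    # byte 'a': page 0 -> frame 5 -> address 20
assert translate(8) == 4     # byte 'i': page 2 -> frame 1 -> address 4
assert translate(13) == 9    # byte 'n': page 3, offset 1 -> frame 2 -> 9
```

With a page size that is a power of 2, `divmod` corresponds exactly to taking the upper bits as the page number and the lower bits as the offset.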

Free Frames
[Diagram: before allocation, the free-frame list contains frames 14, 13, 18, 20, 15; after allocating a new 4-page process, its page table maps pages 0-3 to frames 14, 13, 18, 20, and only frame 15 remains on the free-frame list.]

Implementation of Page Table
The page table is kept in main memory. The page-table base register (PTBR) points to the page table; the page-table length register (PTLR) indicates the size of the page table. In this scheme, every data/instruction access requires two memory accesses: one for the page table, and one for the data/instruction itself. The two-memory-access problem can be solved by using a special fast-lookup hardware cache called associative memory, or translation look-aside buffer (TLB). Some TLBs store address-space identifiers (ASIDs) in each TLB entry; an ASID uniquely identifies each process and provides address-space protection for that process.

Associative Memory (TLB)
Associative memory supports a parallel search over (page #, frame #) pairs. Address translation for (p, d): if p is in an associative register, get the frame # out directly; otherwise get the frame # from the page table in memory.

Paging Hardware With TLB
[Diagram: the CPU issues logical address (p, d); the page number is first looked up in the TLB. On a TLB hit, the frame number f is obtained immediately; on a TLB miss, the page table in memory is consulted instead, and the physical address (f, d) is then sent to physical memory.]
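The TLB lookup path can be modeled in a few lines. This is an illustrative Python sketch (dictionary-based, with made-up table contents), not a description of any real TLB's replacement or ASID handling:

```python
PAGE_SIZE = 4096
page_table = {0: 7, 1: 3, 2: 9}   # in-memory page table (illustrative)
tlb = {}                          # small associative cache: page -> frame

def translate(logical):
    p, d = divmod(logical, PAGE_SIZE)
    if p in tlb:                  # TLB hit: no extra memory access needed
        f = tlb[p]
    else:                         # TLB miss: consult the page table in memory
        f = page_table[p]
        tlb[p] = f                # cache the translation for next time
    return f * PAGE_SIZE + d

assert translate(PAGE_SIZE + 5) == 3 * PAGE_SIZE + 5   # miss on page 1
assert 1 in tlb                                        # translation now cached
assert translate(PAGE_SIZE + 6) == 3 * PAGE_SIZE + 6   # hit on page 1
```

A real TLB holds only a handful of entries and evicts old ones; this sketch omits eviction to keep the hit/miss logic visible.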

Memory Protection
Memory protection is implemented by associating a protection bit with each frame. A valid-invalid bit is attached to each entry in the page table:
- "valid" indicates that the associated page is in the process's logical address space, and is thus a legal page
- "invalid" indicates that the page is not in the process's logical address space
[Diagram: a process occupying addresses 0 through 10468 of a space ending at 12287; pages 0-5 are marked valid (v) in the page table, and the remaining entries are marked invalid (i).]

Shared Pages
Private code and data: each process keeps a separate copy of the code and data; the pages for the private code and data can appear anywhere in the logical address space.
Shared code: two or more processes can execute the same code at the same time. One copy of read-only (reentrant) code is shared among processes (e.g., text editors, compilers, window systems). Shared code must appear in the same location in the logical address space of all processes.
Example (see next slide): three processes using the same editor (ed). Only one copy of the editor needs to be held in physical memory; the data for each process is different, and each process has its own copy of registers and data storage to hold the data for its execution.

Example for Shared Pages
[Diagram: processes P1, P2, P3 each map the editor pages ed 1, ed 2, ed 3 to the same frames 3, 4, 6 of main memory, while their private data pages (data 1, data 2, data 3) map to distinct frames 1, 7, 2.]

Structure of Page Tables
Common structures for page tables:
- Hierarchical paging
- Hashed page tables
- Inverted page tables
Hierarchical page tables break the logical address space up into multiple page tables; a simple technique is a two-level page table.

Hierarchical Page Tables: An Example
Example: two-level page-table scheme. A logical address (on a 32-bit machine with a 1K page size) is divided into a page number consisting of 22 bits and a page offset consisting of 10 bits. Since the page table is itself paged, the page number is further divided into a 12-bit outer index p1 and a 10-bit inner index p2. Thus, a logical address looks as follows: p1 (12 bits) | p2 (10 bits) | d (10 bits), where p1 is an index into the outer page table and p2 is the displacement within the page of the page table selected by the outer entry.
[Diagram: the outer page table points to pages of the page table, whose entries in turn point to pages in memory.]

Hierarchical Page Tables: Address Translation Scheme
[Diagram: the logical address (p1, p2, d) is translated by indexing the outer page table with p1, the selected page of the page table with p2, and finally applying offset d.]
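The 12/10/10 split from the example can be expressed with plain bit operations. A minimal Python sketch (the function name is illustrative):

```python
def split(addr):
    """Split a 32-bit logical address into (p1, p2, d): 12/10/10 bits."""
    d  = addr & 0x3FF           # low 10 bits: page offset
    p2 = (addr >> 10) & 0x3FF   # next 10 bits: index within a page of the page table
    p1 = addr >> 20             # top 12 bits: index into the outer page table
    return p1, p2, d

assert split((1 << 20) | (2 << 10) | 3) == (1, 2, 3)
assert split(0xFFFFFFFF) == (0xFFF, 0x3FF, 0x3FF)   # all fields maximal
```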

Hashed Page Tables
A common approach for handling address spaces larger than 32 bits is to use a hashed page table, with the hash value being the virtual page number. The virtual page number is hashed into a page table; this page table contains a chain of elements hashing to the same location. Virtual page numbers are compared along this chain in search of a match; if a match is found, the corresponding physical frame is extracted.

Hashed Page Tables (cont'd.)
[Diagram: the page number p of logical address (p, d) is run through a hash function into the hash table; the chain at that slot is searched for an entry matching p, whose frame number r then forms physical address (r, d).]

Inverted Page Table
An inverted page table has one entry for each real page (frame) of memory. Each entry consists of the virtual address of the page stored in that real memory location, together with information about the process that owns the page. This decreases the memory needed to store the page tables, but increases the time needed to search the table when a page reference occurs. A hash table can be used to limit the search to one, or at most a few, page-table entries.

Inverted Page Table Architecture
[Diagram: the CPU issues (pid, p, d); the page table is searched for an entry matching (pid, p), and the index i of the matching entry is the frame number, yielding physical address (i, d).]
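The inverted-table search can be modeled directly: the entry's *index* is the frame number. This is an illustrative Python sketch with made-up process IDs and table contents:

```python
PAGE_SIZE = 4096
# One entry per physical frame: (process id, virtual page number)
inverted = [("P1", 0), ("P2", 0), ("P1", 1)]

def translate(pid, p, d):
    for frame, entry in enumerate(inverted):   # linear search; real systems
        if entry == (pid, p):                  # hash to shorten this search
            return frame * PAGE_SIZE + d
    raise KeyError("page fault")

assert translate("P1", 1, 10) == 2 * PAGE_SIZE + 10   # match at frame 2
assert translate("P2", 0, 0) == PAGE_SIZE             # match at frame 1
```

Note how both P1 and P2 can use virtual page 0 without conflict: the pid in each entry keeps the address spaces apart.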


Segmentation
Segmentation is a memory-management scheme that supports the user's view of memory: a program is a collection of segments. A segment is a logical unit such as: main program, procedure, function, method, object, local variables, global variables, common block, stack, symbol table, arrays.

User's View of a Program
The user program is compiled, and the compiler automatically constructs segments reflecting the input program. The compiler may create separate segments for: the code (main program), global variables, the heap (from which memory is allocated), the stack used by each thread, and the standard C library.
[Diagram: a logical address space containing segments for a subroutine, Sqrt, the stack, the symbol table, and the main program.]

Logical View of Segmentation
[Diagram: segments 1-4 of the user space are placed at scattered, non-contiguous locations in physical memory.]

Segmentation Architecture
The logical address is a tuple (s, offset), where s denotes the segment number and offset is the position within the segment. The segment table maps these two-dimensional logical addresses to one-dimensional physical addresses. Each segment-table entry has:
- Segment base: the starting physical address where the segment resides in memory
- Segment limit: the length of the segment
Registers:
- Segment-table base register (STBR): points to the segment table's location in memory
- Segment-table length register (STLR): indicates the number of segments used by a program; segment number s is legal if s < STLR

Protection in Segmentation Architecture
With each entry in the segment table, associate:
- a validation bit (= 0 means illegal segment)
- read/write/execute privileges
Protection bits are associated with segments; code sharing occurs at the segment level. Since segments vary in length, memory allocation becomes a dynamic storage-allocation problem.

Segmentation Hardware and Address Translation
[Diagram: for logical address (s, d), the segment table supplies the limit and base of segment s; if d < limit, the physical address is base + d, otherwise a trap (addressing error) is raised.]

Example of Segmentation
[Diagram: a logical address space with five segments (segment 0: subroutine, segment 1: Sqrt, segment 2: main program, segment 3: stack, segment 4: symbol table) and a segment table with limit/base pairs (1000, 1400), (400, 6300), (400, 4300), (1100, 3200), (1000, 4700), placing the segments at scattered locations in physical memory.]
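The segment table from this example, combined with the limit check from the segmentation hardware, can be replayed in Python. A minimal sketch (the function name is illustrative; the limit/base values are the slide's):

```python
# (limit, base) per segment, as in the example's segment table
segment_table = [(1000, 1400), (400, 6300), (400, 4300),
                 (1100, 3200), (1000, 4700)]

def translate(s, d):
    limit, base = segment_table[s]
    if d >= limit:                            # offset beyond segment end
        raise MemoryError("trap: addressing error")
    return base + d

assert translate(2, 53) == 4353   # byte 53 of segment 2: 4300 + 53
assert translate(0, 0) == 1400    # start of segment 0
```

An access such as (1, 400) would trap, since segment 1 is only 400 bytes long and legal offsets run from 0 to 399.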


Example: The Intel Pentium
The Pentium supports both pure segmentation and segmentation with paging. The CPU generates a logical address, which is given to the segmentation unit; the segmentation unit produces a linear address, which is given to the paging unit; the paging unit generates the physical address in main memory. The segmentation and paging units together form the equivalent of the MMU.
[Diagram: logical address -> segmentation unit -> linear address -> paging unit -> physical address -> physical memory; the linear address is split into page number parts p1 (10 bits) and p2 (10 bits) and a page offset d (12 bits).]

Intel Pentium Segmentation
[Diagram: the logical address consists of a selector and an offset; the selector indexes the descriptor table, and the segment descriptor found there is combined with the offset to form a 32-bit linear address.]

Pentium Paging Architecture
[Diagram: the linear address is split into page directory (bits 31-22), page table (bits 21-12), and offset (bits 11-0). The CR3 register points to the page directory; a page-directory entry selects a page table, leading to a 4 KB page, or, for large pages, a page-directory entry and a 22-bit offset (bits 21-0) select a 4 MB page directly.]

Paging in Linux
Linux has adopted a three-level paging strategy that works well for both 32-bit and 64-bit architectures. A linear address in Linux is broken into four parts: global directory, middle directory, page table, and offset.
[Diagram: the CR3 register points to the global directory; the global directory entry selects a middle directory, whose entry selects a page table, and the page-table entry selects the page frame to which the offset applies.]

Linux on Pentium Systems
Linux uses a three-level paging model, but the Pentium architecture only provides a two-level paging model. How does Linux apply its three-level model on the Pentium? In this case, the size of the middle directory is zero bits; i.e., the middle level is bypassed. Each task in Linux has its own set of page tables, and the value of the CR3 register points to the global directory of the currently executing task. During a context switch, the value of the CR3 register is saved and then later restored.

Memory Management: Summary
All memory-management strategies have the same goal: to keep many processes in memory simultaneously so as to allow multiprogramming. However, they tend to require that an entire process be in memory before it can execute. Virtual memory is a technique that allows the execution of processes that are not completely in memory; programs can then be larger than physical memory. Virtual memory abstracts main memory into an extremely large, uniform array of storage, separating logical memory as viewed by the user from physical memory.


Concept of Virtual Memory
Virtual memory is the separation of the user's logical memory from physical memory. Only part of the program needs to be in memory for execution, so the logical address space can be much larger than the physical address space. Virtual memory also allows address spaces to be shared by several processes and allows for more efficient process creation. Virtual memory can be implemented via demand paging or demand segmentation.

Virtual Memory That is Larger Than Physical Memory
[Diagram: a large virtual memory (pages 0 through v) is mapped through a memory map onto a much smaller physical memory.]

Recall: Process in Memory
A program becomes a process when an executable file is loaded into memory. A program is a passive entity; a process is an active entity. The representation of a process in memory includes the text section, data section, stack, heap, and program counter.
[Diagram: process address space from address 0 (text/code segment) upward through data (global variables), BSS (uninitialized global variables), heap, and free virtual address space, with the stack growing down from the maximum address; a stack frame lies between BPR and SPR.]

Virtual Address Space of a Process
Process in virtual memory: the virtual address space of a process refers to the logical (or virtual) view of how a process is stored in memory. The process begins at a certain logical address, say address 0, and exists in contiguous memory.
Process in physical memory: physical memory may be organized in page frames, and the physical frames assigned to a process may not be contiguous. It is up to the memory-management unit (MMU) to map logical pages to physical page frames in memory.

Allocating Memory for a Process
The heap is allowed to grow upward in memory (as it is used for dynamic memory allocation); the stack is allowed to grow downward through successive function calls, though not every architecture grows the stack downward. The space (or hole) between the heap and the stack is part of the process's virtual address space, but it requires actual physical pages only if the heap or stack grows. Virtual address spaces that include holes are known as sparse address spaces. Using a sparse address space is beneficial because the holes can be filled as the stack or heap segments grow, or if we wish to dynamically link libraries or other shared objects during program execution.

Roadmap of Chapter 4 Main Memory Background Contiguous memory relocation and swapping Paging Segmentation Example: Intel Pentium Virtual Memory Background Demand paging Copy-on-Write Page replacement Allocation of frames Thrashing Memory-mapped files Allocating kernel memory Other issues and examples 63

Demand Paging
- Bring a page into memory only when it is needed: less I/O needed, less memory needed, faster response, more users
- When a page is needed (i.e., referenced): an invalid reference causes an abort; a not-in-memory reference causes the page to be brought into memory
- Lazy swapper: never swaps a page into memory unless the page will be needed; a swapper that deals with pages is a pager 64

Transfer of a Paged Memory to Contiguous Disk Space (figure: programs A and B swapped out of and into main memory page by page) 65

Valid-Invalid Bit (Reminder)
- With each page table entry a valid-invalid bit is associated (v = in-memory, i = not-in-memory)
- Initially the valid-invalid bit is set to i on all entries
- During address translation, if the valid-invalid bit in the page table entry is i, a page fault occurs
(figure: page table snapshot with frame numbers and valid-invalid bits) 66
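A valid-invalid bit can be represented directly inside a page-table entry. The following C sketch packs a frame number and a valid bit into one word; the layout and names (`pte_t`, `pte_translate`) are purely illustrative, not any real MMU's entry format:

```c
#include <stdint.h>

/* Hypothetical page-table entry: frame number plus a valid bit,
 * packed into 32 bits (layout is illustrative, not a real MMU format). */
typedef struct {
    uint32_t frame : 20;  /* physical frame number */
    uint32_t valid : 1;   /* 1 = in memory, 0 = not in memory */
} pte_t;

/* Returns 1 if the page can be translated, 0 if the access must fault. */
int pte_translate(pte_t pte, uint32_t *frame_out)
{
    if (!pte.valid)
        return 0;            /* would raise a page fault */
    *frame_out = pte.frame;
    return 1;
}
```

A translation simply checks the bit before using the frame number; a clear bit is what triggers the trap described on the next slides.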

Page Table When Some Pages Are Not in Main Memory (figure: logical memory holds pages A-H; pages A, C, and F are resident in frames 4, 6, and 9 and marked v in the page table; the remaining pages are marked i and reside only on disk) 67

Page Fault
The first reference to a page that is not in memory traps to the operating system: a page fault
1. The operating system looks at another table to decide: invalid reference — abort; just not in memory — continue
2. Get an empty frame
3. Swap the page into the frame
4. Reset the tables
5. Set the validation bit to v
6. Restart the instruction that caused the page fault 68
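The steps above can be sketched as a toy fault handler in C. All names (`vm_t`, `vm_access`) are illustrative, and victim selection is omitted by assuming a free frame always exists; a real handler would manipulate MMU structures and perform disk I/O:

```c
#define NPAGES  8
#define NFRAMES 4
#define INVALID (-1)

/* Toy model: page_table[p] holds a frame number or INVALID. */
typedef struct {
    int page_table[NPAGES];
    int next_free;        /* next unused frame */
    int faults;           /* page-fault counter */
} vm_t;

void vm_init(vm_t *vm)
{
    for (int p = 0; p < NPAGES; p++)
        vm->page_table[p] = INVALID;
    vm->next_free = 0;
    vm->faults = 0;
}

/* Detect the fault, get a frame, "swap in" the page, mark the entry
 * valid, and let the caller restart the access (steps 2-6). */
int vm_access(vm_t *vm, int page)
{
    if (vm->page_table[page] == INVALID) {   /* trap: page fault */
        vm->faults++;
        int frame = vm->next_free++;         /* step 2: get empty frame */
        /* step 3 would read the page from the backing store here */
        vm->page_table[page] = frame;        /* steps 4-5: reset table, set valid */
    }
    return vm->page_table[page];             /* step 6: restart -> translate */
}
```

The first access to each page increments the fault counter; repeated accesses translate without faulting.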

Steps in Handling a Page Fault (figure: 1. reference to an invalid entry, 2. trap to the operating system, 3. page located on the backing store, 4. missing page brought into a free frame, 5. page table reset, 6. instruction restarted) 69

Roadmap of Chapter 4 Main Memory Background Contiguous memory relocation and swapping Paging Segmentation Example: Intel Pentium Virtual Memory Background Demand paging Copy-on-Write Page replacement Allocation of frames Thrashing Memory-mapped files Allocating kernel memory Other issues and examples 70

Copy-on-Write (COW)
- Virtual memory allows other benefits during process creation: Copy-on-Write and memory-mapped files (presented later)
- Copy-on-Write allows both parent and child processes to initially share the same pages in memory; only if either process modifies a shared page is the page copied
- COW allows more efficient process creation, as only modified pages are copied
- Free pages are allocated from a pool of zeroed-out pages 71
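One way to see the mechanism is a small reference-counting model in C. This is an illustrative simulation, not kernel code; all names (`page_alloc`, `page_share`, `page_write`) are invented for the sketch. A page shared by more than one process is copied on the first write:

```c
#include <string.h>

#define PAGE_SIZE 16
#define MAX_PAGES 8

/* Toy COW model: physical pages with reference counts. */
static char mem[MAX_PAGES][PAGE_SIZE];
static int  refcnt[MAX_PAGES];
static int  used = 0;

int page_alloc(void)                 /* hand out a zeroed page */
{
    memset(mem[used], 0, PAGE_SIZE);
    refcnt[used] = 1;
    return used++;
}

int page_share(int p)                /* fork(): child maps the same page */
{
    refcnt[p]++;
    return p;
}

/* Write one byte; if the page is shared, copy it first (the COW fault). */
int page_write(int p, int off, char c)
{
    if (refcnt[p] > 1) {
        refcnt[p]--;
        int q = page_alloc();
        memcpy(mem[q], mem[p], PAGE_SIZE);
        p = q;                       /* writer now owns a private copy */
    }
    mem[p][off] = c;
    return p;                        /* page the writer maps afterwards */
}
```

As long as neither process writes, the single shared copy suffices; the copy cost is paid only by the first writer, which is exactly why COW makes `fork()` cheap.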

Impact when a Process Modifies a Page (figure: before process 1 modifies page C, both processes share pages A, B, and C in physical memory; afterwards process 1 refers to a private copy of page C) 72

Roadmap of Chapter 4 Main Memory Background Contiguous memory relocation and swapping Paging Segmentation Example: Intel Pentium Virtual Memory Background Demand paging Copy-on-Write Page replacement Allocation of frames Thrashing Memory-mapped files Allocating kernel memory Other issues and examples 73

What Happens if There is No Free Frame?
- Page replacement: find some page in memory that is not really in use and swap it out
- Algorithm performance: we want an algorithm that results in a minimum number of page faults; the same page may be brought into memory several times
- Prevent over-allocation of memory by modifying the page-fault service routine to include page replacement
- Use a modify (dirty) bit to reduce the overhead of page transfers: only modified pages are written back to disk
- Page replacement completes the separation between logical memory and physical memory: a large virtual memory can be provided on a smaller physical memory 74

Need For Page Replacement (figure: two users' logical memories and page tables; user 1 executes load M, but page M is not resident and no frame in physical memory is free) 75

Basic Page Replacement
1. Find the location of the desired page on disk
2. Find a free frame: if there is a free frame, use it; if not, use a page replacement algorithm to select a victim frame
3. Bring the desired page into the (newly) freed frame; update the page and frame tables
4. Restart the process
(figure: the victim page is swapped out and its entry marked invalid; the desired page is swapped in and the page table reset) 76

Page Replacement Algorithms
Goal of page replacement algorithms: the lowest page-fault rate
- First-In-First-Out (FIFO) algorithm
- Optimal algorithm (replace the page that will not be used for the longest period of time)
- Least Recently Used (LRU) algorithm (replace the page that has not been used for the longest period of time)
- Counting-based page replacement algorithms (keep a counter of the number of references made to each page): the LFU algorithm replaces the page with the smallest count; the MFU algorithm is based on the argument that the page with the smallest count was probably just brought in and has yet to be used 77
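As a minimal sketch of one of these algorithms, the following C function counts FIFO page faults over a reference string (the function name and fixed frame limit are illustrative):

```c
/* Count page faults for FIFO replacement over a reference string.
 * frames[] holds the resident pages, replaced round-robin (oldest first).
 * Assumes nframes <= 16. */
int fifo_faults(const int *refs, int n, int nframes)
{
    int frames[16];
    int next = 0, faults = 0;
    for (int f = 0; f < nframes; f++)
        frames[f] = -1;              /* all frames initially empty */
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < nframes; f++)
            if (frames[f] == refs[i]) { hit = 1; break; }
        if (!hit) {
            frames[next] = refs[i];  /* evict the oldest resident page */
            next = (next + 1) % nframes;
            faults++;
        }
    }
    return faults;
}
```

Running it on the reference string 1,2,3,4,1,2,5,1,2,3,4,5 gives 9 faults with 3 frames but 10 faults with 4 frames — FIFO can get worse with more memory, the classic Belady's anomaly.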

Roadmap of Chapter 4 Main Memory Background Contiguous memory relocation and swapping Paging Segmentation Example: Intel Pentium Virtual Memory Background Demand paging Copy-on-Write Page replacement Allocation of frames Thrashing Memory-mapped files Allocating kernel memory Other issues and examples 78

Allocation of Frames
- Each process needs a minimum number of pages
- Example: the IBM 370 needs 6 pages to handle the SS MOVE instruction: the instruction is 6 bytes and might span 2 pages, plus 2 pages for the from operand and 2 pages for the to operand
- Two major allocation schemes: fixed allocation and priority allocation 79

Fixed vs. Priority Allocation
- Fixed allocation
  Equal allocation: for example, if there are 100 frames and 5 processes, give each process 20 frames
  Proportional allocation: allocate according to the size of the process
- Priority allocation
  Use a proportional allocation scheme based on priorities rather than size
  If process Pi generates a page fault, select for replacement one of its own frames, or a frame from a process with a lower priority number 80
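Proportional allocation is easy to compute concretely. A hedged sketch in C (the function name is illustrative, and the remainder frames a real kernel would still have to distribute are ignored):

```c
/* Proportional allocation: process i of size s_i, out of a total
 * size s_total, gets roughly a_i = (s_i / s_total) * m of the m
 * frames (integer part only; leftover frames are not distributed). */
int proportional_frames(int s_i, int s_total, int m)
{
    return (int)((long long)s_i * m / s_total);
}
```

For example, with 62 free frames and two processes of 10 and 127 pages (total 137), they receive 10/137 * 62 = 4 and 127/137 * 62 = 57 frames respectively.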

Global vs. Local Allocation Global replacement: a process selects a replacement frame from the set of all frames; one process can take a frame from another. Local replacement: each process selects from only its own set of allocated frames 81

Roadmap of Chapter 4 Main Memory Background Contiguous memory relocation and swapping Paging Segmentation Example: Intel Pentium Virtual Memory Background Demand paging Copy-on-Write Page replacement Allocation of frames Thrashing Memory-mapped files Allocating kernel memory Other issues and examples 82

Thrashing If a process does not have enough pages, the page-fault rate is very high. This leads to low CPU utilization; the operating system thinks it needs to increase the degree of multiprogramming, so another process is added to the system. Thrashing: a process is busy swapping pages in and out 83

CPU Utilization Against the Degree of Multiprogramming (figure: CPU utilization rises with the degree of multiprogramming until thrashing sets in, then drops sharply) 84

Demand Paging and Thrashing Why does demand paging work? The locality model: a process migrates from one locality to another, and localities may overlap. Why does thrashing occur? Σ (size of localities) > total memory size 85

Page-Fault Frequency
- Establish an acceptable page-fault rate (an upper and a lower bound)
- If the actual rate is too low, the process loses a frame
- If the actual rate is too high, the process gains a frame
(figure: page-fault rate plotted against number of frames; increase the number of frames above the upper bound, decrease it below the lower bound) 86

Roadmap of Chapter 4 Main Memory Background Contiguous memory relocation and swapping Paging Segmentation Example: Intel Pentium Virtual Memory Background Demand paging Copy-on-Write Page replacement Allocation of frames Thrashing Memory-mapped files Allocating kernel memory Other issues and examples 87

Memory-Mapped Files
- Memory-mapped file I/O allows file I/O to be treated as routine memory access by mapping a disk block to a page in memory
- Basic mechanism: the file is initially read using demand paging; a page-sized portion of the file is read from the file system into a physical page; subsequent reads and writes of the file are treated as ordinary memory accesses
- Simplifies file access by performing file I/O through memory rather than through read() and write() system calls
- Also allows several processes to map the same file, letting the pages in memory be shared 88
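On POSIX systems this mechanism is exposed through mmap(). A minimal sketch, assuming a POSIX environment; the file path and the reduced error handling are illustrative:

```c
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map a small file, modify it through a plain memory store instead of
 * a write() call, and verify the change through ordinary file I/O.
 * Returns 0 on success, -1 on any failure. */
int mmap_demo(const char *path)
{
    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0) return -1;
    if (write(fd, "hello", 5) != 5) { close(fd); return -1; }

    char *p = mmap(NULL, 5, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { close(fd); return -1; }

    p[0] = 'H';                 /* a memory store, not a write() call */
    msync(p, 5, MS_SYNC);       /* push the dirty page back to the file */

    char buf[6] = {0};
    pread(fd, buf, 5, 0);       /* read back through normal file I/O */
    munmap(p, 5);
    close(fd);
    return strcmp(buf, "Hello") == 0 ? 0 : -1;
}
```

With MAP_SHARED, two processes mapping the same file see each other's stores through the shared physical pages, which is the sharing benefit the slide mentions.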

Memory-Mapped Files (figure: processes A and B map the same six-page disk file into their virtual memories; the file's pages are shared in physical memory) 89

Memory-Mapped Shared Memory in the Win32 API (figure: two processes establish shared memory by mapping the same memory-mapped file) 90

Memory-Mapped I/O 91

Roadmap of Chapter 4 Main Memory Background Contiguous memory relocation and swapping Paging Segmentation Example: Intel Pentium Virtual Memory Background Demand paging Copy-on-Write Page replacement Allocation of frames Thrashing Memory-mapped files Allocating kernel memory Other issues and examples 92

Allocating Kernel Memory
- Treated differently from user memory; often allocated from a free-memory pool
- The kernel requests memory for structures of varying sizes, and some kernel memory needs to be contiguous
- Strategies for managing free memory assigned to kernel processes: buddy system allocation and slab allocation 93

Buddy System Allocator
- Allocates memory from a fixed-size segment consisting of physically contiguous pages
- Memory is allocated using a power-of-2 allocator: requests are satisfied in units sized as a power of 2, and each request is rounded up to the next highest power of 2
- When a smaller allocation is needed than is available, the current chunk is split into two buddies of the next-lower power of 2; this continues until an appropriately sized chunk is available
(figure: a 256 KB segment split into 128 KB, 64 KB, and 32 KB buddies) 94
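The rounding and splitting steps can be sketched in C (the function names are illustrative, and sizes are in bytes):

```c
/* Round a request up to the next power of two, as a buddy allocator
 * does before searching its free lists. */
unsigned long buddy_round_up(unsigned long n)
{
    unsigned long size = 1;
    while (size < n)
        size <<= 1;
    return size;
}

/* How many times a segment must be halved to reach a chunk that
 * serves the request (each split produces two buddies). */
int buddy_splits(unsigned long segment, unsigned long request)
{
    unsigned long chunk = buddy_round_up(request);
    int splits = 0;
    while (segment > chunk) {
        segment >>= 1;       /* split into two buddies, keep one half */
        splits++;
    }
    return splits;
}
```

For example, a 21 KB request rounds up to 32 KB; serving it from a 256 KB segment takes 3 splits (256 KB to 128 KB to 64 KB to 32 KB), matching the figure on the slide.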

Slab Allocator
- Alternative strategy: a slab is one or more physically contiguous pages; a cache consists of one or more slabs
- There is a single cache for each unique kernel data structure, filled with objects — instantiations of that data structure
- When a cache is created, it is filled with objects marked as free; when structures are stored, objects are marked as used
- If a slab is full of used objects, the next object is allocated from an empty slab; if there are no empty slabs, a new slab is allocated
- Benefits include no fragmentation and fast satisfaction of memory requests
(figure: kernel object caches, e.g. for 3 KB and 7 KB objects, backed by slabs of physically contiguous pages) 95

Roadmap of Chapter 4 Main Memory Background Contiguous memory relocation and swapping Paging Segmentation Example: Intel Pentium Virtual Memory Background Demand paging Copy-on-Write Page replacement Allocation of frames Thrashing Memory-mapped files Allocating kernel memory Other issues and examples 96

Other Issues: Prepaging
- Goal: to reduce the large number of page faults that occur at process startup
- Prepage all or some of the pages a process will need, before they are referenced
- But if prepaged pages are unused, I/O and memory are wasted
- Assume s pages are prepaged and a fraction α of them is used: is the cost of saving s·α page faults greater or less than the cost of prepaging s·(1−α) unnecessary pages? For α near zero, prepaging loses 97
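The trade-off from the slide can be put into code. A sketch assuming, for simplicity, that one avoided page fault costs the same as one unnecessary page-in (real costs differ, which shifts the break-even point away from α = 0.5):

```c
/* Prepaging s pages of which fraction alpha is used saves s*alpha
 * fault-service costs but wastes s*(1-alpha) page-ins.  Under the
 * equal-cost simplification, prepaging wins exactly when alpha > 0.5. */
int prepaging_wins(int s, double alpha)
{
    double saved  = s * alpha;          /* faults avoided */
    double wasted = s * (1.0 - alpha);  /* useless page-ins */
    return saved > wasted;
}
```

This makes the slide's conclusion concrete: for α near zero almost every prepaged page is wasted, so prepaging loses.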

Other Issues: Page Size Page size selection must take into consideration: fragmentation, page table size, I/O overhead, and locality 98

Other Issues: TLB Reach
- TLB reach: the amount of memory accessible from the TLB; TLB Reach = (TLB Size) × (Page Size)
- Ideally, the working set of each process is stored in the TLB; otherwise there is a high rate of TLB misses
- Increase the page size: this may lead to an increase in fragmentation, as not all applications require a large page size
- Provide multiple page sizes: this allows applications that require larger page sizes to use them without an increase in fragmentation 99
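The reach formula is simple enough to compute directly (entry count and page sizes below are example figures, not any particular CPU's):

```c
/* TLB reach = (number of TLB entries) x (page size), in bytes. */
unsigned long tlb_reach(unsigned long entries, unsigned long page_size)
{
    return entries * page_size;
}
```

For instance, a 64-entry TLB with 4 KB pages reaches 64 × 4 KB = 256 KB of memory; the same TLB with 4 MB pages reaches 256 MB, which is why larger or multiple page sizes extend TLB reach.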

Other Issues: Program Structure
int data[128][128]; — each row is stored in one page
Program 1 (column order):
for (j = 0; j < 128; j++)
    for (i = 0; i < 128; i++)
        data[i][j] = 0;
128 x 128 = 16,384 page faults
Program 2 (row order):
for (i = 0; i < 128; i++)
    for (j = 0; j < 128; j++)
        data[i][j] = 0;
128 page faults 100

Other Issues: I/O Interlock
- Pages must sometimes be locked into memory
- Example: pages used for copying a file from a device must be locked against selection for eviction by a page replacement algorithm
(figure: an I/O buffer in memory connected to a disk drive — the reason why frames used for I/O must stay in memory) 101

Example: Windows XP
- Uses demand paging with clustering; clustering brings in pages surrounding the faulting page
- Processes are assigned a working set minimum and a working set maximum
- The working set minimum is the minimum number of pages the process is guaranteed to have in memory; a process may be assigned as many pages as it needs, up to its working set maximum
- When the amount of free memory in the system falls below a threshold, automatic working set trimming is performed to restore the amount of free memory; working set trimming removes pages from processes that hold pages in excess of their working set minimum 102

Example: Solaris
- Maintains a list of free pages to assign to faulting processes
- Lotsfree: threshold parameter (amount of free memory) at which to begin paging
- Desfree: threshold parameter at which to increase paging
- Minfree: threshold parameter at which to begin swapping
- Paging is performed by the pageout process, which scans pages using a modified clock algorithm
- Scanrate is the rate at which pages are scanned; it ranges from slowscan to fastscan
- Pageout is called more frequently depending on the amount of free memory available 103

Literature
Silberschatz, Galvin, Gagne: Operating System Concepts, 8th Edition, John Wiley and Sons, 2009
Andrew Tanenbaum: Modern Operating Systems 104