Memory management: requirements; terms; relocation (program loading); protection; sharing; logical organization; physical organization


Requirements
- Relocation: ability to change the position of a process image in memory.
- Protection: ability to prevent unwanted memory accesses.
- Sharing: ability to share portions of memory among processes.
- Logical organization: structure of the process image.
- Physical organization: structure of the physical memory hierarchy.

Terms
- Frame: a fixed-length block of main memory.
- Page: a fixed-length block of data that resides in secondary memory. A page may temporarily be copied into a frame of main memory.
- Segment: a variable-length block of data that resides in secondary memory.
- Segmentation: a whole segment may temporarily be copied into an available region of main memory.
- Combined segmentation and paging: a segment may be divided into pages, which can be individually copied into main memory.

Relocation: program loading
sources -> (compiler: compiling/assembling) -> .o files -> (linker: linking) -> binary executable -> (loader: loading) -> process image in memory, at a load address

Relocation: addressing requirements
A process image contains code, data and a stack, laid out from the load address upward (increasing address values). The references that must be resolved are branches and function calls in the code, data references, and the stack pointer.

Relocation: binding
Binding: translation of references to memory addresses, e.g.
    call strlen     ->  call 0x1084700
    mov R0, errno   ->  mov R0, 0x2007100

Static binding
- Compile time: absolute code. The binary executable contains absolute addresses; the load address must be known at compile time; if the load address changes, the program must be recompiled.
- Load time: relocatable code. The binary executable contains relative addresses, e.g. offsets from the program counter (PC+0x10fe); the loader translates relative addresses to absolute addresses (static relocation).

Dynamic binding
- Run time: relocatable code. The binary executable contains relative addresses; the CPU manages the relative addresses, so the process image can be moved during execution (dynamic relocation).

Relocation: binding examples
- Compile-time binding: program.o and module.o are linked into an executable with a fixed load address of 0x3070000; main sits at 0x3070000, the call to myfunc at 0x3072000 is bound to call 0x307f000, and myfunc sits at 0x307f000. The process image must be loaded exactly at that address.
- Load-time binding: the executable keeps relative addresses (main at start, the call to myfunc at start+0x1100 encoded as call +0xdf00, myfunc at start+0xf000). With load address 0x2000000, the loader places main at 0x2000000 and rewrites the call at 0x2001100 into call 0x200f000, where myfunc now resides.
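The load-time (static relocation) step can be sketched in a few lines of Python. The list-of-pairs "binary format" is a hypothetical stand-in, just enough to show the loader adding the load address to each relative reference:

```python
def load(relocatable, load_address):
    """Static relocation sketch: at load time the loader rewrites each
    relative address in the binary into an absolute one.
    `relocatable` is a hypothetical list of (opcode, relative_target) pairs."""
    return [(op, load_address + rel) for op, rel in relocatable]

# The slide's example: myfunc linked at start+0xf000, image loaded at 0x2000000.
image = load([("call", 0xf000)], 0x2000000)
# the call is now bound to the absolute address 0x200f000
```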

Relocation: binding (run time)
With run-time binding, the loaded process image keeps its relative addresses: the call at 0x2001100 remains call +0xdf00 in the process image loaded at 0x2000000, and the CPU resolves it to 0x200f000 (myfunc) at execution, so the image can be moved while the process runs.

Linking
- Static: modules and libraries are linked into an executable binary.
- Dynamic: modules and static libraries are linked into an executable binary, while dynamic libraries are inserted only as a reference and loaded at load time, or at run time when needed.

Loading
- Static: the whole executable is loaded into memory.
- Dynamic: code is loaded only when needed.

Memory partitioning: fixed partitioning
Main memory is divided into a number of static partitions:
1. same size for all partitions, or
2. different partition sizes.
A process may be loaded into a partition of equal or greater size.
- Simple to implement; little operating-system overhead.
- Inefficient use of memory due to fragmentation: internal (space wasted inside a partition) and external (the available partitions cannot accommodate processes).
- The maximum number of active processes is fixed.

Fixed partitioning: examples
- Equal-size partitions (all partitions of the same size) vs. unequal-size partitions (e.g. 2 MB, 4 MB, 6 MB, 12 MB, 16 MB).

Fixed partitioning: process assignment
- A queue for each partition: new processes (long-term scheduling) are assigned to the smallest partition that is big enough. Non-optimal: queue i may be non-empty while queue i+1 is empty.
- A unique queue (which policy?):
  - first-come first-served (FCFS): simple;
  - best-fit: choose the smallest big-enough partition; non-optimal memory usage;
  - best-available-fit: choose the first job that fits into a currently available partition.

Memory partitioning: dynamic partitioning
- Partitions are created dynamically; each partition exactly matches the size of its process.
- No internal fragmentation; efficient use of main memory.
- Need for compaction, to avoid external fragmentation.

Dynamic partitioning: example
Processes are loaded into dynamically created partitions (e.g. process 1: 16 MB, process 3: 6 MB, process 5: 12 MB). As processes terminate and new ones arrive (e.g. process 6: 20 MB), holes of various sizes remain between allocations, including small residual fragments (e.g. 2 MB).

Dynamic partitioning: process assignment
- Best-fit: choose the hole with size closest to the request; tends to leave small residual fragments.
- First-fit: choose the first hole large enough.
- Next-fit: similar to first-fit, but start searching from the last allocation.

Relocation: memory addresses
- Logical (or virtual) address: the address generated by the CPU.
- Physical address: the address sent to physical memory.

Relocation: MMU
The MMU (memory management unit), usually a component of the CPU, translates logical addresses into physical ones: each logical address is checked against the limit register (if logical address >= limit, an error trap to the OS is raised) and added to the base register to form the physical address.
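The three placement policies can be sketched as functions over a free list. Representing holes as (start, size) pairs is an illustrative assumption, not how a real allocator stores them:

```python
def first_fit(free_list, request):
    """Return the index of the first hole large enough, or None."""
    for i, (_, size) in enumerate(free_list):
        if size >= request:
            return i
    return None

def best_fit(free_list, request):
    """Return the index of the smallest hole large enough, or None."""
    best = None
    for i, (_, size) in enumerate(free_list):
        if size >= request and (best is None or size < free_list[best][1]):
            best = i
    return best

def next_fit(free_list, request, last):
    """Like first-fit, but start scanning from the last allocation."""
    n = len(free_list)
    for k in range(n):
        i = (last + k) % n
        if free_list[i][1] >= request:
            return i
    return None
```

For example, with holes of 16, 6 and 12 MB, a 6 MB request goes to the 16 MB hole under first-fit but to the exact 6 MB hole under best-fit.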

Memory partitioning: buddy system
- Memory is a set of blocks of size S_i = 2^k, with L <= k <= U; usually 2^U = memory size.
- Request of size r: look for the best-fit block (size 2^u):
  1. if 2^(u-1) < r <= 2^u, allocate the block;
  2. else split the block into two sub-blocks ("buddies") of size 2^(u-1) each;
  3. select one of the new blocks and repeat from step 1.
- Release of a block of size 2^k: if two adjacent buddy blocks of the same size are free, merge them.

Buddy system: example evolution (1 MB of memory)
- request 100 KB: A=128 KB | 128 KB | 256 KB | 512 KB
- request 240 KB: A=128 KB | 128 KB | B=256 KB | 512 KB
- request 64 KB:  A=128 KB | C=64 KB | 64 KB | B=256 KB | 512 KB
- request 256 KB: A=128 KB | C=64 KB | 64 KB | B=256 KB | D=256 KB | 256 KB
- release B:      A=128 KB | C=64 KB | 64 KB | 256 KB | D=256 KB | 256 KB
- release A:      128 KB | C=64 KB | 64 KB | 256 KB | D=256 KB | 256 KB
- request 75 KB:  E=128 KB | C=64 KB | 64 KB | 256 KB | D=256 KB | 256 KB
- release C:      E=128 KB | 128 KB | 256 KB | D=256 KB | 256 KB
- release E:      512 KB | D=256 KB | 256 KB
- release D:      1 MB

Buddy system: tree view
The block structure can be seen as a binary tree: the 1 MB root splits into 512 KB children, and so on down to the 128 KB and 64 KB leaves (e.g. A=128 KB, C=64 KB, its free 64 KB buddy, D=256 KB). If two buddies are both leaf nodes, at least one of them is allocated; otherwise they are merged.

Memory partitioning: segmentation
- Process images are divided into a number of segments; the splitting is handled by the programmer (or the compiler).
- Load process: load segments into dynamic partitions; segments may be non-contiguous.
- No internal fragmentation; improved memory utilization; but external fragmentation.
- Requires logical-to-physical address translation; supports protection and sharing.
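A minimal sketch of the buddy system, assuming sizes and addresses in KB and 1 MB of total memory as in the example; the class and its free-list representation are illustrative choices:

```python
class BuddyAllocator:
    """Buddy-system sketch: addresses and sizes in KB, total memory a
    power of two (default 1024 KB = 1 MB)."""
    def __init__(self, total=1024):
        self.total = total
        self.free = {total: [0]}      # block size -> list of free addresses

    def alloc(self, request):
        size = 1                      # smallest power of two >= request
        while size < request:
            size *= 2
        s = size                      # find the smallest free block >= size
        while s <= self.total and not self.free.get(s):
            s *= 2
        if s > self.total:
            return None               # out of memory
        addr = self.free[s].pop()
        while s > size:               # split down, keeping upper buddies free
            s //= 2
            self.free.setdefault(s, []).append(addr + s)
        return addr, size

    def release(self, addr, size):
        while size < self.total:      # merge with the buddy while it is free
            buddy = addr ^ size       # the buddy address differs in one bit
            if buddy in self.free.get(size, []):
                self.free[size].remove(buddy)
                addr = min(addr, buddy)
                size *= 2
            else:
                break
        self.free.setdefault(size, []).append(addr)
```

Replaying the example: a 100 KB request yields the 128 KB block A at address 0, and a following 240 KB request yields the 256 KB block B.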

Segmentation: example
A process image (logical address space) with segments Code-1 (main program), Static, Code-2 (library), Dynamic (heap) and Stack is loaded into physical memory with the segments placed in non-contiguous partitions; the OS maintains the list of free partitions.

Segmentation: address translation
- Logical address: <segment, offset>, translated through the process's segment table.
- Each segment-table entry holds the segment base, the segment limit and other info.
- The offset is checked against the limit and added to the base to obtain the physical address.

Segmentation: protection and sharing
- A process cannot access a segment not pointed to by its segment table.
- Access to data/code segments can be restricted to valid operations.
- Shared segments have identical entries in the segment tables of different processes (e.g. processes A and B each have a private Code segment but share the Sdata and LCode library segments).

Segmentation: segment table example (IA-32)
The segment table holds 64-bit segment descriptors. Each descriptor packs:
- BASE (bits 31:24 and 23:16 of the high word, 15:0 of the low word): base address
- LIMIT (bits 19:16 of the high word, 15:0 of the low word): segment size (bytes if G=0, 4 KB pages if G=1)
- G: granularity
- D/B: default operation size (16 bit if 0, 32 bit if 1)
- L: 64-bit code segment
- AVL: available for system software
- P: present
- DPL: descriptor privilege level
- S: descriptor type (0 = system; 1 = code or data)
- TYPE: segment type (data, code, stack, read/write, ...)
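The base+limit translation above can be sketched as follows; the dict-based segment table is an illustrative stand-in for the hardware structure:

```python
def translate_segmented(logical, segment_table):
    """Translate a <segment, offset> logical address to a physical one.
    `segment_table` maps segment# -> (base, limit)."""
    seg, offset = logical
    base, limit = segment_table[seg]
    if offset >= limit:
        # offset beyond the segment limit: trap to the OS
        raise MemoryError("segmentation violation")
    return base + offset

# e.g. a segment of 0x2000 bytes based at 0x3070000:
table = {0: (0x3070000, 0x2000)}
```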

Memory partitioning: paging
- Partition main memory into equal-size frames; divide process images into pages of the same length as frames.
- Load process: load pages into available frames; frames may be non-contiguous; not visible to the programmer.
- No external fragmentation; a small amount of internal fragmentation: only the last page of a process can suffer from it (average wasted space = page-size/2).
- Requires logical-to-physical address translation.
- The OS keeps the list of free frames (linked list or bitmap).

Paging: example
The contiguous pages of processes A and B map to non-contiguous, out-of-order frames of main memory.

Paging: address translation
- Logical address: <page#, offset>. With an offset of k bits, page-size = 2^k; with n-bit addresses, the maximum number of pages usable by a process is 2^(n-k).
- Translated through the process's page table: the page# indexes an entry containing the frame base and other info (valid/non-valid, read-only/read-write/executable, required privilege level, ...); the physical address is <frame#, offset>.

Paging: page table implementation
- In registers: fast, but this limits the table size and adds context-switch overhead (more registers to save).
- In memory: only the table pointer (and optionally the table length) is kept in a register, so context-switch overhead is small; but each memory access requires additional memory accesses (to the page table), which is slow.
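The page#/offset split translates directly into bit operations. This sketch assumes 4 KB pages (k = 12) and models the page table as a dict from page# to frame#:

```python
PAGE_BITS = 12                        # assumed: 4 KB pages (page-size = 2^12)
PAGE_SIZE = 1 << PAGE_BITS

def translate_paged(logical, page_table):
    """Split a logical address into <page#, offset> and translate the
    page# through the page table to obtain <frame#, offset>."""
    page = logical >> PAGE_BITS
    offset = logical & (PAGE_SIZE - 1)
    frame = page_table[page]          # a missing key models an invalid page
    return (frame << PAGE_BITS) | offset
```

For instance, with page 1 mapped to frame 2, logical address 0x1ABC translates to physical address 0x2ABC: same offset, different upper bits.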

Paging: translation look-aside buffer (TLB)
- Leverages the locality principle: a small cache holds some page-table entries, while the whole page table stays in memory.
- Same tradeoffs as instruction/data caches: miss rate, miss penalty, size, cost.
- Context switches cause a TLB flush.
- Translation: the page# is looked up in the TLB; on a hit the frame# is returned directly, otherwise the page table is accessed and the entry is cached.

Paging: protection and sharing
Similarly to segmentation:
- a process cannot access a page not pointed to by its page table;
- access to data/code pages can be restricted to valid operations;
- shared pages have identical entries in the page tables of different processes (e.g. a frame S shared by processes A and B).

Paging: multilevel page tables
- With 32- or 64-bit addresses, a flat page table has too many entries.
- A multilevel page table stores only the entries actually needed (a "paged" page table).

Paging: inverted page table
- A single page table for the whole system (no longer one per process), with one entry for each physical frame: logical page address, owning process, information bits.
- Several logical pages can map onto the same physical frame.
- A linear search is not feasible (search time O(n)), so it is implemented as a hash table: ideal search time O(1); shared pages are handled as collisions.
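A TLB is hardware, but its hit/miss behaviour and the flush on context switch can be modelled in a few lines. The FIFO eviction policy and the capacity are illustrative assumptions:

```python
from collections import OrderedDict

class TLB:
    """Tiny TLB model: a fixed-size cache of page# -> frame# entries
    with FIFO eviction."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()
        self.hits = self.misses = 0

    def lookup(self, page, page_table):
        if page in self.entries:
            self.hits += 1
            return self.entries[page]
        self.misses += 1                      # TLB miss: walk the page table
        frame = page_table[page]
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict the oldest entry
        self.entries[page] = frame
        return frame

    def flush(self):                          # on context switch
        self.entries.clear()
```

A small trace shows the cost of capacity misses: with capacity 2, touching pages 0, 0, 1, 2 gives one hit and three misses, and page 0 is evicted by the third distinct page.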

Paging: inverted page table
- Translation of <pid, page#, offset>: search the inverted table for the entry matching (pid, page#); the index f of that entry is the frame number, giving the physical address <f, offset>.
- Hash implementation: (pid, page#) is hashed to a table entry; collisions are chained, and the search follows the chain until the matching (pid, page#) is found.

Memory partitioning: segmentation with paging
A combined approach: segments are divided into pages. The segment table contains either
- the logical linear base address of each segment, with one page table for the whole process (IA-32), or
- a pointer to a page table, with one page table for each segment (MULTICS).

Segmentation and paging example: IA-32
A 16-bit segment selector (held in a segment register) selects a segment descriptor (access rights, limit, base address) from the segment descriptor table. Adding the base to the 32-bit offset of the logical address gives a 32-bit linear address <dir, page, offset>, which paging then translates into the physical address.

Virtual memory: terms
- Virtual memory: a storage allocation scheme in which secondary memory can be addressed as though it were part of main memory. The size of virtual storage is limited by the architecture's addressing scheme and by the amount of secondary memory available, not by the actual size of main memory.
- Virtual address: the address assigned to a location in virtual memory; that location is accessed as if it were part of main memory.
- Virtual address space: the virtual storage assigned to a process.
- Address space: the range of memory addresses available to a process.
- Real address: the address of a location in main memory.

Virtual memory: keys
- All memory references within a process are logical addresses, dynamically translated into physical addresses at run time; since the mapping can change, a process may be swapped in and out of main memory and occupy different regions at different times.
- A process may be broken up into a number of pieces: contiguous allocation is not needed (paging and/or segmentation).
- Not all portions of a process need be in main memory during execution: portions can stay in secondary memory and be fetched when needed, and portions in main memory can be moved to secondary memory when space is needed (swapping of process portions between main memory and secondary memory).

Locality and virtual memory
- Programs tend to use only a small subset of addresses in a time interval, so not all portions of the process image are needed at the same time.
- The currently used portions form the working set W; the working-set size evolves through stable phases separated by transients.

Demand paging
- Load a page into main memory only when needed; when the CPU tries to access a page not in main memory, a page fault occurs.
- A valid bit in the page table signals whether a page is in main memory.
- Fast process start; reduced memory usage allows a higher multiprogramming degree.

Page fault rate: effective access time
EAT = (1 - p) * t_mem + p * t_fault
- p: page fault probability
- t_mem: access time for a frame (typ. 10 ns), including the accesses to the TLB, to the page table and to memory
- t_fault: time for handling a page fault (typ. 1 ms)
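The effective access time EAT = (1 - p) * t_mem + p * t_fault translates directly into code, using the typical values of 10 ns per frame access and 1 ms per fault:

```python
def effective_access_time(p, t_mem=10e-9, t_fault=1e-3):
    """EAT = (1 - p) * t_mem + p * t_fault, in seconds.
    Defaults: t_mem = 10 ns, t_fault = 1 ms (typical values)."""
    return (1 - p) * t_mem + p * t_fault

# Even a fault probability of 1/10,000 dominates the cost:
# EAT(1e-4) is about 110 ns, i.e. 11x slower than the 10 ns frame access.
```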

Virtual memory access (HW actions)
1. The CPU checks the TLB; if the translation is found, the physical address is generated.
2. Otherwise the page table is accessed: if the page is in main memory, the TLB is updated and the physical address is generated.
3. If the page is not in main memory, a trap jumps to an OS routine (page fault).

Page loading (SW actions)
- If there is a free frame: read the page from disk and load it into the free frame.
- Else (memory full): page replacement. Select a victim frame; if it was modified, first write it back to secondary memory, then load the new page. A bit in the page table (dirty bit) records whether a frame was modified.
- Update the page table (and the TLB).

Page replacement policies
- Optimal: minimizes page faults, but is not realizable, since it requires information on future accesses.
- FIFO
- Least recently used (LRU)
- Not recently used (NRU, or "clock")
- Second chance

Page replacement: FIFO
- The first page loaded is the first swapped out.
- Does not take into account the page rank (rank: access rate).
- Suffers from the Belady anomaly: increasing the frames allocated to a process can increase its page faults. E.g., for the access sequence 0 1 2 3 0 1 4 0 1 2 3 4: 3 frames give 9 page faults, while 4 frames give 10.
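The FIFO fault counts in the Belady example can be checked with a short simulation; a plain list sketches the set of resident frames:

```python
def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement with `frames` frames."""
    mem, faults = [], 0
    for page in refs:
        if page not in mem:
            faults += 1
            if len(mem) >= frames:
                mem.pop(0)            # evict the first page loaded
            mem.append(page)
    return faults

refs = [0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4]
# Belady anomaly: 3 frames -> 9 faults, but 4 frames -> 10 faults
```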

Page replacement: LRU
- Select the page not accessed for the longest time.
- Very expensive to implement exactly: tag pages with the access time, or keep a stack with the access sequence.
- In practice, use an approximation (NRU: not recently used).

Page replacement: NRU (clock)
- Add a bit (used bit) to each page-table entry; set it when the page is loaded or an access occurs.
- For replacement, scan the page-table entries in round-robin fashion and select a frame whose used bit is unset.
- To avoid victimizing the same page too often, reset used bits; variants: reset the used bits while searching, reset all used bits if a search fails, reset all used bits after a frame is selected, reset all used bits on a periodic schedule.

Page replacement: second chance
Similar to NRU, but the dirty bit is also considered, privileging dirty frames (they are victimized last).
- Algo 1 (used bits are cleared on a periodic schedule, e.g. by a timer interrupt): the first search looks for used=0, dirty=0; the second for used=0, dirty=1; the third for used=1, dirty=0; the fourth for used=1, dirty=1.
- Algo 2: the first search looks for used=0, dirty=0; the second for used=0, dirty=1, clearing every used bit checked; the third for dirty=0 (all used bits are now 0); the fourth for dirty=1.

Page replacement: comparison
[Figure: page faults per 1000 references vs. number of allocated frames, comparing the algorithms.]
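LRU and the clock (NRU) approximation can be simulated the same way as FIFO; the list/array representations are sketches, not how the hardware or OS stores this state:

```python
def lru_faults(refs, frames):
    """Count page faults under LRU: evict the page unused the longest."""
    mem, faults = [], 0
    for page in refs:
        if page in mem:
            mem.remove(page)          # re-append below: most recently used
        else:
            faults += 1
            if len(mem) >= frames:
                mem.pop(0)            # front of list = least recently used
        mem.append(page)
    return faults

def clock_faults(refs, frames):
    """Count page faults under the clock (NRU) policy with a used bit."""
    mem = [None] * frames
    used = [False] * frames
    hand, faults = 0, 0
    for page in refs:
        if page in mem:
            used[mem.index(page)] = True
            continue
        faults += 1
        while used[hand]:             # clear used bits while searching
            used[hand] = False
            hand = (hand + 1) % frames
        mem[hand] = page
        used[hand] = True
        hand = (hand + 1) % frames
    return faults
```

On the Belady sequence, LRU gives 10 faults with 3 frames and 8 with 4: more frames never hurt it, unlike FIFO.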

Frame allocation
- Fixed: a process has a fixed number of frames allocated. Minimum number of frames: 1 + the maximum number of addresses accessible by a single instruction.
- Variable: a process's frames can change over time.
Replacement scope:
- local: when a process needs a new page, free a frame of that same process;
- global: search frames globally (not possible with fixed allocation).

Frame allocation: fixed
- Equal: each process has the same number of frames.
- Size-based: a process has a number of frames proportional to its image size.
- Other: priority-based, ...

Frame allocation: variable
- Working-set based: WS(t, Δ) = pages accessed in [t-Δ, t]; W = size of WS. A Δ too small is not significant; a Δ too big covers too many pages.
- Page-fault based: measure the process's page fault rate; below a threshold, release frames; above a threshold, increase the number of frames allocated.

Frame allocation: thrashing
Too few frames allocated to a process cause too many page faults, and execution time becomes bounded by swapping. Thrashing can also occur as a result of wrong feedback:
process i has too few frames → page faults → process i is suspended → the CPU is less used → a new process is executed and allocated frames → the new process removes frames from other processes → other processes can suffer thrashing → throughput falls → the whole system is busy only swapping pages.
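The working-set definition WS(t, Δ) maps directly onto a reference string, with list indices standing for time:

```python
def working_set(refs, t, delta):
    """WS(t, delta): the set of distinct pages referenced in the
    time window [t - delta, t] of the reference string `refs`."""
    return set(refs[max(0, t - delta): t + 1])

# W, the working-set size, is then simply len(working_set(refs, t, delta)).
```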

Frame allocation: thrashing
[Figure: CPU usage vs. multiprogramming degree; beyond a certain degree, adding processes makes CPU usage fall (thrashing).]

Page size
- Small pages: little internal fragmentation, but many entries in the page tables and more I/O overhead.

Virtual memory: other considerations
- Program structure impact: scanning an array by rows follows the page layout, while scanning it by columns can cause more page faults.
- Locked frames: some frames must be kept in main memory, e.g. the kernel (at least a portion of it) and security-critical data.