Memory management. Knut Omang Ifi/Oracle 10 Oct, 2012

Memory management Knut Omang Ifi/Oracle 10 Oct, 2012 (with slides from V. Goebel, C. Griwodz (Ifi/UiO), P. Halvorsen (Ifi/UiO), K. Li (Princeton), A. Tanenbaum (VU Amsterdam), and M. van Steen (VU Amsterdam))

Today: basic memory management; swapping; the page as a memory unit; segmentation; virtual memory; implementation of page/segment allocation; paging; page replacement algorithms

Memory Management. Ideally programmers want memory that is large, fast, and non-volatile. Memory hierarchy: a small amount of fast, expensive cache memory; some medium-speed, medium-price main memory; gigabytes of slow, cheap disk storage. The memory manager handles the memory hierarchy.

Computer Hardware Review. Typical memory hierarchy (numbers are rough approximations): registers < 1 KB, cache 4 MB, main memory 2-64 GB, disk 1-2 TB.

Models for Memory Management. With no memory abstraction support, physical addresses are used directly; memory still needs to be organized.

No memory abstraction - implications. Only one program can easily be present at a time! If another program needs to run, the entire state must be saved to disk. No protection...

Multiprogramming. Processes have to wait for I/O. Goal: do other work while a process waits, i.e., give the CPU to another process. Several processes may be ready concurrently. So, assuming n processes, each waiting for I/O with probability p, the expected CPU utilization can be estimated as: CPU utilization = 1 - p^n.
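
As a quick illustration of this formula, here is a minimal sketch (assuming the slide's simplified model where I/O waits are independent across processes) that computes the expected utilization for a few degrees of multiprogramming:

    # Expected CPU utilization = 1 - p^n for n processes,
    # each waiting for I/O with independent probability p.
    def cpu_utilization(n, p):
        return 1.0 - p ** n

    for p in (0.2, 0.5, 0.8):        # I/O wait fraction per process
        for n in (1, 2, 4, 8):       # degree of multiprogramming
            print(f"p={p} n={n}: utilization = {cpu_utilization(n, p):.2f}")

With p = 0.8, going from 1 to 8 processes raises the estimated utilization from 0.20 to about 0.83, which is the argument for multiprogramming made on the next slide.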

Multiprogramming. Degree of multiprogramming: figure showing CPU utilization as a function of the number of processes in memory.

Multiprogramming. Several programs are concurrently loaded into memory, so the OS must arrange memory sharing (memory partitioning). Memory is needed for different tasks within a process and is shared among processes; a process's memory demand may change over time. Use of secondary storage: moving (parts of) blocking processes out of memory makes a higher degree of multiprogramming possible, and makes sense if processes block for long times.

Memory Management for Multiprogramming. A process may not be entirely in memory. Reasons: other processes use the memory (it is their turn, or they have higher priority) while the process is waiting for I/O; or the process is too big, either for its share or for the entire available memory. Approaches: swapping, paging, overlays (figure: these mechanisms move data between registers, cache(s), DRAM and disk, levels whose access times differ by orders of magnitude).

Memory Management for Multiprogramming. Swapping: remove a process from memory, with all of its state and data, and store it on a secondary medium (disk, flash RAM, other slow RAM, historically also tape). Paging: remove part of a process from memory and store it on a secondary medium; the sizes of such parts are fixed (the page size). Overlays: manually replace parts of code and data; the programmer's work rather than the OS's; only for very old and memory-scarce systems.

Memory Management Techniques - how to assign memory to processes. Memory partitioning: fixed partitioning, dynamic partitioning, simple paging, simple segmentation, virtual memory paging, virtual memory segmentation.

Fixed Partitioning. Divide memory into static partitions at system initialization time (boot or earlier). Advantages: very easy to implement; can support swapping processes in and out.

Fixed Partitioning. Two fixed partitioning schemes: equal-size partitions and unequal-size partitions. With equal-size partitions, big programs cannot be executed unless program parts are loaded from disk, while small programs still occupy an entire partition, a problem called internal fragmentation.

Fixed Partitioning. Two fixed partitioning schemes: equal-size partitions and unequal-size partitions. With unequal-size partitions, bigger programs can be loaded at once, and smaller programs can lead to less internal fragmentation. Realizing these advantages requires assignment of jobs to suitable partitions.

Fixed Partitioning. The approach has been used in mainframes, which use the term job for a running program; jobs run as batch jobs and are taken from a queue of pending jobs. Problem with unequal partitions: choosing a job for a partition.

Fixed Partitioning. One queue per partition: internal fragmentation is minimal, but jobs may wait although sufficiently large partitions are available.

Fixed Partitioning. Single queue: jobs are put into the next sufficiently large partition. Waiting time is reduced, but internal fragmentation is bigger. A swapping mechanism can reduce internal fragmentation by moving a job to another partition.

Problems: Relocation and Protection. We cannot be sure where a program will be loaded in memory, so the address locations of variables and code routines cannot be absolute, and we must keep a program out of other processes' partitions. Base and limit values are the simplest form of virtual memory (translation: virtual --> physical): address locations are added to the base value to map to a physical address, and an address location larger than the limit value is an error.

Two Registers: Base and Bound. Built into the Cray-1. A program can only access physical memory in [base, base+bound]: if the virtual address exceeds the bound, it is an error; otherwise the physical address is the virtual address plus the base. On a context switch: save/restore the base and bound registers. Pros: simple. Cons: fragmentation, hard to share, and difficult to use disks.
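
A minimal sketch of this base-and-bound check (the function and exception names are illustrative, not from the slides):

    class BoundError(Exception):
        pass

    def translate(vaddr, base, bound):
        # Reject addresses outside the program's range, otherwise relocate by base.
        if vaddr < 0 or vaddr >= bound:
            raise BoundError(f"virtual address {vaddr:#x} exceeds bound {bound:#x}")
        return base + vaddr

    # Example: a process loaded at physical address 0x40000 with a 16 KB bound.
    paddr = translate(0x123, base=0x40000, bound=0x4000)
    print(hex(paddr))   # 0x40123

On a context switch, the kernel simply reloads base and bound for the next process.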

Dynamic Partitioning. Divide memory dynamically: partitions are created for jobs when they arrive and removed after the jobs are finished. External fragmentation: the problem increases with system running time and occurs with swapping as well, and a process swapped back in may end up at a different address (in the figure, the addresses of process 2 changed). The solution to this address change is address translation.

Dynamic Partitioning. External fragmentation can be reduced by compaction, but compaction takes time and consumes processing resources. The need for compaction can itself be reduced by good placement algorithms.

Dynamic Partitioning: Placement Algorithms. Use the most suitable partition for a process. Typical algorithms: first fit, next fit, best fit (figure: example allocations for each).

Dynamic Partitioning: Placement Algorithms. Use the most suitable partition for a process. Typical algorithms: first fit, next fit, best fit (figure: a further allocation example comparing first fit and best fit).

Dynamic Partitioning: Placement Algorithms. Comparison of first fit, next fit and best fit (the example is of course artificial). First fit: simplest and fastest of the three, and typically also the best of the three. Next fit: typically slightly worse than first fit, with problems for large segments. Best fit: slowest, and it creates lots of small free blocks, so it is typically the worst.
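
To make the comparison concrete, here is a small sketch of first fit and best fit over a list of free holes (the data layout and hole sizes are invented for illustration):

    # Each hole is (start, size); both functions return the index of the chosen hole.
    def first_fit(holes, size):
        for i, (_, hole_size) in enumerate(holes):
            if hole_size >= size:
                return i                        # first hole that is large enough
        return None

    def best_fit(holes, size):
        best, best_size = None, None
        for i, (_, hole_size) in enumerate(holes):
            if hole_size >= size and (best_size is None or hole_size < best_size):
                best, best_size = i, hole_size  # smallest hole that still fits
        return best

    holes = [(0, 4), (10, 16), (30, 6), (40, 12)]    # sizes in MB
    print(first_fit(holes, 5), best_fit(holes, 5))   # -> 1 (the 16 MB hole) and 2 (the 6 MB hole)

Best fit picks the tighter 6 MB hole and leaves only a 1 MB fragment behind, which is exactly how it ends up creating many small, hard-to-use free blocks.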

Memory management: bookkeeping. Two main strategies: bitmaps, where a bit indicates whether an allocation unit is free or allocated, and linked lists, which keep list(s) of free/allocated segments.

Memory Management: bitmaps/lists. Figure: a part of memory with 5 processes and 3 holes; tick marks show allocation units and shaded regions are free; the corresponding bitmap; and the same information as a list.
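
A small sketch of the bitmap strategy (the allocation-unit granularity and helper names are assumptions): allocating k consecutive units means scanning for a run of k zero bits:

    def find_free_run(bitmap, k):
        # Return the index of the first run of k free (0) units, or None.
        run_start, run_len = 0, 0
        for i, bit in enumerate(bitmap):
            if bit == 0:
                if run_len == 0:
                    run_start = i
                run_len += 1
                if run_len == k:
                    return run_start
            else:
                run_len = 0
        return None

    def allocate(bitmap, k):
        start = find_free_run(bitmap, k)
        if start is not None:
            for i in range(start, start + k):
                bitmap[i] = 1                   # mark the units as allocated
        return start

    bitmap = [1, 1, 0, 0, 0, 1, 0, 0]           # 1 = allocated, 0 = free
    print(allocate(bitmap, 3))                  # -> 2

This linear scan is the main cost of bitmaps; the list representation makes merging adjacent holes easier.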

Buddy System. A mix of fixed and dynamic partitioning. Partitions have sizes 2^k, with L <= k <= U, and a list of holes is maintained per size. To assign a process: find the smallest k such that the process fits into 2^k and look for a hole of size 2^k; if none is available, split the smallest hole larger than 2^k, recursively into halves, until two holes of size 2^k exist.
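
A compact sketch of that splitting step, with free lists indexed by k (the data structure is an assumption, not from the slides):

    def buddy_alloc(free_lists, need, max_k):
        # free_lists[k] holds start addresses of free blocks of size 2**k.
        k = 0
        while (1 << k) < need:                 # smallest k with 2**k >= need
            k += 1
        j = k
        while j <= max_k and not free_lists.get(j):
            j += 1                             # smallest free block that is large enough
        if j > max_k:
            return None                        # out of memory
        block = free_lists[j].pop()
        while j > k:                           # split recursively into two buddies
            j -= 1
            free_lists.setdefault(j, []).append(block + (1 << j))   # upper half becomes a free buddy
        return block                           # lower half of size 2**k is allocated

    free_lists = {k: [] for k in range(5, 21)}       # 32 B .. 1 MB block sizes
    free_lists[20] = [0]                             # one free 1 MB block at address 0
    print(buddy_alloc(free_lists, need=100 * 1024, max_k=20))   # carves out a 128 KB block

Freeing is the mirror image: a released block is merged with its buddy whenever the buddy is also free, restoring larger power-of-two holes.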

Memory use within a process. Memory needs of known size: program code and global variables. Memory needs of unknown size: dynamically allocated memory and the stack (several stacks in multithreaded programs). Figure: process layout with PCB, program code, initialized global variables (data), uninitialized global variables, and the stack, possibly with stacks for more threads.

Memory Addressing. Addressing needs are determined during programming and must work independently of the program's position in memory; the actual physical addresses are not known. Leave enough room for growth (but not too much!).

Paging. Pages have equal lengths, determined by the processor. One page is moved into one memory frame, so a process is loaded into several frames, which need not be consecutive. There is no external fragmentation and little internal fragmentation (the amount depends on the frame size).

Paging. A virtual address is (VPage #, offset). A page table translates the virtual page number into a physical page number, and the physical address is (PPage #, offset); a virtual page number beyond the page table size is an error. Each page table entry also holds various bits. A context switch is handled similarly to the segmentation scheme (switch to the new process's table). What should the page size be? Pros: simple allocation, easy to share. Cons: big page table, and holes in the address space cannot be handled easily.
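
A minimal sketch of this per-address translation, assuming a flat page table and a 4 KB page size (both are illustrative assumptions, not values from the slides):

    PAGE_SIZE = 4096
    OFFSET_BITS = 12                            # log2(PAGE_SIZE)

    class PageFault(Exception):
        pass

    def translate(page_table, vaddr):
        vpage, offset = vaddr >> OFFSET_BITS, vaddr & (PAGE_SIZE - 1)
        if vpage >= len(page_table):
            raise PageFault(f"virtual page {vpage} outside page table")
        present, pframe = page_table[vpage]     # entry: (present bit, frame number)
        if not present:
            raise PageFault(f"virtual page {vpage} not in memory")
        return (pframe << OFFSET_BITS) | offset

    page_table = [(1, 7), (0, 0), (1, 3)]       # vpage 0 -> frame 7, vpage 2 -> frame 3
    print(hex(translate(page_table, 0x2010)))   # vpage 2, offset 0x10 -> 0x3010

In hardware the MMU performs this lookup on every reference; the "big page table" drawback is visible here, since a flat table needs one entry per possible virtual page.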

Segmentation. Segments have different lengths, determined by the programmer: the programmer (or compiler toolchain) organizes the program in parts and controls where they go, which requires awareness of possible segment size limits. Pros and cons: the principle is as in dynamic partitioning; there is no internal fragmentation, and less external fragmentation because the segments are on average smaller.

Segmentation. A virtual address is (segment #, offset). Keep a table of (segment base, size) entries: an offset larger than the segment size is an error; otherwise the physical address is the segment base plus the offset. Protection: each entry carries access rights (nil, read, write). On a context switch: save/restore the table, or a pointer to the table kept in kernel memory. Pros: efficient, easy to share. Cons: complex management, and fragmentation within a segment.
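
The corresponding sketch for segmentation; the exact entry layout (base, size, permission) is an assumption consistent with the description above:

    class SegmentationError(Exception):
        pass

    def seg_translate(seg_table, seg, offset, access="read"):
        if seg >= len(seg_table):
            raise SegmentationError(f"no segment {seg}")
        base, size, perm = seg_table[seg]        # perm is one of "nil", "read", "write"
        if offset >= size:
            raise SegmentationError(f"offset {offset:#x} beyond segment size {size:#x}")
        if perm == "nil" or (access == "write" and perm != "write"):
            raise SegmentationError(f"{access} not allowed on segment {seg}")
        return base + offset

    seg_table = [(0x8000, 0x1000, "read"), (0xA000, 0x4000, "write")]
    print(hex(seg_translate(seg_table, 1, 0x20, "write")))   # -> 0xa020

Sharing is easy because two processes can point a segment table entry at the same base, e.g. for a common code segment.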

Paging. Typical for paging and swapping: address translation at execution time, with processor support. Simple paging and segmentation, without virtual memory and protection, can instead be implemented by rewriting addresses at load time or by jump tables set up at load time. Figure: simplified address translation through a lookup table mapping (part, offset within part) for code parts 1 and 2.

Segmentation with paging and virtual address space. A virtual address is (Vseg #, VPage #, offset): the segment number selects a segment and is checked against the segment size (an error otherwise), the virtual page number indexes that segment's page table to obtain a physical page number, and the physical address is (PPage #, offset).

Other needs (protection). Protection of a process from itself (the stack growing into the heap) and protection of processes from each other (writes into another process's memory). The solution to protection is, again, address translation.

Why Address Translation and Virtual Memory? Use of secondary storage: extend expensive DRAM with reasonable performance. Protection: programs do not step over each other, and communicating with each other requires explicit IPC operations. Convenience: a flat address space, programs sharing the same view of the world, and programs or program parts that can be moved.

Page Replacement Algorithms. A page fault forces a choice: which page must be removed to make room for the incoming page? A modified page must first be saved; an unmodified page is simply overwritten. It is better not to choose an often-used page, since it will probably need to be brought back in soon.

Optimal Page Replacement Algorithm. Replace the page needed at the farthest point in the future. Optimal but unrealizable. It can be estimated by logging page use on previous runs of the process, although this is impractical.

Not Recently Used (NRU). Two status bits are associated with each page: R - page referenced (read or written), and M - page modified (written). Pages belong to one of four classes according to the status bits: Class 0: not referenced, not modified (R=0, M=0). Class 1: not referenced, modified (R=0, M=1). Class 2: referenced, not modified (R=1, M=0). Class 3: referenced, modified (R=1, M=1). NRU removes a page at random from the lowest-numbered, non-empty class. Low overhead.
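
The class computation and victim selection are simple enough to sketch directly (the page representation is assumed):

    import random

    def nru_victim(pages):
        # pages: list of (name, R, M); returns the name of the page to evict.
        classes = {0: [], 1: [], 2: [], 3: []}
        for name, r, m in pages:
            classes[2 * r + m].append(name)     # class number = 2*R + M
        for c in range(4):                      # lowest-numbered non-empty class
            if classes[c]:
                return random.choice(classes[c])
        return None

    pages = [("A", 1, 1), ("B", 0, 1), ("C", 1, 0), ("D", 0, 1)]
    print(nru_victim(pages))                    # B or D: class 1, not referenced but modified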

FIFO Page Replacement Algorithm. Maintain a linked list of all pages in the order they came into memory; the page at the beginning of the list is replaced. Disadvantage: the page that has been in memory the longest may be a frequently used page.

Second Chance. A modification of FIFO. R bit: when a page is referenced again, its R bit is set and the page is treated like a newly loaded page. Worked example with reference string A B C D A E F G H I: pages are kept in a chain from first loaded to most recently loaded, and when A is referenced again its R-bit is set. Once the buffer is full, the next page fault results in a replacement. To insert page I, a page to evict is found by looking at the first page loaded: if its R-bit is 0, replace it; if its R-bit is 1, clear the R-bit, move the page to the end of the chain, and look at the new first page. Here page A's R-bit is 1, so A is moved last in the chain with its R-bit cleared, and the new first page (B) is examined; B's R-bit is 0, so B is paged out, the chain shifts left, and I is inserted last in the chain. Second chance is a reasonable algorithm, but inefficient because it keeps moving pages around the list.
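
A minimal sketch of exactly this list manipulation, using a deque as the chain (the page representation is assumed):

    from collections import deque

    def second_chance_evict(chain, r_bits):
        # chain: deque of page names, oldest first; r_bits: dict name -> 0/1.
        while True:
            oldest = chain[0]
            if r_bits[oldest] == 0:
                chain.popleft()                 # R-bit 0: evict this page
                return oldest
            r_bits[oldest] = 0                  # R-bit 1: clear it and give a second chance
            chain.rotate(-1)                    # move the page to the end of the chain

    chain = deque(["A", "B", "C", "D"])         # A loaded first
    r_bits = {"A": 1, "B": 0, "C": 0, "D": 1}
    print(second_chance_evict(chain, r_bits))   # A gets a second chance, B is evicted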

Clock. A more efficient way to implement Second Chance: keep a circular list in the form of a clock, with a pointer to the oldest page. If that page's R-bit is 0, replace it and advance the pointer; if the R-bit is 1, set it to 0 and advance the pointer, repeating until a page with R-bit 0 is found, then replace it and advance the pointer. (Figure: example with reference string A B C D A E F G H I.)
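
The same policy with the clock layout: the pages stay in place and only a pointer (the clock hand) advances (a sketch; the data layout is assumed):

    def clock_evict(frames, r_bits, hand):
        # frames: fixed circular list of page names; r_bits: dict name -> 0/1.
        # Returns (index to replace, new hand position).
        while True:
            page = frames[hand]
            if r_bits[page] == 0:
                return hand, (hand + 1) % len(frames)   # replace here, advance hand
            r_bits[page] = 0                            # clear R-bit and advance
            hand = (hand + 1) % len(frames)

    frames = ["A", "B", "C", "D"]
    r_bits = {"A": 1, "B": 1, "C": 0, "D": 1}
    victim, hand = clock_evict(frames, r_bits, hand=0)
    print(frames[victim], hand)                         # C is replaced; the hand now points at D

Because no pages are moved, the clock avoids the list shuffling that makes plain Second Chance inefficient.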

Least Recently Used (LRU). Replace the page that has gone the longest time since its last reference. Based on the observation that pages heavily used in the last few instructions will probably be used again in the next few instructions. There are several ways to implement this algorithm.

Least Recently Used (LRU). LRU as a linked list: the most recently used page is at the front and the least recently used at the rear (figure: worked example with reference string A B C D A E F G H A C I, where a re-referenced page such as C moves to the front, and once the buffer is full the next fault replaces the page at the rear). This is expensive: it requires maintaining an ordered list of all pages in memory and updating this list on every memory reference!

Implementing LRU. Hardware implementation with an n x n bit matrix. Update algorithm: when page k is accessed, the hardware sets all bits of row k and then resets all bits of column k. To find the least recently used page, read each row as a binary value; the row with the smallest value identifies the LRU page.

Implementing LRU. LRU using the matrix, for pages referenced in the order 0, 1, 2, 3, 2, 1, 0, 3, 2, 3 (figure).

Implementing LRU. Problem: it requires special hardware and lots of bits to update. Software approximation: NFU (not frequently used) scans the R (referenced) bit in the page table at every clock interrupt, with aging by bit shift.

Least Recently Used (LRU). Simulating LRU by aging: keep a reference counter for each page. After a clock tick, shift the bits in each reference counter to the right (the rightmost bit is dropped) and add the page's reference bit at the front (leftmost position) of the counter. The page with the lowest counter value is replaced. (Figure: counter values for six pages over five clock ticks.)
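
A short sketch of this aging step using 8-bit counters (the counter width and page representation are assumptions):

    def age_tick(counters, referenced, bits=8):
        # Shift each counter right and add the page's R-bit as the new leftmost bit.
        for page in counters:
            r = 1 if page in referenced else 0
            counters[page] = (counters[page] >> 1) | (r << (bits - 1))

    counters = {p: 0 for p in "ABCD"}
    age_tick(counters, referenced={"A", "C"})     # tick 1: A and C referenced
    age_tick(counters, referenced={"C"})          # tick 2: only C referenced
    victim = min(counters, key=counters.get)      # lowest counter = replacement candidate
    print({p: f"{c:08b}" for p, c in counters.items()}, "evict:", victim)

Recently referenced pages accumulate high-order 1 bits and therefore large counter values, so the page with the smallest counter is a good approximation of the least recently used one.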

LRU-K & 2Q. LRU-K bases page replacement on the last K references to a page [O'Neil et al. 93]. 2Q uses 3 queues to keep much-referenced, popular pages in memory [Johnson et al. 94]: two FIFO queues for seldom-referenced pages and one LRU queue for much-referenced pages. A page retrieved from disk first enters a FIFO queue; if it is reused it moves to the LRU queue, where further reuse re-arranges the queue; pages that are not reused move on to the second FIFO queue and are paged out from there unless a reuse moves them back to the LRU queue.

Summary: Memory Management Algorithms. Paging and segmentation are extended in the address translation and virtual memory lectures. Placement algorithms for partitioning strategies are mostly obsolete for system memory management now that hardware address translation is available, but they are still necessary for managing kernel memory, memory within a process, and the memory of specialized systems (especially database systems). Address translation solves addressing in a loaded program; hardware address translation supports protection against data access by other processes and allows a new physical memory position after swapping in. Virtual memory provides a larger logical (virtual) memory than the physical memory and selects a process, page or segment for removal from physical memory.