Memory Management

Learning Outcomes
- Appreciate the need for memory management in operating systems; understand the limits of fixed memory allocation schemes.
- Understand fragmentation in dynamic memory allocation, and understand dynamic allocation approaches.
- Understand how program memory addresses relate to physical memory addresses, memory management in base-limit machines, and swapping.
- Gain an overview of virtual memory management, including paging and segmentation.

Process
- One or more threads of execution
- Resources required for execution:
  - Memory (RAM): program code ("text"), data (initialised, uninitialised, stack), buffers held in the kernel on behalf of the process
  - Others: CPU time, files, disk space, printers, etc.

OS Memory Management
- Keeps track of which memory is in use and which memory is free
- Allocates free memory to processes when needed, and deallocates it when they no longer need it
- Manages the transfer of memory between RAM and disk

Memory Hierarchy
- Ideally, programmers want memory that is fast, large, and nonvolatile; this is not possible
- Memory management coordinates how the memory hierarchy is used, focusing usually on RAM and disk

OS Memory Management
- Two broad classes of memory management systems:
  - Those that transfer processes to and from external storage during execution, called swapping or paging
  - Those that don't: simple; you might find this scheme in an embedded device, dumb phone, or smartcard

Basic Memory Management: Monoprogramming without Swapping or Paging
- Three simple ways of organizing memory for an operating system with one user process

Monoprogramming
- Okay if:
  - There is only one thing to do
  - The memory available approximately equates to the memory required
- Otherwise:
  - Poor CPU utilisation in the presence of I/O waiting
  - Poor memory utilisation with a varied job mix

Idea
- Recall that an OS aims to:
  - Maximise memory utilisation
  - Maximise CPU utilisation (ignoring battery/power-management issues)
- Subdivide memory and run more than one process at once: multiprogramming, multitasking

Modeling Multiprogramming
- CPU utilisation as a function of the degree of multiprogramming (the number of processes in memory)
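The utilisation curve on this slide follows the classic multiprogramming model: if each process spends a fraction p of its time waiting for I/O, then with n independent processes in memory the probability that all of them are waiting at once is p^n, so CPU utilisation is roughly 1 - p^n. A minimal C sketch of that calculation (the 80% I/O-wait figure is an assumed workload, not a number from the slides):

```c
#include <math.h>
#include <stdio.h>

/* CPU utilisation under the classic multiprogramming model:
 * each process waits for I/O a fraction p of the time, so with
 * n processes in memory, utilisation ~= 1 - p^n. */
static double cpu_utilisation(double p, int n)
{
    return 1.0 - pow(p, n);
}

int main(void)
{
    double p = 0.8;                 /* 80% I/O wait: assumed workload */
    for (int n = 1; n <= 10; n++)
        printf("n = %2d  utilisation = %.2f\n", n, cpu_utilisation(p, n));
    return 0;
}
```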

General problem: How to divide memory between processes?
- Given a workload, how do we:
  - Keep track of free memory?
  - Locate free memory for a new process?
- Overview of the evolution of simple memory management:
  - Static (fixed partitioning) approaches: simple, predictable workloads of early computing
  - Dynamic (partitioning) approaches: more flexible computing as compute power and complexity increased
  - Introduce virtual memory: segmentation and paging

Problem: How to divide memory
- One approach: divide memory into fixed, equal-sized partitions
- Any process <= the partition size can be loaded into any partition
- Partitions are either free or busy

Simple MM: Fixed, equal-sized partitions
- Any unused space in a partition is wasted; this is called internal fragmentation
- Processes smaller than main memory, but larger than a partition, cannot run

Simple MM: Fixed, variable-sized partitions
- Divide memory at boot time into a selection of different-sized partitions; sizes can be based on the expected workload
- Each partition has a queue: place a process in the queue for the smallest partition it fits in
- A process waits until its assigned partition is empty before it starts
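A minimal sketch of the queue-per-partition policy just described: a new process is directed to the smallest partition that can hold it. The partition sizes below are illustrative assumptions, not values from the slides:

```c
#include <stddef.h>

/* Boot-time partition sizes in KB, sorted ascending (illustrative values). */
static const size_t partition_kb[] = { 64, 128, 256, 1024 };
#define NPART (sizeof partition_kb / sizeof partition_kb[0])

/* Return the index of the smallest partition that fits the process,
 * or -1 if the process is larger than every partition. */
static int choose_partition(size_t process_kb)
{
    for (size_t i = 0; i < NPART; i++)
        if (process_kb <= partition_kb[i])
            return (int)i;
    return -1;   /* cannot run: larger than the largest partition */
}
```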

Issue
- Some partitions may be idle: small jobs are available, but only a large partition is free
- The workload could be unpredictable

Alternative queue strategy
- Single queue: search it for any job that fits the free partition
- Small jobs go into a large partition if necessary, which increases internal memory fragmentation

Fixed Partition Summary
- Simple and easy to implement
- Can result in poor memory utilisation due to internal fragmentation
- Used in the IBM System/360 operating system (OS/MFT), announced 6 April 1964
- Still applicable for simple embedded systems with a static workload known in advance

Dynamic Partitioning
- Partitions are of variable length, allocated on demand from ranges of free memory
- A process is allocated exactly what it needs (assumes a process knows what it needs)

Dynamic Partitioning
- In the previous diagram, we have 16 MB free in total, but it can't be used to run any further process requiring more than 6 MB, because it is fragmented
- This is called external fragmentation: we end up with unusable holes

Recap: Fragmentation
- External fragmentation: the space wasted external to the allocated memory regions. Memory exists to satisfy a request, but it is unusable because it is not contiguous.
- Internal fragmentation: the space wasted internal to the allocated memory regions. Allocated memory may be slightly larger than the requested memory; the size difference is wasted memory internal to a partition.

Dynamic Partition Allocation Algorithms
- Also applicable to malloc()-like in-application allocators
- Given a region of memory, the basic requirements are:
  - Quickly locate a free partition satisfying the request (minimise CPU time spent searching)
  - Minimise external fragmentation
  - Minimise the memory overhead of bookkeeping
  - Efficiently support merging two adjacent free partitions into a larger partition

Classic Approach
- Represent available memory as a linked list of available holes, each recording its base address and size
- Kept in order of increasing address; this simplifies merging of adjacent holes into larger holes
- The list nodes can be stored in the holes themselves (diagram: a chain of nodes, each holding an address, a size, and a link)
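A hedged C sketch of the data structure this slide describes: one list node per hole, recording the base address, the size, and a link to the next hole, kept sorted by address. The names are illustrative:

```c
#include <stddef.h>

/* One free hole in the list, kept sorted by base address.
 * In a real allocator the node is typically stored in the hole itself. */
struct hole {
    void        *base;   /* start address of the free region */
    size_t       size;   /* length of the free region in bytes */
    struct hole *next;   /* next hole, at a higher address */
};

static struct hole *free_list;   /* head of the sorted list of holes */
```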

Coalescing Free Partitions with Linked Lists
- Four neighbour combinations for the terminating process X (each of its neighbours may be free or allocated)
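Continuing the struct hole sketch above, coalescing can be expressed as a single pass that merges any hole whose region touches its successor; this handles the neighbour combinations pictured on the slide once the freed region has been inserted in address order. A minimal sketch under those assumptions:

```c
/* Merge every pair of adjacent holes whose regions touch.
 * Assumes the list is sorted by base address (struct hole above). */
static void coalesce(struct hole *list)
{
    for (struct hole *h = list; h && h->next; ) {
        if ((char *)h->base + h->size == (char *)h->next->base) {
            struct hole *n = h->next;   /* absorb the neighbouring hole */
            h->size += n->size;
            h->next  = n->next;
            /* n would be returned to the node pool here */
        } else {
            h = h->next;                /* not adjacent: move on */
        }
    }
}
```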

Dynamic Partitioning Placement Algorithm: First-fit
- Scan the list for the first entry that fits; if it is greater in size, break it into an allocated part and a free part
- Intent: minimise the amount of searching performed; aims to find a match quickly
- Generally results in small holes accumulating at the front end of memory that must be searched over when trying to find a free block; may leave lots of unusable holes at the beginning (external fragmentation)
- Tends to preserve larger blocks at the end of memory
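A minimal first-fit sketch over the same hole list: scan from the head, take the first hole that is large enough, and leave the remainder as a smaller hole. Next-fit (next slide) is the same loop, except the scan resumes from where the previous allocation succeeded. Function and field names are illustrative:

```c
/* First-fit: return the base of the first hole >= size, shrinking the
 * hole in place; NULL if no hole is large enough.  Uses struct hole
 * from the sketch above; a full allocator would also unlink empty holes. */
static void *first_fit(struct hole *list, size_t size)
{
    for (struct hole *h = list; h; h = h->next) {
        if (h->size >= size) {
            void *p = h->base;                   /* allocate the front */
            h->base  = (char *)h->base + size;   /* remainder stays free */
            h->size -= size;
            return p;
        }
    }
    return NULL;                                 /* no fit found */
}
```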

Dynamic Partitioning Placement Algorithm: Next-fit
- Like first-fit, except it begins its search from the point in the list where the last request succeeded, instead of at the beginning
- Spreads allocations more uniformly over the entire memory
- More often allocates blocks at the end of memory, where the largest block is found; the largest block is then broken up into smaller blocks
- May not be able to service large requests as well as first-fit

Dynamic Partitioning Placement Algorithm: Best-fit
- Chooses the block that is closest in size to the request
- Poor performer: has to search the complete list, so it does more work than first- or next-fit
- Since the smallest adequate block is chosen, the smallest possible fragment is left over, creating lots of tiny, unusable holes

Dynamic Partitioning Placement Algorithm: Worst-fit
- Chooses the block that is largest in size; the (whimsical) idea is to leave a usable fragment left over
- Poor performer: has to search the complete list (like best-fit)
- Does not result in significantly less fragmentation
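Best-fit and worst-fit differ from first-fit only in the selection rule, and both must scan the whole list. A hedged sketch of just the selection step, using the same illustrative hole list as above:

```c
/* Select a hole by policy: best-fit keeps the smallest adequate hole,
 * worst-fit keeps the largest.  Both scan the entire list. */
static struct hole *select_hole(struct hole *list, size_t size, int best)
{
    struct hole *chosen = NULL;
    for (struct hole *h = list; h; h = h->next) {
        if (h->size < size)
            continue;                          /* too small for the request */
        if (!chosen ||
            ( best && h->size < chosen->size) ||
            (!best && h->size > chosen->size))
            chosen = h;
    }
    return chosen;   /* caller splits the hole as in first_fit() */
}
```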

Dynamic Partition Allocation: Summary
- First-fit and next-fit are generally better than the others and are the easiest to implement
- You should be aware of them as simple solutions to a still-existing OS and application function: memory allocation
- Note: these have largely been superseded by more complex and specific allocation strategies; typical in-kernel allocators are lazy buddy and slab allocators

Compaction
- We can reduce external fragmentation by compaction: shuffle memory contents to place all free memory together in one large block
- Only possible if we can relocate running programs (what about pointers?)
- Generally requires hardware support

Some Remaining Issues with Dynamic Partitioning
- We have ignored:
  - Relocation: how does a process run in different locations in memory?
  - Protection: how do we prevent processes from interfering with each other?

Example Logical Address-Space Layout
- Logical addresses refer to specific locations within the program
- Once running, these addresses must refer to real physical memory
- When are logical addresses bound to physical addresses?
(Diagram: an address space spanning 0x0000 to 0xFFFF)

When are memory addresses bound?
- Compile/link time: the compiler/linker binds the addresses; must know the run location at compile time; recompile if the location changes
- Load time: the compiler generates relocatable code; the loader binds the addresses at load time
- Run time: logical compile-time addresses are translated to physical addresses by special hardware

Hardware Support for Runtime Binding and Protection
- For process B to run using logical addresses, it expects to access addresses from zero up to some limit given by its memory size

Hardware Support for Runtime Binding and Protection
- For process B to run using logical addresses:
  - We need to add an appropriate offset to its logical addresses, to achieve relocation and protect memory lower than B
  - We must limit the maximum logical address B can generate, to protect memory higher than B

Hardware Support for Relocation and Limit Registers

Base and Limit Registers
- Also called base and bound registers, or relocation and limit registers
- Restrict and relocate the currently active process
- Base and limit registers must be changed at:
  - Load time
  - Relocation (compaction) time
  - A context switch
- Example: with base = 0x8000 and limit = 0x2000, process A's logical addresses 0x0000-0x1FFF map to physical addresses 0x8000-0x9FFF
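The check-then-relocate step the hardware performs on every memory access can be sketched as follows, using the base and limit values from this slide; the function name and types are illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

/* Base/limit translation as performed on every access: the logical
 * address must be below the limit, then the base is added to it. */
static bool base_limit_translate(uint32_t logical, uint32_t base,
                                 uint32_t limit, uint32_t *physical)
{
    if (logical >= limit)
        return false;              /* protection fault: outside the process */
    *physical = base + logical;    /* relocation */
    return true;
}

/* Slide example: base = 0x8000, limit = 0x2000, so logical addresses
 * 0x0000-0x1FFF map to physical addresses 0x8000-0x9FFF. */
```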

Base and Limit Registers (second example)
- With base = 0x4000 and limit = 0x3000, logical addresses 0x0000-0x2FFF map to physical addresses 0x4000-0x6FFF; the registers are reloaded whenever the active process changes

Base and Limit Registers: Pros and Cons
- Pro: supports protected multi-processing (multi-tasking)
- Cons:
  - Physical memory allocation must still be contiguous
  - The entire process must be in memory
  - Does not support partial sharing of address spaces: no shared code, libraries, or data structures between processes

Timesharing
- Thus far, we have a system suitable for batch processing:
  - A limited number of dynamically allocated processes, enough to keep the CPU utilised
  - Relocated at runtime and protected from each other
- But what about timesharing?
  - We need more than just a small number of processes running at once
  - We need to support a mix of active and inactive processes, of varying longevity

Swapping
- A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution
- Backing store: a fast disk large enough to accommodate copies of all memory images for all users; must provide direct access to these memory images
- Can prioritise: a lower-priority process is swapped out so that a higher-priority process can be loaded and executed
- The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped, so swapping is slow
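The point that swap time is dominated by transfer time can be made concrete with a back-of-the-envelope calculation. The process size and disk bandwidth below are assumptions for illustration, not figures from the slides:

```c
#include <stdio.h>

int main(void)
{
    /* Illustrative numbers only: a 100 MB process image and a backing
     * store sustaining 50 MB/s.  Swapping one process out and another
     * in transfers 2 * 100 MB. */
    double image_mb      = 100.0;
    double rate_mb_per_s = 50.0;
    double swap_s        = 2.0 * image_mb / rate_mb_per_s;
    printf("swap out + swap in ~= %.1f s\n", swap_s);   /* ~4.0 s */
    return 0;
}
```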

Schematic View of Swapping

So far, we have assumed a process is smaller than memory. What can we do if a process is larger than main memory?

Overlays
- Keep in memory only those instructions and data that are needed at any given time
- Implemented by the user; no special support needed from the operating system
- Programming the design of the overlay structure is complex

Overlays for a Two-Pass Assembler

Virtual Memory
- Developed to address the issues identified with the simple schemes covered thus far
- Two classic variants: paging and segmentation
- Paging is now the dominant one of the two
- Some architectures support hybrids of the two schemes, e.g. Intel IA-32 (32-bit x86)

Virtual Memory: Paging
- Partition physical memory into small, equal-sized chunks called frames
- Divide each process's virtual (logical) address space into chunks of the same size, called pages
- Virtual memory addresses consist of a page number and an offset within the page
- The OS maintains a page table, which records the frame location for each page and is used to translate each virtual address to a physical address
- A process's physical memory does not have to be contiguous
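A minimal sketch of the translation the page table supports: split the virtual address into a page number and an offset, look up the frame for that page, and recombine. The 4 KB page size and the flat table layout are illustrative assumptions:

```c
#include <stdint.h>

#define PAGE_SIZE 4096u                 /* illustrative 4 KB pages */

/* One page-table entry: the frame holding the page, plus a valid bit. */
struct pte {
    uint32_t frame;
    int      valid;
};

/* Translate a virtual address using a flat, in-memory page table.
 * Returns 0 and sets *paddr on success, -1 if the page is not mapped.
 * Assumes the page number indexes within the table. */
static int page_translate(const struct pte *page_table, uint32_t vaddr,
                          uint32_t *paddr)
{
    uint32_t page   = vaddr / PAGE_SIZE;   /* page number */
    uint32_t offset = vaddr % PAGE_SIZE;   /* offset within the page */

    if (!page_table[page].valid)
        return -1;                         /* page fault / invalid access */
    *paddr = page_table[page].frame * PAGE_SIZE + offset;
    return 0;
}
```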

Paging
- No external fragmentation; small internal fragmentation (in the last page)
- Allows sharing by mapping several pages to the same frame
- Abstracts physical organisation: the programmer only deals with virtual addresses
- Minimal support for logical organisation: each unit is one or more pages

Memory Management Unit (also called Translation Look-aside Buffer, TLB)
- Figure: the position and function of the MMU

MMU Operation
- Assume for now that the page table is contained wholly in registers within the MMU; in practice it is not
- Figure: internal operation of a simplified MMU with 16 pages of 4 KB
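For the simplified MMU on this slide (16 pages of 4 KB), the virtual address is 16 bits wide: the top 4 bits select one of the 16 page-table registers and the low 12 bits are the offset. The same translation as above, written with bit operations under that assumption:

```c
#include <stdint.h>

/* Simplified MMU: 16 pages of 4 KB, so a 16-bit virtual address is a
 * 4-bit page number plus a 12-bit offset.  The whole page table is
 * assumed to sit in MMU registers. */
static uint32_t mmu_regs[16];            /* frame number per page */

static uint32_t mmu_translate(uint16_t vaddr)
{
    uint16_t page   = vaddr >> 12;       /* top 4 bits  */
    uint16_t offset = vaddr & 0x0FFF;    /* low 12 bits */
    return ((uint32_t)mmu_regs[page] << 12) | offset;
}
```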

Virtual Memory: Segmentation
- A memory-management scheme that supports the user's view of memory
- A program is a collection of segments; a segment is a logical unit such as: main program, procedure, function, method, object, local variables, global variables, common block, stack, symbol table, arrays

Logical View of Segmentation
- Figure: segments 1-4 in the user's logical space mapped to non-contiguous regions of physical memory

Segmentation Architecture
- A logical address consists of a two-tuple: <segment-number, offset>; addresses identify a segment and an address within that segment
- Segment table: each table entry has:
  - base: the starting physical address where the segment resides in memory
  - limit: the length of the segment
- Segment-table base register (STBR): points to the segment table's location in memory
- Segment-table length register (STLR): indicates the number of segments used by a program; segment number s is legal if s < STLR
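A hedged sketch of the lookup just described: the segment number is checked against the STLR, the offset is checked against the segment's limit, and only then is the segment's base added. Structure and parameter names are illustrative:

```c
#include <stdint.h>

/* One segment-table entry: physical base and length of the segment. */
struct segment {
    uint32_t base;
    uint32_t limit;
};

/* Translate <segment number, offset>.  Returns 0 and sets *paddr on
 * success, -1 on an illegal segment number or out-of-range offset. */
static int seg_translate(const struct segment *table, uint32_t stlr,
                         uint32_t s, uint32_t offset, uint32_t *paddr)
{
    if (s >= stlr)                 /* segment number must be < STLR */
        return -1;
    if (offset >= table[s].limit)  /* offset must be within the segment */
        return -1;
    *paddr = table[s].base + offset;
    return 0;
}
```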

Segmentation Hardware

Example of Segmentation

Segmentation Architecture
- Protection: with each entry in the segment table, associate:
  - A validation bit (validation bit = 0 means an illegal segment)
  - Read/write/execute privileges
- Protection bits are associated with segments; code sharing occurs at the segment level
- Since segments vary in length, memory allocation is a dynamic partition-allocation problem
- A segmentation example is shown in the following diagram

Sharing of Segments

Segmentation Architecture
- Relocation: dynamic, via the segment table
- Sharing: shared segments use the same physical backing for multiple segments; ideally, with the same segment number
- Allocation: first/next/best fit; suffers from external fragmentation

Comparison
- Comparison of paging and segmentation