Memory Management (Chapter 4, Tanenbaum)


Introduction
The CPU fetches a program's instructions and data from memory; therefore, both the program and its data must reside in main (RAM and ROM) memory. Modern multiprogramming systems can keep more than one program, together with the data they access, in main memory at once. A fundamental task of the memory management component of an operating system is to ensure the safe execution of programs by providing:
- sharing of memory
- memory protection

Issues in sharing memory
- Transparency: several processes may coexist in main memory, unaware of each other, and run regardless of the number and location of other processes.
- Safety (or protection): processes must not corrupt each other (nor the OS!).
- Efficiency: CPU utilization must be preserved and memory must be fairly allocated; we want low overheads for memory management.
- Relocation: the ability of a program to run in different memory locations.

Storage allocation
Information stored in main memory can be classified in a variety of ways:
- program (code) vs. data (variables, constants)
- read-only (code, constants) vs. read-write (variables)
- addresses (e.g., pointers) vs. data (other variables)
- binding time (when memory is allocated for the object): static or dynamic
The compiler, linker, loader, and run-time libraries all cooperate to manage this information.

Creating executable code
Before a program can be executed by the CPU, it must go through several steps:
- Compiling (translating): generates the object code.
- Linking: combines the object code into a single, self-sufficient executable.
- Loading: copies the executable code into memory; may include run-time linking with libraries.
- Execution: dynamic memory allocation.

From source to executable code
(Figure: source code is translated by the compiler into an object module; the linker combines object modules and object libraries into executable code, the load module; the loader copies it, with its code, data, and workspace, from secondary storage into main memory; shared libraries may be linked during execution.)

Address binding (relocation)
The process of associating program instructions and data (addresses) with physical memory addresses is called address binding, or relocation.
- Static: new locations are determined before execution.
  Compile time: the compiler or assembler translates symbolic addresses (e.g., variables) to absolute addresses.
  Load time: the compiler translates symbolic addresses to relative (relocatable) addresses; the loader translates these to absolute addresses.
- Dynamic: new locations are determined during execution.
  Run time: the program retains its relative addresses; the absolute addresses are generated by hardware.

Functions of a linker
A compile-time linker collects (if possible) and puts together all the required pieces to form the executable code. Issues:
- Relocation: where to put the pieces.
- Cross-reference: where to find the pieces.
- Re-organization: the new memory layout.
(Figure: the main() object module and the printf() module from libc are combined into a.out, with each piece assigned its final address.)

Functions of a loader
A loader places the executable code in main memory, starting at a predetermined location (base or start address). This can be done in several ways, depending on the hardware architecture:
- Absolute loading: always loads programs into the same designated memory location.
- Relocatable loading: allows loading programs into different memory locations.
- Dynamic (run-time) loading: loads functions when first called (if ever).
(Figure: the load module's code and data are copied into a memory image in main memory, with workspace added.)

Absolute and relocatable modules
Source code:
  main() { int x; ... incr(&x); }
  incr(int* a) { ... }
- Absolute load module: main at 0x1000, incr at 0x1200; the call is assembled as jsr 0x1200.
- Relocatable (relative) load module, aka position-independent code (PIC): main at 0x0000, incr at 0x0200; the call is jsr 0x0200.
- Relocatable dynamic load module: the call remains symbolic (jsr incr) until incr is loaded.

Simple management schemes
An important task of a memory management system is to bring (load) programs into main memory for execution. The following contiguous memory allocation techniques were commonly employed by earlier operating systems:
- Direct placement
- Overlays
- Partitioning
Note: similar techniques are still used by some modern, dedicated special-purpose operating systems and real-time systems.

Direct placement
Memory allocation is trivial. No special relocation is needed, because user programs are always loaded (one at a time) into the same memory location (absolute loading). The linker produces the same loading address for every user program. Examples: early batch monitors, MS-DOS.
(Figure: the operating system, with its drivers and buffers, occupies low memory; the single user program sits above it, and the remainder is unused.)

Overlays
Historically, before the sharing of main memory among several programs was automated, users developed techniques to let large programs execute (fit) in smaller memory. A program was organized (by the user) into a tree-like structure of object modules, called overlays. The root overlay was always loaded into memory, whereas the sub-trees were (re-)loaded as needed by simply overlaying existing code.
(Figure: an overlay tree rooted at module 0,0, and memory snapshots showing different branches resident above the operating system at different times.)

Partitioning
A simple method to accommodate several programs in memory at the same time (to support multiprogramming) is partitioning. In this scheme, the memory is divided into a number of contiguous regions, called partitions. Two forms of memory partitioning are possible, depending on when and how partitions are created (and modified):
- Static partitioning
- Dynamic partitioning
These techniques were used by the IBM OS/360 operating system: MFT (Multiprogramming with a Fixed number of Tasks) and MVT (Multiprogramming with a Variable number of Tasks).

Static partitioning
Main memory is divided into a fixed number of (fixed-size) partitions at system generation or startup; in the figure, 10 K, 100 K, and 500 K partitions with queues of small, average, and large jobs. Programs are queued to run in the smallest available partition. An executable prepared to run in one partition may not be able to run in another without being re-linked. This technique uses absolute loading.

Dynamic partitioning
Any number of programs can be loaded into memory as long as there is room for each. When a program is loaded (relocatable loading), it is allocated exactly as much memory as it needs. The addresses in the program are fixed once it is loaded, so it cannot move. The operating system keeps track of each partition (its size and location in memory).
(Figure: partition allocation at different times as programs A, B, ..., K are loaded and removed.)

Address translation
To provide basic protection among programs sharing memory, both of the above partitioning techniques use a hardware capability known as memory address mapping, or address translation. In its simplest form, this mapping uses a limits register and a base register: each program-generated address (e.g., Jump 1750) is first compared with the limits register (2000 in the figure); if it is within bounds, the base register (3000) is added to it to form the physical address (4750); otherwise a memory access error is raised.
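The check-then-add scheme just described can be sketched in a few lines (a minimal illustration; the register values match the slide's figure, and the function name is my own):

```python
def translate(logical_addr, base=3000, limit=2000):
    """Base/limit translation: compare the logical address against the
    limits register, then add the base register (values from the figure)."""
    if logical_addr >= limit:
        raise MemoryError("memory access error: address out of bounds")
    return base + logical_addr

print(translate(1750))  # 1750 < 2000, so physical address = 3000 + 1750 = 4750
```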

Fragmentation
Fragmentation refers to unused memory that the memory management system cannot allocate.
- Internal fragmentation: wasted memory within a partition, caused by the difference between the size of the partition and the size of the process loaded into it. Severe in static partitioning schemes.
- External fragmentation: wasted memory between partitions, caused by scattered non-contiguous free space. Severe in dynamic partitioning schemes.
Compaction (aka relocation) is a technique used to overcome external fragmentation.

Swapping
The basic idea of swapping is to treat main memory as a preemptable resource. A high-speed swapping device is used as the backing storage for preempted processes.
(Figure: processes are swapped out of main memory to the swapping device, usually a hard disk, and swapped back in later.)

Swapping (cont.)
Swapping is a medium-term scheduling method: memory becomes preemptable. The swapper moves processes between disk and memory (swap in / swap out), while the dispatcher moves processes in memory to and from the running state (dispatch / suspend). Swapping brings flexibility even to systems with fixed partitions, because, if needed, the operating system can always make room for high-priority jobs, no matter what! Note that, although the mechanics of swapping are fairly simple in principle, its implementation requires specific support from the OS that uses it (e.g., a special file system and dynamic relocation).

More on swapping
The responsibilities of a swapper include:
- Selecting processes to swap out. Criteria: suspended/blocked state, low priority, time spent in memory.
- Selecting processes to swap in. Criteria: time spent on the swapping device, priority.
- Allocating and managing swap space on a swapping device. Swap space can be system-wide or dedicated (e.g., a swap partition or disk).

Memory protection
The second fundamental task of a memory management system is to protect programs sharing the memory from each other. This protection also covers the operating system itself. Memory protection can be provided at either of two levels:
- Hardware: address translation (most common!)
- Software: language-dependent (strong typing) or language-independent (software fault isolation)

Dynamic relocation
With dynamic relocation, each program-generated address (logical address) is translated to a hardware address (physical address) at run time, for every reference, by a hardware device known as the memory management unit (MMU).
(Figure: the CPU issues a program (logical or virtual) address; the MMU translates it into a hardware (physical or real) address sent to memory.)

Two views of memory
Dynamic relocation leads to two different views of main memory, called address spaces.
(Figure: in the logical view, each process A, B, C has its own logical address space starting at 0 with size S; in the physical view, each process occupies a region of the physical address space between its start and end addresses.)

Base and bounds relocation
Each program is loaded into a contiguous region of memory. This region appears to be private, and the bounds register limits the range of each program's logical addresses: an address within bounds has the base register added to it to form the physical address; an address out of bounds causes a fault. In principle, this is the same technique used earlier by IBM 360 mainframes. The hardware implementation is cheap and efficient: two registers plus an adder and a comparator.

Segmentation
A segment is a region of contiguous memory. The most important problem with base and bounds relocation is that there is only one segment per process. Segmentation generalizes the base and bounds technique by allowing each process to be split across several segments (i.e., multiple base-bounds pairs). A segment table holds the base and bounds of each segment. Although the segments may be scattered in memory, each segment is mapped to a contiguous region. Additional fields in the segment table (Read/Write and Shared) add protection and sharing capabilities to segments (i.e., permissions).

Relocation with segmentation
The logical address is split into a segment # and an offset. The segment # indexes the segment table; the offset is added to the segment's base, and the result is checked against its bounds (a violation raises a fault) before being used as the physical address. The segment table in the example:

Seg #  Base    Bounds  R/W  S
0      0x1000  0x1120  0    1
1      0x3000  0x3340  0    0
2      0x0000  0x0FFF  1    0
3      0x2000  0x2F00  0    1
4      0x4000  0x4520  1    0

See the next slide for the memory allocation example.

Segmentation: an example
(Figure: the segments of the logical address space map to scattered regions of physical memory: Seg 0 to 0x1000-0x1120, Seg 1 to 0x3000-0x3340, Seg 2 to 0x0000-0x0FFF, Seg 3 to 0x2000-0x2F00, and Seg 4 to 0x4000-0x4520.)
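The table-driven translation can be sketched as follows (a minimal illustration using the example's segment table; here bounds is treated as the highest valid physical address, as the figure suggests, and the function name is my own):

```python
# Segment table from the example: segment # -> (base, bounds).
SEG_TABLE = {
    0: (0x1000, 0x1120),
    1: (0x3000, 0x3340),
    2: (0x0000, 0x0FFF),
    3: (0x2000, 0x2F00),
    4: (0x4000, 0x4520),
}

def seg_translate(seg, offset):
    """Add the offset to the segment's base; fault if the result
    exceeds the segment's bounds."""
    base, bounds = SEG_TABLE[seg]
    physical = base + offset
    if physical > bounds:
        raise MemoryError("segmentation fault")
    return physical

print(hex(seg_translate(1, 0x40)))  # 0x3040
```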

More on segments
When a process is created, an empty segment table is inserted into the process control block. Table entries are filled in as new segments are allocated for the process, and the segments are returned to the free segment pool when the process terminates. Segmentation, like the base and bounds approach, causes external fragmentation (because segments require contiguous physical memory) and requires memory compaction. An advantage of the approach is that only a segment, instead of a whole process, need be swapped out to make room for a (new) process.

Paging
Physical memory is divided into a number of fixed-size blocks, called frames. Logical memory is also divided into chunks of the same size, called pages. The frame/page size is determined by the hardware and can be any value between 512 bytes (VAX) and 16 megabytes (MIPS R10000)! Why have such large frames? Why not? A page table maps each page of a process to the frame that holds it in main memory. The major goals of paging are to make memory allocation and swapping easier and to reduce fragmentation. Paging also allows allocation of non-contiguous memory (i.e., pages need not be adjacent).

Relocation with paging
(Figure: the logical address is split into a page # and a page offset; the page # indexes the page table, whose entry (PTE) supplies a frame #; the frame # combined with the unchanged offset forms the physical address.)

Paging: an example
(Figure: the page table maps page 0 to frame 8, page 1 to frame 3, page 2 to frame 1, page 3 to frame 6, and page 4 to frame 0. The logical address (page 2, offset 300) therefore translates to the physical address (frame 1, offset 300).)
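The example's translation can be sketched like this (the page size is my own assumption, since the slide does not state one):

```python
PAGE_SIZE = 1024                  # assumed; any power of two would do
PAGE_TABLE = [8, 3, 1, 6, 0]      # page # -> frame #, from the example

def page_translate(page, offset):
    """Index the page table with the page #, then combine the
    frame # with the unchanged offset."""
    assert 0 <= offset < PAGE_SIZE
    frame = PAGE_TABLE[page]
    return frame * PAGE_SIZE + offset

# Logical address (page 2, offset 300) -> frame 1 -> physical (1, 300):
print(page_translate(2, 300))     # 1*1024 + 300 = 1324
```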

Hardware support for paging
There are different hardware implementations of page tables to support paging:
- A set of dedicated registers holding the base addresses of frames.
- An in-memory page table with a page table base register (PTBR).
- As above, but with multi-level page tables.
With the latter two approaches, there is a constant overhead on every memory access (What? Why?).

Multi-level paging: an example
(Figure: the page number is split into two fields, P1 and P2. P1 indexes the top-level page table, which points to one of the second-level page tables; P2 indexes that table, which points to a frame (57 in the example); the page offset (840) is appended to form the physical address.)
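The two-step lookup can be sketched as follows (the table contents are invented, except that P1 = 1 and P2 = 3 lead to frame 57 as in the figure):

```python
# top_level[p1] gives a second-level table; second[p2] gives a frame #.
TOP_LEVEL = {1: {3: 57}}

def translate2(p1, p2, offset, page_size=4096):
    """Two-level page table walk: one memory access per level."""
    second = TOP_LEVEL[p1]        # first access: top-level page table
    frame = second[p2]            # second access: second-level page table
    return frame * page_size + offset

print(translate2(1, 3, 840))      # 57*4096 + 840 = 234312
```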

One-level paging: the PDP-11
The larger PDP-11 models have 16-bit logical addresses (i.e., 64 KB) and up to 4 MB of memory, with a page size of 8 KB. Note: the physical address space is larger than the logical address space! There are two separate logical address spaces, one for instructions and one for data. The two page tables have eight entries each, one per 8 KB page of the process.
(Figure: the 16-bit logical address is split into a 3-bit virtual page number and a 13-bit offset; the instruction or data page table supplies the frame number.)

Two-level paging: the VAX
The VAX is the successor of the PDP-11, with 32-bit logical addresses and 512-byte pages. A virtual address consists of a 2-bit space field (00 user program and data, 01 user stack, 10 system, 11 reserved), a 21-bit virtual page number, and a 9-bit offset.
(Figure: the 4 GB virtual address space is divided into regions for user program and data, the user stack, and the system, the system region being shared among processes.)

Other MMU architectures
- Some SPARC processors used in Sun workstations have a paging MMU with three-level page tables and 4 KB pages.
- The Motorola 68030 processor uses an on-chip MMU with programmable multi-level (1 to 5) page tables and 256-byte to 32 KB pages.
- PowerPC processors support complex address translation mechanisms and, depending on the implementation, provide 2^80-byte (64-bit) and 2^52-byte (32-bit) logical address spaces.
- Intel Pentium processors support both segmented and paged memory with 4 KB pages.

Inverted page tables
The inverted page table has one entry for each memory frame. Hashing is used to speed up the table search: the page # of the logical address is hashed into a hash table, which anchors a chain of inverted-page-table entries (page #, frame #, chain pointer); the matching entry supplies the frame #, which is combined with the offset to form the physical address. (The figure shows the simplified inverted paging used on the IBM RS/6000.) The inverted page table can be either per-process or system-wide; in the latter case, a PID field is added to each entry. Advantage: the table size is independent of the size of the address space, so the table(s) stay small even for large logical address spaces.
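A hashed inverted table can be sketched as follows (the frame contents and the hash function are invented; only the structure follows the slide):

```python
# One entry per physical frame: frame # -> (pid, page #) stored there.
FRAMES = [(1, 0x10), (1, 0x11), (2, 0x10), (1, 0x42)]
NUM_BUCKETS = 4

# Build the hash table: bucket -> chain of frame numbers.
hash_table = {}
for frame, (pid, page) in enumerate(FRAMES):
    hash_table.setdefault((pid ^ page) % NUM_BUCKETS, []).append(frame)

def lookup(pid, page):
    """Hash (pid, page #) and walk the chain; the index of the
    matching entry is the frame #."""
    for frame in hash_table.get((pid ^ page) % NUM_BUCKETS, []):
        if FRAMES[frame] == (pid, page):
            return frame
    raise KeyError("page fault")

print(lookup(1, 0x42))  # 3
```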

Segmentation with paging
(Figure: the logical address is split into a segment #, a page #, and an offset. The segment table, located via the segment base register, points to a per-segment page table; the page # indexes that table to obtain a frame #, which is combined with the offset to form the physical address in main memory.)

Address translation on a Pentium
A simplified summary of combined address translation on a Pentium, with paging enabled: the logical address (a 16-bit selector plus a 32-bit offset) is translated via a descriptor table into a 32-bit linear address. For 4 KB pages, the linear address is split into directory, table, and offset fields: the directory field indexes the page directory (whose physical address is stored in the CR3 register), the selected entry points to a page table, and the table field selects the 4 KB frame. A page-directory entry may instead map a 4 MB page, in which case the linear address is just a directory field and an offset into the 4 MB frame.

Associative memory
Problem: both paging and segmentation schemes introduce extra memory references to access translation tables. Solution? Translation buffers (like caches). Based on the notion of locality (at a given time, a process uses only a few pages or segments), a very fast but small associative (content-addressable) memory is used to store a few of the translation table entries. This memory is known as a translation look-aside buffer, or TLB.
(Figure: a MIPS R2000/R3000 TLB entry is 64 bits: a 20-bit virtual page number, a 6-bit PID, 6 unused bits, a 20-bit frame number, 4 flag bits, and 8 unused bits.)

Address translation with TLB
(Figure: the page # is first looked up in the TLB. On a TLB hit, the frame # comes straight from the TLB. On a TLB miss, the page table supplies the PTE and the TLB is updated; the TLB is finite in size! The frame # combined with the offset forms the physical address.)
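The hit/miss flow can be sketched with a tiny TLB (the sizes, table contents, and FIFO eviction policy are all illustrative choices of mine):

```python
from collections import OrderedDict

PAGE_TABLE = {0: 8, 1: 3, 2: 1, 3: 6, 4: 0}  # page # -> frame #
TLB_SIZE = 2
tlb = OrderedDict()                          # tiny translation cache

def lookup_frame(page):
    """First try the TLB; on a miss, walk the page table and
    update the TLB (evicting the oldest entry if it is full)."""
    if page in tlb:
        return tlb[page], "hit"
    frame = PAGE_TABLE[page]
    if len(tlb) >= TLB_SIZE:       # the TLB is finite in size!
        tlb.popitem(last=False)    # FIFO eviction
    tlb[page] = frame              # TLB update
    return frame, "miss"

print(lookup_frame(2))  # (1, 'miss')
print(lookup_frame(2))  # (1, 'hit')
```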

Memory caching
Similar to storing address translations in TLBs, frequently used data in main memory can also be stored in fast buffers, called cache memory, or simply cache. Words are transferred between the CPU and the cache; blocks are transferred between the cache and main memory. Basically, memory access proceeds as follows:

for each memory reference:
    if the data is not in the cache:      (miss)
        if the cache is full: remove some data
        if read access:
            issue a memory read
            place the data in the cache
            return the data
    else:                                 (hit)
        if read access: return the data
        else: store the data in the cache

The idea is to make frequent memory accesses faster!

Cache terminology
- Cache hit: the item is in the cache.
- Cache miss: the item is not in the cache; a full memory operation is needed.
- Categories of cache miss: compulsory (the first reference always misses) and capacity (non-compulsory misses due to limited cache size).
- Effective access time: P(hit) × cost of a hit + P(miss) × cost of a miss, where P(hit) = 1 − P(miss).
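The effective-access-time formula is easy to evaluate (the timing numbers below are purely illustrative):

```python
def effective_access_time(p_hit, cost_hit, cost_miss):
    """EAT = P(hit)*cost_hit + P(miss)*cost_miss, with P(miss) = 1 - P(hit)."""
    return p_hit * cost_hit + (1 - p_hit) * cost_miss

# e.g., 95% hits at 1 ns and misses at 100 ns (made-up costs):
print(effective_access_time(0.95, 1, 100))  # about 5.95 ns
```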

Issues in cache design
Although there are many different cache designs, all share a few common design elements:
- Cache size: how big is the cache? The cache contains only a copy of portions of main memory. The larger the cache, the slower it is. Common sizes vary between 4 KB and 4 MB.
- Mapping function: how are main memory blocks mapped to cache lines? Common schemes are direct, fully associative, and set-associative.
- Replacement algorithm: which line is evicted when the cache lines are full and a new block of memory is needed? A replacement algorithm, such as LRU, FIFO, LFU, or Random, is needed only for associative mapping (Why?).

Issues in cache design (cont.)
- Write policy: what if the CPU modifies a (cached) location? This design issue deals with store operations to cached memory locations. Two basic approaches: write-through (modify the original memory location as well as the cached data) and write-back (update the memory location only when the cache line is evicted).
- Block (or line) size: how many words does each line hold? Studies show that a cache line of 4 to 8 addressable units (bytes or words) gives a close-to-optimal number of hits.
- Number of caches: how many levels? A unified or split cache for data and instructions? Studies show that a second-level cache improves performance. Pentium and PowerPC processors each have on-chip split level-1 (L1) caches; Pentium Pro processors have an on-chip level-2 (L2) cache as well.

Mapping function
Since there are more main memory blocks (Block i, for i = 0 to n) than cache lines (Line j, for j = 0 to m, with n >> m), an algorithm is needed for mapping main memory blocks to cache lines.
- Direct mapping: each memory block maps to exactly one possible cache line; Block i maps to Line j, where j = i modulo m.
- Associative mapping: any memory block can map to any line of the cache.
- Set-associative mapping: cache lines are grouped into v sets of k lines each, and a memory block can be mapped to any line of one cache set; Block i maps to Set j, where j = i modulo v.
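The two modulo rules can be sketched directly (the function names are my own):

```python
def direct_map(block, num_lines):
    """Direct mapping: block i can live in exactly one line, i mod m."""
    return block % num_lines

def set_map(block, num_sets):
    """Set-associative mapping: block i can live in any line of
    set i mod v."""
    return block % num_sets

# With m = 8 lines, blocks 3, 11, and 19 all compete for line 3:
print([direct_map(b, 8) for b in (3, 11, 19)])  # [3, 3, 3]
```

In a 2-way set-associative cache with v = 4 sets, the same three blocks all map to set 3 but may occupy either of its two lines, so two of them can be cached at once.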

Set-associative cache organization
(Figure: the memory address is split into tag, set, and word fields. The set field selects one cache set; the tag is compared in parallel against the tags of all lines in that set. A match is a cache hit; otherwise it is a cache miss, and the block is fetched from main memory.)

Dynamic memory allocation
Static memory allocation schemes are not sufficient at run time because of the unpredictable nature of executing programs. For certain programs, such as recursive functions or programs that use complex data structures, the memory needs cannot be known in advance. Two basic operations are needed: allocate and free; in UNIX, for example, these are malloc() and free(). Dynamic allocation can be handled using either stack (hierarchical, restrictive) or heap (more general, but less efficient) allocation.

Stack organization
Memory allocation and freeing operations are partially predictable. Since the organization is hierarchical, freeing operates in reverse (opposite) order of allocation.
(Figure: stack snapshots: the current stack; after a call to A, A's stack frame is on top; after A calls B, B's frame sits on top of A's; after returning from B, only A's frame remains; after returning from A, the stack is as before.)

Heap organization
Allocation and release of heap space is totally random. Heaps are used for allocating arbitrary list structures and complex data organizations. As programs execute (allocating and freeing structures), heap space fills with holes (unallocated space, i.e., fragmentation). Statistically, for first fit, even with some optimization, given N allocated blocks another 0.5N blocks will be lost to fragmentation; this result is known as the fifty-percent rule (p. 254 in the Silberschatz textbook).
(Figure: a snapshot of a heap, with a free list linking the holes.)

Free memory management
- Bit maps: memory is divided into fixed-size blocks (e.g., 256-byte blocks), and an array of bits (a bit map) keeps one bit per block.
- Linked lists: a free list keeps track of the unused memory. Several algorithms can be used to choose among the unused memory blocks: first fit, best fit, and worst fit.
- Buddy system: this algorithm takes advantage of binary arithmetic. A process requesting memory is given a block whose size is the smallest power of two into which the process fits.
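First-fit allocation over a free list can be sketched as follows (the hole addresses and sizes are invented; a real allocator would also coalesce adjacent free blocks on free()):

```python
# Free list of (start address, size) holes, in address order.
free_list = [(0, 100), (200, 50), (300, 500)]

def alloc_first_fit(size):
    """Scan the free list and carve the request out of the first
    hole that is large enough."""
    for i, (start, hole) in enumerate(free_list):
        if hole >= size:
            if hole == size:
                free_list.pop(i)                            # exact fit
            else:
                free_list[i] = (start + size, hole - size)  # split the hole
            return start
    raise MemoryError("no hole large enough")

print(alloc_first_fit(60))  # 0: the 100-byte hole is split
print(alloc_first_fit(40))  # 60: exact fit in the remainder
```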

Reclaiming memory
How do we know when memory can be freed? It is trivial when a memory block is used by one process, but the task becomes difficult when a block is shared (e.g., accessed through pointers) by several processes. Two problems arise in reclaiming memory:
- Dangling pointers: occur when the original allocator frees a shared block while pointers to it remain.
- Memory leaks: occur when we forget to free storage, even though it will not (or cannot) be used again. This is a common and serious problem, and there are lots of tools for finding leaks; it is unacceptable in an OS.

Reference counts
Memory reclamation can be done by keeping a reference count per block (i.e., the number of outstanding pointers to that block of memory). When the counter drops to zero, the memory block is freed. This scheme works fine with hierarchical structures but becomes tricky with circular structures. Examples: Smalltalk and Java use a similar scheme; UNIX's in-memory file descriptors are reference-counted; and, after a system crash, the fsck program runs (during rebooting) to check the integrity of on-disk file systems.
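A minimal reference-counting sketch (all names are my own; note that two blocks pointing at each other would keep each other's count above zero forever, which is exactly the circular-structure problem mentioned above):

```python
class Block:
    """A memory block with a reference count."""
    freed = []                          # records blocks that were reclaimed

    def __init__(self, name):
        self.name = name
        self.refcount = 0

def add_ref(block):
    block.refcount += 1                 # a new pointer to the block

def drop_ref(block):
    block.refcount -= 1                 # a pointer went away
    if block.refcount == 0:
        Block.freed.append(block.name)  # reclaim the storage

b = Block("shared")
add_ref(b); add_ref(b)   # two outstanding pointers
drop_ref(b)              # still referenced: not freed yet
drop_ref(b)              # count hits zero: freed now
print(Block.freed)       # ['shared']
```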

Garbage collection
As an alternative to an explicit free operation, some systems reclaim storage implicitly: the program simply deletes pointers, and the system periodically searches memory for blocks no longer referenced by any pointer and reclaims their storage. Some languages, e.g., Lisp and Java, provide this kind of memory management. Garbage collection is often expensive: it can consume more than 20% of total CPU time, and garbage collectors are difficult to code and debug. But garbage collection eliminates a large class of application programmer errors (a good thing)!