

Heap Management

The heap is the portion of the store used for data that lives indefinitely, or until the program explicitly deletes it. While local variables typically become inaccessible when their procedures end, many languages let us create objects or other data whose existence is not tied to the procedure activation that creates them. For example, both C++ and Java give the programmer the new operator to create objects that may be passed from procedure to procedure, so they continue to exist long after the procedure that created them has returned. Such objects are stored on a heap.

Allocating memory

There are two ways that memory gets allocated for data storage:

1. Compile-Time (Static) Allocation
   - Memory for named variables is allocated by the compiler.
   - The exact size and type of storage must be known at compile time.
   - For standard array declarations, this is why the size has to be a constant.

2. Dynamic Memory Allocation
   - Memory is allocated "on the fly" during run time.
   - Dynamically allocated space is usually placed in a program segment known as the heap or the free store.
   - The exact amount of space or number of items does not have to be known by the compiler in advance.
   - For dynamic memory allocation, pointers are crucial.

The Memory Manager

The memory manager keeps track of all the free space in heap storage at all times. It performs two basic functions:

Allocation. When a program requests memory for a variable or object, the memory manager produces a chunk of contiguous heap memory of the requested size. If possible, it satisfies the request using free space in the heap; if no chunk of the needed size is available, it seeks to increase the heap storage space by getting consecutive bytes of virtual memory from the operating system. If space is exhausted, the memory manager passes that information back to the application program.

Deallocation.
The memory manager returns deallocated space to the pool of free space, so it can reuse the space to satisfy other allocation requests. Memory managers typically do not return memory to the operating system, even if the program's heap usage drops.

Dynamic Memory Allocation

We can dynamically allocate storage space while the program is running, but we cannot create new variable names "on the fly". For this reason, dynamic allocation requires two steps:
1. Creating the dynamic space.
2. Storing its address in a pointer (so that the space can be accessed).
To dynamically allocate memory in C++, we use the new operator.

Deallocation

Deallocation is the "clean-up" of space being used for variables or other data storage. Compile-time variables are automatically deallocated based on their known extent (this is the same as scope for "automatic" variables). It is the programmer's job to deallocate dynamically created space. To deallocate dynamic memory, we use the delete operator.

Implementing dynamic allocation

Where does memory for a program's heap come from? The program makes calls to the OS (system calls) to add memory to the heap of a running process, moving the top-of-heap pointer. On Linux, this is void *sbrk(intptr_t incr), which adds at least incr bytes to the end of the program's data segment. The key idea is to get big chunks of memory from the OS and then hand out small regions to the program on demand, because OS calls can be a hundred times slower than local subroutine calls.

How is the memory in the heap managed? Data structures need to be kept to:
- keep track of what memory is in use, and
- keep track of where the free memory regions are.
Calls to malloc(), free(), etc., update these data structures.

A simple heap structure

1. Divide the heap into blocks.
2. Each heap block consists of: a header (32 bits), the allocated block (a multiple of 8 bytes, double-word aligned), and optional padding for alignment or other reasons.
3. The 4-byte header holds:
   - 29 bits for the size of the block (why 29? since block sizes are multiples of 8, the low 3 bits of the size are always zero);
   - 1 bit to indicate free/allocated;
   - 2 bits available for other purposes; we won't use them.
   The size is the size of the heap block, not of the allocated block. The end of the heap is marked by a header with size = 0.

[Figure: a heap laid out from low to high addresses. Each cell is 4 bytes (one word on a 32-bit system). Headers carry a size and a used flag (1 = in use, 0 = free); shaded areas show allocated blocks r, s, p, q and wasted space due to alignment.]

How can we find the address of the next block? What kinds of things could be stored at the areas pointed to by the pointers?

The layout above could be generated by a sequence such as:
    r = malloc(...);   /* malloc finds a usable block on the free list */
    s = malloc(1);
    p = malloc(4);
    q = malloc(...);
    free(r);

Managing and Coalescing Free Space

When an object is deallocated manually, the memory manager must make its chunk free, so it can be allocated again. In some circumstances, it may also be possible to combine (coalesce) that chunk with adjacent chunks of the heap, to form a larger chunk. There is an advantage to doing so, since we can always use a large chunk to do the work of small chunks of equal total size, but many small chunks cannot hold one large object, as the combined chunk could.

If we keep a bin for chunks of one fixed size, as the Lea allocator does for small sizes, then we may prefer not to coalesce adjacent blocks of that size into a chunk of double the size. It is simpler to keep all the chunks of one size in as many pages as we need, and never coalesce them. Then, a simple allocation/deallocation scheme is to keep a bitmap, with one bit for each chunk in the bin. A 1 indicates the chunk is occupied; a 0 indicates it is free. When a chunk is deallocated, we change its 1 to a 0. When we need to allocate a chunk, we find any chunk with a 0 bit, change that bit to a 1, and use the corresponding chunk. If there are no free chunks, we get a new page, divide it into chunks of the appropriate size, and extend the bit vector.

Reducing Fragmentation

At the beginning of program execution, the heap is one contiguous unit of free space. As the program allocates and deallocates memory, this space is broken up into free and used chunks, and the free chunks need not reside in a contiguous area of the heap. We refer to the free chunks of memory as holes. With each allocation request, the memory manager must place the requested chunk of memory into a large-enough hole. Unless a hole of exactly the right size is found, we need to split some hole, creating a yet smaller hole. With each deallocation request, the freed chunks of memory are added back to the pool of free space. We coalesce contiguous holes into larger holes, as the holes can only get smaller otherwise.
If we are not careful, the memory may end up getting fragmented, consisting of large numbers of small, noncontiguous holes. It is then possible that no hole is large enough to satisfy a future request, even though there may be sufficient aggregate free space.

Problem: free creates holes (fragmentation); the result is lots of free space that nevertheless cannot satisfy a request.

Best-Fit and Next-Fit Object Placement

We reduce fragmentation by controlling how the memory manager places new objects in the heap. It has been found empirically that a good strategy for minimizing fragmentation for real-life programs is to allocate the requested memory in the smallest available hole that is large enough. This best-fit algorithm tends to spare the large holes to satisfy subsequent, larger requests. An alternative, called first-fit, where an object is placed in the first (lowest-address) hole in which it fits, takes less time to place objects, but has been found inferior to best-fit in overall performance.

To implement best-fit placement more efficiently, we can separate free space into bins according to their sizes. One practical idea is to have many more bins for the smaller sizes, because there are usually many more small objects. For sizes that do not have a private bin, we find the one bin that is allowed to include chunks of the desired size. Within that bin, we can use either a first-fit or a best-fit strategy; i.e., we either look for and select the first chunk that is sufficiently large, or we spend more time and find the smallest chunk that is sufficiently large. Note that when the fit is not exact, the remainder of the chunk will generally need to be placed in a bin with smaller sizes.

While best-fit placement tends to improve space utilization, it may not be the best in terms of spatial locality. Chunks allocated at about the same time by a program tend to have similar reference patterns and similar lifetimes. Placing them close together thus improves the program's spatial locality. One useful adaptation of the best-fit algorithm is to modify the placement in the case when a chunk of the exact requested size cannot be found.
In this case, we use a next-fit strategy, trying to allocate the object in the chunk that has most recently been split, whenever enough space for the new object remains in that chunk. Next-fit also tends to improve the speed of the allocation operation. We can summarize the placement algorithms for selecting among free blocks of main memory as follows:

Best-Fit: chooses the block closest in size to the request.
First-Fit: scans main memory from the beginning and chooses the first available block that is large enough.
Next-Fit: scans memory from the location of the last placement and chooses the next available block that is large enough.

Compaction is time consuming, so the OS must be clever in plugging holes while assigning processes to memory.

[Figure: allocation of a 1 MB block using the three placement algorithms.]

Example

Suppose the heap consists of seven chunks. The sizes of the chunks, in order, are ..., ..., ..., ..., ..., ..., ... bytes. If we request space for objects of the following sizes: ..., ..., ..., 1, in that order, what does the free space list look like after satisfying the requests, if the method of selecting chunks is (a) first fit, (b) best fit?

[Worked solution: tables tracing the free-space list after each allocation, for first fit and for best fit.]