Memory Management

Memory Management

Memory management is one of the most demanding aspects of an operating system. The cost of memory has dropped and, consequently, the size of main memory has expanded enormously. Can we say that we now have enough? No: processes are still swapped in and out, and memory I/O is still slow compared with the speed of processors.

What happens when a program starts? It uses memory in two ways: procedure calls and dynamic data.
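As a minimal illustration (not from the slides) of those two kinds of memory use, the C sketch below exercises the stack through recursive procedure calls and the heap through dynamically allocated data; the names and sizes are arbitrary.

#include <stdio.h>
#include <stdlib.h>

/* Each call to sum_to() pushes an activation record (argument, return
   address, saved registers) onto the stack; it is popped on return. */
static int sum_to(int n)
{
    return (n <= 1) ? n : n + sum_to(n - 1);
}

int main(void)
{
    /* Dynamic data: memory obtained from the heap at run time. */
    int *table = malloc(100 * sizeof *table);
    if (table == NULL)
        return 1;
    for (int i = 0; i < 100; i++)
        table[i] = sum_to(i + 1);        /* procedure calls use the stack */
    printf("sum of 1..100 = %d\n", table[99]);
    free(table);
    return 0;
}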

Memory Management

[Figure: a memory map alternating used and free regions.]

Uni-programming system: the kernel plus one large area for a single user program. Multi-programming system: the kernel plus a large user area that is sliced and sub-divided dynamically so that more than one program can be active (not suspended) in main memory at once; managing that area is, at its core, memory management.

Memory Management Requirements
1. Relocation
2. Protection
3. Sharing
4. Logical Organization
5. Physical Organization

The chief responsibility of a memory management system (MMS) is to bring processes into main memory, and move them out again, for execution by the processor. When processes are not in main memory they must be kept somewhere else that backs main memory, i.e. secondary storage. Two basic schemes are used: paging and segmentation.

Memory Management Schemes
1. Fixed partitioning
2. Dynamic partitioning
3. Simple paging
4. Simple segmentation
5. Virtual memory paging
6. Virtual memory segmentation

Fixed Partitioning. Main memory is divided into a number of static partitions at system generation time; a process may be loaded into a partition of equal or greater size. Strengths: simple to implement, little operating-system overhead. Weaknesses: inefficient use of memory due to internal fragmentation, fixed number of active processes.

Dynamic Partitioning. Partitions are created dynamically, so that each process is loaded into a partition of exactly the same size as that process. Strengths: efficient use of memory, no internal fragmentation. Weaknesses: inefficient use of the processor, which must perform compaction to counter external fragmentation.

Simple Paging. Main memory is divided into a number of equal-size frames; each process is divided into pages of the same size as the frames; a process is loaded by putting all of its pages into available, not necessarily contiguous, frames. Strengths: no external fragmentation. Weaknesses: a small amount of internal fragmentation.

Simple Segmentation. Each process is divided into a number of segments; a process is loaded by putting all of its segments into dynamically created partitions, not necessarily contiguous. Strengths: no internal fragmentation, better memory utilization than dynamic partitioning. Weaknesses: external fragmentation.

Virtual Memory Paging. As simple paging, except that it is not necessary to load all the pages of a process; non-resident pages are loaded when required during execution. Strengths: no external fragmentation, large virtual address space, higher degree of multiprogramming. Weaknesses: overhead due to the complexity of the memory management.

Virtual Memory Segmentation. As simple segmentation, except that it is not necessary to load all the segments of a process; non-resident segments are loaded when required during execution. Strengths: no internal fragmentation, large virtual address space, higher degree of multiprogramming, support for protection and sharing. Weaknesses: overhead due to the complexity of the memory management.

Issues with Fixed Partitioning

Equal and unequal partitioning: a set of equal-size partitions produces internal fragmentation, and with the equal-size scheme any partition can be used by any process. An unequal-size scheme may make better use of memory. Both schemes allow only a fixed number of processes in memory at a time. A program may be bigger than a partition, in which case the programmer must design an overlay scheme to bring in the relevant parts of the process as needed. Smaller jobs still occupy whole partitions, which makes the system inefficient. In general, fixed partitioning is not a useful scheme in terms of memory usage. And when space is required, which process should be swapped out?

[Figure: queue schemes for assigning processes to unequal fixed partitions.]
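A small sketch, with made-up partition sizes, of the placement rule commonly paired with the per-partition queue scheme in the figure: each arriving process is assigned to the smallest partition that can hold it.

#include <stdio.h>

/* Hypothetical unequal fixed partitions, in KB, sorted by size. */
static const int partition_kb[] = { 2, 4, 6, 8, 8, 12, 16 };
#define NPART (sizeof partition_kb / sizeof partition_kb[0])

/* Return the index of the smallest partition that fits, or -1 if the
   process is larger than every partition (it would need overlays). */
static int pick_partition(int process_kb)
{
    for (size_t i = 0; i < NPART; i++)
        if (partition_kb[i] >= process_kb)
            return (int)i;
    return -1;
}

int main(void)
{
    int jobs[] = { 3, 7, 13, 20 };
    for (int i = 0; i < 4; i++)
        printf("job of %2d KB -> partition %d\n", jobs[i], pick_partition(jobs[i]));
    return 0;
}

The internal fragmentation of each placement is the partition size minus the process size, and at most NPART processes can be resident at once, which is exactly the limitation described above.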

Dynamic Partitioning

To overcome the inefficient use of memory, the dynamic partitioning scheme was developed: each process is given exactly as much memory as it requires.

[Figure: a sequence of memory snapshots as processes are loaded, swapped out, and swapped back in under dynamic partitioning.]

The problem with dynamic partitioning: it looks fine until processes start terminating and being swapped in and out. Memory then becomes littered with small holes (external fragmentation), and a new process may not fit into any single hole even though the total free space would be sufficient.

Placement Options for Dynamic Partitioning

Compaction is not a very good idea; it wastes CPU power. Operating system designers therefore came up with several placement schemes to use memory efficiently; the question is which is better.

First Fit (FF): simple and fast; tends to load the front of memory and may need frequent compaction.
Best Fit (BF): the slowest; creates many small holes and needs frequent compaction.
Next Fit (NF): poor compared to FF; tends to load the back end of memory and fragments it frequently.

It is really hard to put your finger on one of these techniques as best; it depends on the sequence of events.

[Figure: example of BF, NF and FF choosing among the current free blocks for the next request.]

With dynamic partitioning in a multiprogramming system, at some point in time all the processes in memory may be blocked and, even with compaction, there may not be enough space for a new process. The OS then has to swap one of the processes out; the question is which one.
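A compact sketch, using assumed hole sizes, of how the three policies choose among the current free blocks: first fit takes the first hole that is large enough, best fit the smallest hole that is large enough, and next fit the first large-enough hole found after the point of the previous allocation.

#include <stdio.h>

#define NHOLES 4
static int hole_kb[NHOLES] = { 8, 12, 22, 18 };  /* assumed free blocks */
static int roving = 0;                           /* next-fit starting point */

static int first_fit(int req)
{
    for (int i = 0; i < NHOLES; i++)
        if (hole_kb[i] >= req) return i;
    return -1;
}

static int best_fit(int req)
{
    int best = -1;
    for (int i = 0; i < NHOLES; i++)
        if (hole_kb[i] >= req && (best < 0 || hole_kb[i] < hole_kb[best]))
            best = i;
    return best;
}

static int next_fit(int req)
{
    for (int k = 0; k < NHOLES; k++) {           /* scan at most once around */
        int i = (roving + k) % NHOLES;
        if (hole_kb[i] >= req) { roving = i; return i; }
    }
    return -1;
}

int main(void)
{
    int req = 16;   /* a 16 KB request */
    printf("FF -> hole %d, BF -> hole %d, NF -> hole %d\n",
           first_fit(req), best_fit(req), next_fit(req));
    return 0;
}

In a real allocator the chosen hole would then be split, leaving the remainder as a smaller hole, which is how the small fragments described above accumulate.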

Buddy System

Fixed partitioning schemes suffer from the limitation of a fixed number of active (non-suspended) processes, and their use of space may not be optimal. Dynamic partitioning, as seen in our discussion, is more complex to maintain, and compaction is a major issue with it. So what is the solution? A buddy system.

In a buddy system the whole space is treated as one block at the beginning, say of size 2^U. Now suppose a request of size s arrives: if 2^(U-1) < s <= 2^U, the whole block is allocated; otherwise the block is divided into two equal buddies and the condition is tested again recursively, until it is satisfied, at which point that block is allocated and the loop exits. The system also keeps a record of all unallocated blocks (holes) of each size and can merge free buddies back into one bigger block (see code on page ).
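A minimal sketch of the splitting rule just described, assuming a 1 MB initial block and a 100 KB request; it only computes which block size would be handed out, and leaves out the free lists and coalescing that a full buddy allocator needs.

#include <stdio.h>

/* If halving the block would make it too small for the request, allocate
   the whole block (2^(U-1) < s <= 2^U); otherwise split into two buddies
   and continue with one half. */
static unsigned buddy_block(unsigned block_kb, unsigned request_kb)
{
    if (block_kb / 2 < request_kb)
        return block_kb;
    printf("split %u KB into two %u KB buddies\n", block_kb, block_kb / 2);
    return buddy_block(block_kb / 2, request_kb);
}

int main(void)
{
    unsigned got = buddy_block(1024, 100);   /* assumed 1 MB block, 100 KB request */
    printf("100 KB request -> %u KB block allocated\n", got);
    return 0;
}

The 100 KB request ends up in a 128 KB block, so the unused 28 KB is internal fragmentation; when the block is released it is merged with its 128 KB buddy, provided that buddy is also free.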

Buddy System Example

[Figure: a buddy-system example - an initial block split repeatedly as requests A to E arrive and are released.]

Tree Representation of the Buddy System

The buddy system is a reasonable compromise compared with the fixed and dynamic partitioning techniques and has found application in parallel systems. The leaf nodes of the tree representation show the current partitioning of memory; if two buddies are both leaf nodes, at least one of them must be allocated, otherwise they would have been coalesced into a bigger block.

Use of the Buddy System

The buddy system is known for its speed and simplicity and has little overhead. However, high internal and external fragmentation have made it unattractive for use in operating system file layout. The Dartmouth Time-Sharing System (DTSS) used this method, and Linux also uses a buddy algorithm.

Relocation Issues

Initially, loaded processes have absolute references, whether they occupy a contiguous block or not. Compaction has to re-reference all of those addresses, and a process that is swapped out may not be swapped back into the same partition it was initially assigned.

Address Definitions

Logical address: independent of the current assignment of data to memory.
Relative address: as the name says, expressed with respect to some known position.
Physical address: the actual memory address allocated to a program or data item.

[Figure: process image in memory - PCB, program, data, and stack derived from the object code, forming the process image in main memory.]

[Figure: typical loading scenario - library modules and program modules A ... Z are combined by the linker into a load module, which the loader places in main memory.]

Loader Options for Address Translation

1. The programmer can hard-code the physical addresses in the program.
2. Physical addresses can be bound at compilation time.
3. The compiler can generate relative addresses, which are translated to absolute addresses at load time.
4. The loader can retain the relative addresses, which are converted dynamically at run time by the processor hardware.

Linker Options

Program time: the programmer puts every sub-program and data reference required directly in the program itself.
Compile time: the assembler fetches all the referenced sub-programs and data and assembles them together at compile time.
Load time: references to external modules are not resolved until load time; at load time all the required modules are appended to the load module and loaded into main memory.
Run time: external references are not resolved until run time; when an external call is executed the processor is interrupted and the required module is linked in.

Load module creation: each individual module is assembled using relative addresses; these modules are then re-referenced relative to the origin of the final load module (this is not the case for the program-time option).
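The third loader option can be shown with a tiny sketch (entirely made-up data): the load module carries a relocation table listing which words hold relative addresses, and the loader adds the actual load origin to each of them before execution begins.

#include <stdio.h>

int main(void)
{
    /* Hypothetical load module: words 1 and 3 hold addresses that are
       relative to the start of the module; the others are plain data. */
    unsigned image[6]  = { 0x10, 2, 0x20, 5, 0x30, 0 };
    int      reloc[2]  = { 1, 3 };      /* relocation table */
    unsigned load_base = 0x4000;        /* where the loader placed the module */

    for (int i = 0; i < 2; i++)
        image[reloc[i]] += load_base;   /* relative -> absolute at load time */

    printf("word 1 = 0x%x, word 3 = 0x%x\n", image[1], image[3]);
    return 0;
}

With the fourth option the image keeps its relative addresses and the addition is instead done by the processor on every reference, as described below under dynamic run-time loading.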

Absolute Loading

The program is always placed at the same memory location whenever it is loaded for execution, and it contains fixed, specific addresses. Either the programmer or the compiler can do the address translation. Programmer's choice: each individual programmer has to know where the program will be loaded, and any change means re-referencing every address. Compiler's choice: this has the advantage that it does not suffer from the above limitations.

[Figure: absolute loading - an object module with symbolic addresses (jump X, load Y) translated into an absolute load module with fixed numeric addresses.]

Relocatable Loading

Loading to one specific location limits functionality, which is typically needed when multiple processes share memory. The compiler produces addresses for a program relative to the start of that program, so the job of the loader becomes simple.

[Figure: a relative load module - jump and load instructions whose addresses are relative to the module origin.]

Dynamic Run-Time Loading

Relocatable loading has an edge over absolute loading, but in multiprogramming schemes process images are frequently swapped in and out of memory. Relocatable loading alone cannot support such swapping, because the addresses are bound absolutely at the initial time of loading. Dynamic run-time loading is very similar to the relocatable reference model, but the address translation is performed by the hardware at each reference, so it does support swapping of processes.

[Figure: dynamic run-time loading - the relative address is added to the base register (adder) and checked against the bounds register (comparator); an out-of-range address raises an interrupt to the OS, otherwise the absolute address accesses the program, data, or stack region of the process image.]

[Figure: linking scenario - object modules A, B, and C, of lengths L, M, and N, each with external CALL references, are combined into a single load module in which the calls become jumps to relative addresses within the module.]
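A software sketch of the base/bounds path in the figure above, with assumed register values: the relative address is compared with the bounds register and, if valid, added to the base register; a real CPU does this on every reference and traps to the OS on a violation.

#include <stdio.h>

static int translate(unsigned long relative, unsigned long base,
                     unsigned long bound, unsigned long *absolute)
{
    if (relative >= bound)            /* comparator: outside the process image */
        return -1;                    /* would raise an interrupt to the o/s   */
    *absolute = base + relative;      /* adder: base register + relative address */
    return 0;
}

int main(void)
{
    unsigned long base = 0x20000, bound = 0x4000, abs_addr;  /* assumed values */
    if (translate(0x100, base, bound, &abs_addr) == 0)
        printf("relative 0x100 -> absolute 0x%lx\n", abs_addr);
    if (translate(0x5000, base, bound, &abs_addr) != 0)
        printf("relative 0x5000 -> out of bounds, trap to the o/s\n");
    return 0;
}

Because only the base register has to change, the process image can be swapped back into a different region of memory without rewriting any addresses.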

Paging

A process is broken into small, equal-size pieces called pages. Memory is divided into small pieces called frames; the size of a frame is equal to that of a page. A process is loaded by placing each of its pages into an available frame, and the frames used need not be contiguous.

[Figure: pages of processes A, B, and C assigned to main-memory frames; after swapping, the frames holding one process end up non-contiguous.]

Data Structuring for Paging

The operating system maintains a page table for each process, recording which frame holds each of that process's pages, together with a list of free frames.

[Figure: page tables for processes A, B, and C, the free-frame list, and the resulting contents of the memory frames.]

Paging Example

Consider this example: the page size is 1K (1024 bytes) and 16-bit addresses are used, meaning 10 bits are needed for the offset field, leaving 6 bits for the page number. This means a program can consist of a maximum of 64 pages of 1K bytes each. Relative address 1502 corresponds to page 1, offset 478 (1502 = 1024 + 478).

[Figure: a user process viewed as a single relative address space (partitioning) versus as pages (paging), showing the page-number/offset split and the internal fragmentation in the last page.]
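The example above can be checked with a short sketch; the page-to-frame mapping used here is hypothetical, but the 10-bit offset and 6-bit page number follow directly from the 1K page size and 16-bit addresses.

#include <stdio.h>

#define OFFSET_BITS 10                          /* 1K pages */
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

/* Hypothetical page table for the example process: page 1 -> frame 6. */
static unsigned page_table[64] = { [0] = 3, [1] = 6, [2] = 11 };

static unsigned translate(unsigned logical)
{
    unsigned page   = logical >> OFFSET_BITS;   /* upper 6 bits  */
    unsigned offset = logical &  OFFSET_MASK;   /* lower 10 bits */
    return (page_table[page] << OFFSET_BITS) | offset;
}

int main(void)
{
    unsigned rel = 1502;                        /* = page 1, offset 478 */
    printf("relative %u -> page %u, offset %u -> physical %u\n",
           rel, rel >> OFFSET_BITS, rel & OFFSET_MASK, translate(rel));
    return 0;
}

Because page and frame sizes are powers of two, the translation is just a table lookup plus bit operations, which is why the hardware can do it on every reference.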

[Figure: page address translation - a 16-bit logical address split into a 6-bit page number and a 10-bit offset; the page number indexes the page table to obtain a frame number, which is combined with the offset to form the 16-bit physical address.]

Segmentation

1. Similar to paging, another option is to divide the program into segments of varying sizes, none exceeding a maximum segment size.
2. It is similar to dynamic partitioning, the difference being that it is not necessary to load the segments contiguously.
3. It eliminates internal fragmentation but does have external fragmentation; however, the external fragmentation is very small.
4. In contrast to paging, segmentation is visible to the user, as it provides a way of structuring data and program. Typically, the programmer or the compiler assigns separate segments to data and program.
5. Because the segments are of unequal size, the address-translation mechanism is not as simple as for paging.
6. A simple segmentation scheme uses a segment table for each process and a list of free memory blocks; each table entry holds the starting address of the segment in main memory and the length of the segment.
7. When a process is in the running state, the address of its segment table is loaded into a special register used by the memory-management hardware.

Segmentation Example

A logical address consists of a segment number and an offset.

[Figure: a logical address space made up of segments of different lengths; the logical address is interpreted as (segment number, offset), here segment 1 with offset 752.]

Segmentation Address Translation

Consider this example: the logical address is 16 bits, with the 4 leftmost bits for the segment number and the remaining 12 bits for the offset, which means a segment can be up to 4096 bytes (12 bits). The offset is 752. Imagine that this segment resides in main memory starting at the base address recorded in the segment table; the physical address is then that base address plus 752.

[Figure: segmentation address translation - a 16-bit logical address split into a 4-bit segment number and a 12-bit offset; the segment number indexes the process segment table, whose entry holds the length and base of the segment; the base is added to the offset to give the 16-bit physical address.]

Address Translation Steps

Paging steps:
1. Extract the leftmost n bits of the logical address as the page number.
2. Use the page number as an index into the process page table to find the frame number k.
3. The starting physical address of the frame is k x 2^m, where m is the number of bits in the offset field (the address length minus n), and the physical address of the referenced byte is that value plus the offset.

Segmentation steps:
1. Extract the leftmost n bits of the logical address as the segment number.
2. Use the segment number as an index into the process segment table to find the starting physical address of the segment.
3. Compare the offset, found in the rightmost m bits, with the length of the segment; if the offset is greater than or equal to the length, the address is invalid.
4. The desired physical address is the sum of the starting physical address of the segment and the offset.

Interesting comment about Windows paging: http://w-uh.com/posts/6b-windows_paging.html
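The segmentation steps can be turned into a short sketch using the 4-bit segment number and 12-bit offset of the earlier example; the segment-table contents (lengths and bases) are made-up values for the illustration.

#include <stdio.h>

#define OFFSET_BITS 12
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

struct seg { unsigned length, base; };      /* one segment-table entry */

/* Hypothetical segment table used only for this illustration. */
static struct seg seg_table[16] = {
    { 750,  10000 },
    { 1950, 8224  },
};

static long translate(unsigned logical)
{
    unsigned s      = logical >> OFFSET_BITS;   /* step 1: leftmost 4 bits      */
    unsigned offset = logical &  OFFSET_MASK;   /* rightmost 12 bits            */
    if (offset >= seg_table[s].length)          /* step 3: length check         */
        return -1;                              /* invalid address              */
    return (long)seg_table[s].base + offset;    /* steps 2 and 4: base + offset */
}

int main(void)
{
    unsigned logical = (1u << OFFSET_BITS) | 752;   /* segment 1, offset 752 */
    printf("segment 1, offset 752 -> physical %ld\n", translate(logical));
    return 0;
}

Unlike paging, the length check is essential here, because segments have different sizes and an offset could otherwise run past the end of its segment.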
