UNIT 6 OBJECTIVES

General Objective: To understand the basic memory management of an operating system.

Specific Objectives: At the end of the unit you should be able to:
define memory management
list the objectives of memory management in an operating system
explain the virtual memory concept and its implementation
explain the relocation policy in memory management

INPUT

6.0 Introduction

Effective memory management is vital in a multiprogramming system. If only a few processes are in memory, then for much of the time all of the processes will be waiting for input/output and the processor will be idle. Thus, memory needs to be allocated efficiently to pack as many processes into memory as possible.

6.1 Objectives

While surveying the various mechanisms and policies associated with memory management, it is well to keep in mind the requirements that memory management is intended to satisfy. There are five such requirements:
Relocation
Protection
Sharing
Logical organization
Physical organization

6.1.1 Relocation

In a multiprogramming system, the available main memory is generally shared among a number of processes. Typically, it is not possible for the programmer to know in advance which programs will reside in memory during the execution of his or her program. In addition, we would like to be able to swap active processes in and out of main memory to maximize processor usage by providing a large pool of ready processes to execute. Once a program has been swapped out to disk, it would be quite limiting to require that when it is next swapped back in it must be placed in the same main memory region as before. Thus, we cannot know ahead of time where a program will be placed, and we must allow it to be moved about in main memory as a result of swapping. This fact raises some technical concerns related to addressing, as illustrated in figure 6.1, which depicts a process image. For simplicity, let us assume that the process image occupies a contiguous region of main memory. Clearly, the operating system will need to know the location of the process control information and the execution stack, as well as the entry point at which to begin execution of the program for this process. Since the operating system manages memory and is responsible for bringing this process into main memory, these addresses are easy to come by. In addition, however, the processor must deal with memory references within the program. Branch instructions must contain the address of the instruction to be executed next. Data-reference instructions must contain the address of the byte or word of data referenced. Somehow, the processor hardware and operating system software must be able

to translate the memory references found in the code of the program into actual physical memory addresses that reflect the current location of the program in main memory.

Figure 6.1: Addressing requirements for a process, showing the process control block, entry point, program (with branch and data-reference instructions), data and stack. (Source: Stallings, William (1995) Operating Systems)

6.1.2 Protection

Each process should be protected against unwanted interference by other processes, whether accidental or intentional. Thus, programs in other processes should not be able to reference memory locations in a process, for reading or writing, without permission. In one sense, satisfying the relocation requirement increases the difficulty of satisfying the protection requirement. Because the location of a program in main memory is unknown, it is impossible to check absolute

addresses at compile time to assure protection. Furthermore, most programming languages allow the dynamic calculation of addresses at run time, for example by computing an array subscript or a pointer into a data structure. Hence, all memory references generated by a process must be checked at run time to ensure that they refer only to the memory space allocated to that process. Fortunately, as we shall see, the mechanisms that support relocation also form the basis for satisfying the protection requirement.

The process image layout in figure 6.1 illustrates the protection requirement. Normally, a user process cannot access any portion of the operating system, either program or data. Again, a program in one process cannot branch to an instruction in another process, and without special arrangement a program in one process cannot access the data area of another process. The processor must be able to abort such instructions at the point of execution. Note that the memory protection requirement must be satisfied by the processor (hardware) rather than the operating system (software). This is because the operating system cannot anticipate all the memory references that a program will make; even if such anticipation were possible, it would be prohibitively time consuming to screen each program in advance for possible memory-reference violations. Thus, the permissibility of a memory reference (data access or branch) can be assessed only at the time of execution of the instruction making the reference. To accomplish this, the processor hardware must have that capability.
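The run-time check just described can be pictured with a pair of hardware registers: a base register holding the physical start of the process's partition and a limit (bounds) register holding its size. The following minimal sketch in C simulates that idea; the structure, the register values and the trap handling are invented purely for illustration and do not correspond to any particular processor or operating system interface.

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Hypothetical per-process relocation/protection registers. */
struct mmu_regs {
    uint32_t base;   /* physical address where the process is loaded   */
    uint32_t limit;  /* size of the partition allocated to the process */
};

/* Simulate the check the hardware performs on every memory reference:
   a logical address is valid only if it falls inside the partition;
   otherwise the reference is aborted (a protection trap).             */
static uint32_t translate(const struct mmu_regs *r, uint32_t logical)
{
    if (logical >= r->limit) {               /* outside the process's space */
        fprintf(stderr, "protection trap: address %u out of bounds\n",
                (unsigned)logical);
        exit(EXIT_FAILURE);                  /* abort the offending reference */
    }
    return r->base + logical;                /* relocation: add the base */
}

int main(void)
{
    struct mmu_regs p = { .base = 0x40000, .limit = 0x10000 }; /* 64 KB partition */
    printf("logical 0x0100 -> physical 0x%X\n", (unsigned)translate(&p, 0x0100));
    printf("logical 0xFFFF -> physical 0x%X\n", (unsigned)translate(&p, 0xFFFF));
    translate(&p, 0x10000);                  /* one byte past the limit: trapped */
    return 0;
}
```

Both the addition of the base and the comparison against the limit must happen on every memory reference, which is why, as noted above, the check belongs in the processor hardware rather than in operating system software.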

6.1.3 Sharing

Any protection mechanism that is implemented must have the flexibility to allow several processes to access the same portion of main memory. For example, if a number of processes are executing the same program, it is advantageous to allow each process to access the same copy of the program rather than have its own separate copy. Processes that are cooperating on some task may need to share access to the same data structure. The memory management system must therefore allow controlled access to shared areas of memory without compromising essential protection. Again, we shall see that the mechanisms used to support relocation also form the basis for sharing capabilities.

6.1.4 Logical Organization

Almost invariably, main memory in a computer system is organized as a linear, or one-dimensional, address space consisting of a sequence of bytes or words. Secondary memory, at its physical level, is similarly organized. Although this organization closely mirrors the actual machine hardware, it does not correspond to the way in which programs are typically constructed. Most programs are organized into modules, some of which are unmodifiable (read only, execute only) and some of which contain data that may be modified. If the operating system and computer hardware can effectively deal with user programs and data in the form of modules of some sort, then a number of advantages can be identified, as follows:

1. Modules can be written and compiled independently, with all references from one module to another resolved by the system at run time.

2. With modest additional overhead, different degrees of protection (read only, execute only) can be given to different modules.

3. It is possible to introduce mechanisms by which modules can be shared among processes. The advantage of providing sharing on a module level is that this corresponds to the user's way of viewing the problem, and hence it is easy for the user to specify the sharing that is desired.

The tool that most readily satisfies these requirements is segmentation, which is one of the memory management techniques explored in this unit.

6.1.5 Physical Organization

Computer memory is organized into at least two levels: main memory and secondary memory. Main memory provides fast access at relatively high cost. In addition, main memory is volatile; that is, it does not provide permanent storage. Secondary memory is slower and cheaper than main memory, and it is usually not volatile. Thus, secondary memory of large capacity can be provided to allow long-term storage of programs and data, while a smaller main memory holds programs and data currently in use. In this two-level scheme, the organization of the flow of information between main and secondary memory is a major system concern. The responsibility for this flow could be assigned to the individual programmer, but this is impractical and undesirable for two reasons:

1. The main memory available for a program plus its data may be insufficient. In that case, the programmer must engage in a practice known as overlaying, in which the program and data are organized in such a way that various modules can be assigned the same region of memory, with a main program responsible for switching the modules in and out as needed. Even with the aid of compiler tools, overlay programming wastes programmer time.

2. In a multiprogramming environment, the programmer does not know at the time of coding how much space will be available or where that space will be.

It is clear, then, that the task of moving information between the two levels of memory should be a system responsibility. This task is the essence of memory management.

ACTIVITY 6A

TEST YOUR UNDERSTANDING BEFORE YOU CONTINUE TO THE NEXT INPUT...!

6.1 Give five objectives of memory management.

6.2 Give three advantages of logical organization.

6.3 Into how many levels is computer memory organized?

FEEDBACK TO ACTIVITY 6A

6.1 Relocation
Protection

Sharing
Logical organization
Physical organization

6.2
1. Modules can be written and compiled independently, with all references from one module to another resolved by the system at run time.
2. With modest additional overhead, different degrees of protection (read only, execute only) can be given to different modules.
3. It is possible to introduce mechanisms by which modules can be shared among processes. The advantage of providing sharing on a module level is that this corresponds to the user's way of viewing the problem, and hence it is easy for the user to specify the sharing that is desired.

6.3 Computer memory is organized into two levels:
i. main memory
ii. secondary memory

INPUT

6.2 Virtual memory concept

Many years ago people were first confronted with programs that were too big to fit in the available memory. The solution usually adopted was to split the program into pieces, called overlays. Overlay 0 would start running first; when it was done, it would call another overlay. Some overlay systems were highly complex, allowing multiple overlays in memory at once. The overlays were kept on disk and swapped in and out of memory by the operating system. Although the actual work of swapping overlays in and out was done by the system, the work of splitting the program into pieces had to be done by the programmer. Splitting up large programs into small, modular pieces was time consuming and boring. It did not take long before someone thought of a way to turn the whole job over to the computer. The method that was devised (Fotheringham, 1961) has come to be known as virtual memory.

The basic idea behind virtual memory is that the combined size of the program, data and stack may exceed the amount of physical memory available for it. The operating system keeps those parts of the program currently in use in main memory, and the rest on disk. For example, a 1M program can run on a 256K machine by carefully choosing which 256K to keep in memory at each instant, with pieces of the program being swapped between disk and memory as needed.

Virtual memory also works in a multiprogramming system. For example, eight 1M programs can each be allocated a 256K partition in a 2M memory, with each program operating as though it had its own private 256K machine. In fact, virtual memory and multiprogramming fit together very well. While a program is waiting for part of itself to be swapped in, it is waiting for I/O and cannot run, so the CPU can be given to another process.
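The idea of keeping only the pieces currently in use in main memory can be simulated in a few lines of C. This is only an illustrative sketch, not how a real operating system implements virtual memory: the "disk" is stood in for by a large array, "main memory" by a single small buffer, and the chunk size and function names are invented for the example.

```c
#include <stdio.h>
#include <string.h>

#define DISK_SIZE   (1 << 20)   /* pretend 1M "program image" kept on disk  */
#define CHUNK_SIZE  (1 << 18)   /* pretend only 256K of real memory exists  */

static char disk[DISK_SIZE];        /* stand-in for the image on disk      */
static char memory[CHUNK_SIZE];     /* stand-in for physical main memory   */
static long resident_chunk = -1;    /* which chunk is currently loaded     */

/* Read one byte of the "program": if the chunk holding it is not resident,
   swap it in from "disk" first (evicting whatever was there before).      */
static char read_byte(long address)
{
    long chunk = address / CHUNK_SIZE;
    if (chunk != resident_chunk) {               /* the piece is not in memory */
        memcpy(memory, disk + chunk * CHUNK_SIZE, CHUNK_SIZE);
        resident_chunk = chunk;
        printf("swapped in chunk %ld\n", chunk);
    }
    return memory[address % CHUNK_SIZE];
}

int main(void)
{
    disk[0] = 'A';                  /* place some data in the image */
    disk[DISK_SIZE - 1] = 'Z';
    printf("byte 0       = %c\n", read_byte(0));
    printf("byte 1048575 = %c\n", read_byte(DISK_SIZE - 1));
    printf("byte 0 again = %c\n", read_byte(0));   /* chunk 0 swapped back in */
    return 0;
}
```

The caller simply asks for byte addresses anywhere in the full 1M image and never sees the swapping; a real system does the equivalent transparently, in hardware, on page-sized units, which leads to the paging technique described in the next input.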

6.3 Virtual memory implementation

Virtual memory can be implemented using the paging and segmentation techniques described below.

6.3.1 Paging technique

The main problem with contiguous allocation is external fragmentation. This is overcome in the present scheme: here a process is allocated physical memory wherever it is available, and this scheme is called paging. In the basic method, physical memory is broken into fixed-size blocks called frames. Logical memory is also broken into blocks of the same size, called pages. Every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d). The page number p is used as an index into a page table. The page table contains the base address of each page lying in physical memory. The base address read from the page table is combined with the page offset (d) to generate the physical memory address. The page size generally varies from 512 bytes to 8192 bytes, depending upon the hardware design. If the size of the logical address space is 2^m and the page size is 2^n addressing units (bytes or words), then the high-order (m-n) bits of the logical address designate the page number and the n low-order bits designate the page offset. Thus a logical address consists of a page number p of (m-n) bits and a page offset d of n bits.
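As a concrete illustration of the page-number/offset split, here is a minimal sketch in C. It assumes, purely for the example, 4096-byte pages (n = 12) and a tiny hand-filled page table of four entries; the table contents are invented, and no real hardware looks pages up through a C function like this.

```c
#include <stdio.h>
#include <stdint.h>

#define PAGE_BITS  12                       /* n = 12, so page size = 4096 bytes */
#define PAGE_SIZE  (1u << PAGE_BITS)
#define NUM_PAGES  4                        /* a tiny logical address space      */

/* page_table[p] holds the frame number where logical page p resides. */
static const uint32_t page_table[NUM_PAGES] = { 5, 9, 2, 7 };

static uint32_t translate(uint32_t logical)
{
    uint32_t p = logical >> PAGE_BITS;          /* high-order (m-n) bits: page number */
    uint32_t d = logical & (PAGE_SIZE - 1);     /* low-order n bits: page offset      */
    uint32_t frame = page_table[p];             /* look up the frame for this page    */
    return (frame << PAGE_BITS) | d;            /* frame base combined with offset    */
}

int main(void)
{
    /* Logical address 0x1ABC lies in page 1 at offset 0xABC;
       page 1 is in frame 9, so the physical address is 0x9ABC. */
    printf("logical 0x1ABC -> physical 0x%X\n", (unsigned)translate(0x1ABC));
    printf("logical 0x0004 -> physical 0x%X\n", (unsigned)translate(0x0004));
    return 0;
}
```

With these numbers m = 14, so a logical address carries a 2-bit page number and a 12-bit offset, matching the (m-n)/n split described above.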

The advantage of the paging scheme is that there is no external fragmentation; however, there is some internal fragmentation. This is because the last page allocated may not fall exactly on the boundary of the process's memory requirement. In the worst case, nearly a whole frame is wasted per process, so n processes may waste nearly n frames of memory. An important aspect of the paging scheme is the lack of a user view of memory: the program is scattered throughout physical memory, and logical addresses are translated to physical addresses. Another scheme, segmentation, is discussed next.

6.3.2 Segmentation techniques

The program and its associated data are divided into a number of segments. It is not required that all segments of all programs be of the same length, although there is a maximum segment length. As with paging, a logical address using segmentation consists of two parts, in this case a segment number and an offset. Because of the use of unequal-size segments, segmentation is similar to dynamic partitioning. In the absence of an overlay scheme or the use of virtual memory, it would be required that all of a program's segments be loaded into memory for execution. The difference, compared with dynamic partitioning, is that with segmentation a program may occupy more than one partition, and these partitions need not be contiguous. Segmentation eliminates internal fragmentation but, like dynamic partitioning, it suffers from external fragmentation. However, because a

process is broken up into a number of smaller pieces, the external fragmentation should be less. Whereas paging is invisible to the programmer, segmentation is usually visible and is provided as a convenience for organizing programs and data. Typically, the programmer or the compiler assigns programs and data to different segments. For purposes of modular programming, the program or data may be further broken down into multiple segments. The principal inconvenience of this service is that the programmer must be aware of the maximum size limitation on segments. Another consequence of unequal-size segments is that there is no simple relationship between logical addresses and physical addresses. Analogous to paging, a simple segmentation scheme would make use of a segment table for each process and a list of free blocks in main memory. Each segment table entry would have to give the starting address in main memory of the corresponding segment. The entry should also provide the length of the segment, to ensure that invalid addresses are not used.

6.4 Relocation policy

Before we consider ways of dealing with the shortcomings of partitioning, we must clear up one loose end, which relates to the placement of processes in memory. When the fixed partitioning scheme is used, we can expect that a process will always be assigned to the same partition. That is, the partition that is selected

when a new process is loaded will always be used to swap that process back into memory after it has been swapped out. When the process is first loaded, all relative memory references in the code are replaced by absolute main memory addresses, determined by the base address of the loaded process.

In the case of equal-size partitions, and in the case of a single process queue for unequal-size partitions, a process may occupy different partitions during the course of its life. When a process image is first created, it is loaded into some partition in main memory. Later, the process may be swapped out; when it is subsequently swapped back in, it may be assigned to a partition different from the previous one. The same is true for dynamic partitioning.

Now consider that a process in memory includes instructions plus data. The instructions will contain memory references of the following two types:
Addresses of data items, used in load and store instructions and in some arithmetic and logical instructions.
Addresses of instructions, used for branch and call instructions.

But now we see that these addresses are not fixed. They change each time the process is swapped in or shifted. To solve this problem, a distinction is made among several types of addresses. A logical address is a reference to a memory location independent of the current assignment of data to memory; a translation must be made to a physical address before the memory access can be achieved. A relative address is a particular example of a logical address, in which the address is expressed as a location relative to some known point, usually the beginning of the program. A

physical address, or absolute address, is an actual location in main memory. Programs that employ relative addresses are loaded using dynamic run-time loading; that is, all the memory references in the loaded process are relative to the origin of the program. Thus, a hardware means is needed for translating relative addresses into physical main memory addresses at the time of execution of the instruction that contains the reference.

6.4.1 Non-segmentation system (best fit, worst fit, first fit)

Because memory compaction is time consuming, it behooves the operating system designer to be clever in deciding how to assign processes to memory (how to plug the holes). When it is time to load or swap a process into main memory, and if there is more than one free block of memory of sufficient size, then the operating system must decide which free block to allocate. Three placement algorithms that can be considered are best fit, first fit and worst fit. All are limited to choosing among free blocks of main memory that are equal to or larger than the process to be brought in. Best fit chooses the block that is closest in size to the request. First fit begins to scan memory from the beginning and chooses the first available block that is large enough. Worst fit chooses the largest available block, leaving the largest possible leftover fragment.
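The three placement policies can be sketched over a simple list of free-block sizes. This is only an illustrative sketch under the assumption that free blocks are tracked as a flat array of sizes; a real allocator would also record block addresses and split the chosen block after placement.

```c
#include <stdio.h>

/* Each function returns the index of the chosen free block, or -1 if none fits. */

static int first_fit(const int sz[], int n, int req)
{
    for (int i = 0; i < n; i++)                 /* first block large enough */
        if (sz[i] >= req) return i;
    return -1;
}

static int best_fit(const int sz[], int n, int req)
{
    int choice = -1;
    for (int i = 0; i < n; i++)                 /* smallest block that still fits */
        if (sz[i] >= req && (choice < 0 || sz[i] < sz[choice])) choice = i;
    return choice;
}

static int worst_fit(const int sz[], int n, int req)
{
    int choice = -1;
    for (int i = 0; i < n; i++)                 /* largest block overall */
        if (sz[i] >= req && (choice < 0 || sz[i] > sz[choice])) choice = i;
    return choice;
}

int main(void)
{
    /* Free blocks (in KB) roughly as in Figure 6.2(a). */
    int free_kb[] = { 8, 12, 22, 18, 8, 6, 14, 36 };
    int n = sizeof free_kb / sizeof free_kb[0], req = 16;

    int f = first_fit(free_kb, n, req);
    int b = best_fit(free_kb, n, req);
    int w = worst_fit(free_kb, n, req);
    printf("first fit: %dK block, %dK left\n", free_kb[f], free_kb[f] - req);
    printf("best  fit: %dK block, %dK left\n", free_kb[b], free_kb[b] - req);
    printf("worst fit: %dK block, %dK left\n", free_kb[w], free_kb[w] - req);
    return 0;
}
```

For a 16K request against these blocks, the three functions reproduce the fragments quoted below: 6K for first fit, 2K for best fit and 20K for worst fit.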

Figure 6.2(a) shows an example memory configuration after a number of placement and swapping-out operations. The last block that was used was a 22KB block, from which a 14KB partition was created. Figure 6.2(b) shows the difference between the best-fit, first-fit and worst-fit placement algorithms in satisfying a 16KB allocation request. Best fit searches the entire list of available blocks and makes use of the 18KB block, leaving a 2KB fragment. First fit results in a 6KB fragment, and worst fit results in a 20KB fragment.

Which of these approaches is best will depend on the exact sequence of process swapping that occurs and the sizes of those processes. The first fit algorithm is not only the simplest but is usually the best and fastest as well. The worst fit algorithm tends to produce worse results than first fit: because it always allocates from the largest block of free memory, that block is quickly broken up into smaller fragments, and compaction may be required more frequently with worst fit. On the other hand, the first fit algorithm may litter the front end of memory with small free partitions that need to be searched over on each subsequent first-fit pass. The best fit algorithm, despite its name, is usually the worst performer. Because this algorithm looks for the smallest block that will satisfy the requirement, it guarantees that the fragment left behind is as small as possible. Although each memory request wastes the smallest amount of memory, the result is that main memory is quickly littered with blocks too small to satisfy memory allocation requests. Thus, memory compaction must be done more frequently than with the other algorithms.

F2007/Unit6/18 8K 12K 22K llocated 18K (14K) 8K Allocated Block 6K Free block 14K 36K ( a) 8K 12K 6K

Figure 6.2: Example memory configuration (a) before and (b) after allocation of a 16KB block, showing the allocated and free blocks and the fragments left by best fit, first fit and worst fit. (Source: Stallings, William (1995) Operating Systems)

6.4.2 Segmentation System (LRU, LFU, FIFO)

As pointed out earlier, in the paging scheme the user's view of memory is not the same as the actual physical memory. The user views memory as a collection of segments of variable size, with no necessary ordering among the segments.

Consider the simple situation of writing a program. You write a main program with a set of subroutines, functions, etc. You may use stacks, arrays and tables, referred to by name, without caring where they are stored. Elements in a segment are identified by their offset from the beginning of the segment, such as the first statement of the program or the fifth instruction of the square root function. The memory management scheme using segmentation supports this user view of memory: the logical address space is a collection of segments, and each segment has a name and a length. Addresses specify both the segment name and the offset within the segment. The user specifies the segment name and an offset; segments can also be numbered and referred to by number.

Like a page table, a segment table can be kept in fast registers so that it can be referenced quickly. However, if it is kept in memory, then the mapping requires two memory references for each logical address, thus slowing down the computer. To improve speed, a set of associative registers is used to hold the most recently used segment table entries, which reduces the slowdown to around 10 to 15 percent. One advantage of segmentation is that protection can be associated with a segment; for example, an instruction segment can be made read-only. Another advantage is the sharing of code and data: programs such as editors can be shared, with only one copy needed. Segmentation may cause external fragmentation, causing a process to wait until a large enough hole is available.
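Below is a minimal sketch in C of the segment-table lookup described above. The table contents and the trap handling are invented for illustration; real hardware keeps the segment table (or its most recently used entries) in fast or associative registers rather than walking a C array.

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* One entry per segment: where the segment starts in main memory
   and how long it is (used to reject invalid offsets).            */
struct segment_entry {
    uint32_t base;
    uint32_t length;
};

/* A hypothetical three-segment process: code, data, stack. */
static const struct segment_entry seg_table[] = {
    { 0x10000, 0x4000 },   /* segment 0 */
    { 0x30000, 0x1000 },   /* segment 1 */
    { 0x52000, 0x2000 },   /* segment 2 */
};

static uint32_t translate(uint32_t segment, uint32_t offset)
{
    const struct segment_entry *e = &seg_table[segment];
    if (offset >= e->length) {                 /* offset beyond segment end */
        fprintf(stderr, "segmentation violation: segment %u offset %u\n",
                (unsigned)segment, (unsigned)offset);
        exit(EXIT_FAILURE);
    }
    return e->base + offset;                   /* base + offset = physical address */
}

int main(void)
{
    printf("(1, 0x200) -> 0x%X\n", (unsigned)translate(1, 0x200));  /* 0x30200 */
    printf("(2, 0x1FF) -> 0x%X\n", (unsigned)translate(2, 0x1FF));  /* 0x521FF */
    translate(1, 0x2000);     /* past the 0x1000-byte limit of segment 1: trapped */
    return 0;
}
```

Unlike the paging sketch earlier, each entry carries an explicit length, because segments are of unequal size and the offset must be checked against it.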

ACTIVITY 6B

6.4 Fill in the blanks with the suitable answers given below.

a. Many years ago, people were confronted with programs that were too big to fit in the available memory. The solutions were called __________.
b. The method that was devised (Fotheringham, 1961) is known as __________.
c. The basic idea behind virtual memory is that the combined __________ may exceed the amount of physical memory available.

(Answers to choose from: size of the program, data and stack; virtual memory; overlay)

6.5 Guess the two virtual memory implementation techniques below:
a. __________
b. __________

FEEDBACK TO ACTIVITY 6B

6.4 a. Overlay
b. Virtual memory

c. Size of the program, data and stack

6.5 a. PAGING TECHNIQUES
b. SEGMENTATION TECHNIQUES

SELF-ASSESSMENT 1

You are approaching success. Try all the questions in this self-assessment section and check your answers with those given in the Feedback on Self-

Assessment 1 given below. If you face any problems, discuss them with your lecturer. Good luck!!!

Question 6-1
a. Discuss logical organization as one of the objectives of memory management in an operating system.
b. What is the importance of relocation and protection in memory management?

SELF-ASSESSMENT 2

Question 6-2

a. Describe the techniques used to implement virtual memory.
b. Explain the non-segmentation and segmentation systems in memory management.

FEEDBACK TO SELF-ASSESSMENT 1

Question 6-1
Please refer to the input given and discuss with your lecturer.

FEEDBACK TO SELF-ASSESSMENT 2

Question 6-2
Please refer to the input given and discuss with your lecturer.