Part Three: Memory Management 8.1/72


Part Three: Memory Management. Programs, together with the data they access, must be in main memory (at least partially) during execution. The computer keeps several processes in memory. Many memory-management schemes exist, reflecting various approaches, and the effectiveness of each algorithm depends on the situation. Selection of a memory-management scheme for a system depends on many factors, especially on the hardware design of the system. Each algorithm requires its own hardware support. P271

Chapter 8: Main Memory

Chapter 8: Main Memory Background Swapping Contiguous Memory Allocation Paging Structure of the Page Table Segmentation

Objectives To provide a detailed description of various ways of organizing memory hardware. To discuss various memory-management techniques, including paging and segmentation.

Background A program must be brought (from disk) into memory and placed within a process for it to be run. Main memory and registers are the only storage the CPU can access directly. Register access takes one CPU clock (or less); main memory can take many cycles. A cache sits between main memory and CPU registers. Protection of memory is required to ensure correct operation: we have to protect the operating system from access by user processes and, in addition, to protect user processes from one another. P276

Base and Limit Registers A pair of base and limit registers define the logical address space.

Binding of Instructions and Data to Memory Address binding of instructions and data to memory addresses can happen at three different stages. Compile time: If the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes. Load time: Relocatable code must be generated if the memory location is not known at compile time. Execution time: Binding is delayed until run time if the process can be moved during its execution from one memory segment to another. Needs hardware support for address maps (e.g., base and limit registers). P278

Multistep Processing of a User Program

Logical vs. Physical Address Space The concept of a logical address space that is bound to a separate physical address space is central to proper memory management. Logical address: generated by the CPU; also referred to as a virtual address. Physical address: the address seen by the memory unit. We now have two different types of addresses: logical addresses (in the range 0 to max) and physical addresses (for example, in the range R + 0 to R + max). The user generates only logical addresses and thinks that the process runs in locations 0 to max. These logical addresses must be mapped to physical addresses before they are used. P279

Logical vs. Physical Address Space Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme.

Memory-Management Unit (MMU) Hardware device that maps virtual to physical addresses. In the MMU scheme, for example, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory. The user program deals with logical addresses; it never sees the real physical addresses. P280

Dynamic relocation using a relocation register

Dynamic Loading A routine is not loaded until it is called. Better memory-space utilization; an unused routine is never loaded. Useful when large amounts of code are needed to handle infrequently occurring cases. No special support from the operating system is required; dynamic loading is implemented through program design. P280

Dynamic Linking Linking is postponed until execution time. A small piece of code, the stub, is used to locate the appropriate memory-resident library routine. The stub replaces itself with the address of the routine and executes the routine. The operating system is needed to check whether the routine is in the process's memory address space. All processes that use a language library execute only one copy of the library code. This system is also known as shared libraries. P281

Swapping A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution. Backing store: a fast disk large enough to accommodate copies of all memory images for all users; it must provide direct access to these memory images. Roll out, roll in: a swapping variant used for priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed. P282

Schematic View of Swapping

Contiguous Allocation Main memory is usually divided into two partitions: the resident operating system, usually held in low memory with the interrupt vector, and user processes, held in high memory. Each process is contained in a single contiguous section of memory. P284

Contiguous Allocation Relocation registers are used to protect user processes from each other, and from changing operating-system code and data. The relocation register contains the value of the smallest physical address; the limit register contains the range of logical addresses; each logical address must be less than the limit register. The MMU maps logical addresses dynamically.

Hardware Support for Relocation and Limit Registers P285

Memory Allocation Fixed-sized partitions: memory is divided into a number of fixed-size regions, and each region can hold one process. The degree of multiprogramming is bound by the number of partitions. Causes internal fragmentation. Multiple-partition method. P286

Contiguous Allocation (Cont.) Multiple-partition allocation Hole: a block of available memory; holes of various sizes are scattered throughout memory. When a process arrives, it is allocated memory from a hole large enough to accommodate it. The operating system maintains information about: a) allocated partitions, b) free partitions (holes).

(Figure: snapshots of multiple-partition allocation — the OS and processes 2 and 5 stay resident while processes 8, 9, and 10 are loaded and freed, leaving holes.)

Dynamic Storage-Allocation Problem How to satisfy a request of size n from a list of free holes? First-fit: Allocate the first hole that is big enough. Best-fit: Allocate the smallest hole that is big enough; must search the entire list, unless it is ordered by size. Produces the smallest leftover hole. Worst-fit: Allocate the largest hole; must also search the entire list. Produces the largest leftover hole. P287
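The three placement strategies can be sketched in a few lines. This is an illustrative sketch, not the textbook's pseudocode; it assumes the free list is kept simply as a list of hole sizes.

```python
def allocate(holes, n, strategy):
    """Return the index of the hole chosen for a request of size n,
    or None if no hole is large enough.

    holes: list of free-hole sizes, in memory order.
    strategy: "first", "best", or "worst".
    """
    # Collect every hole that can satisfy the request.
    candidates = [(size, i) for i, size in enumerate(holes) if size >= n]
    if not candidates:
        return None
    if strategy == "first":
        return min(candidates, key=lambda c: c[1])[1]  # lowest index wins
    if strategy == "best":
        return min(candidates)[1]   # smallest adequate hole
    if strategy == "worst":
        return max(candidates)[1]   # largest hole
    raise ValueError(strategy)

holes = [100, 500, 200, 300, 600]
# For a request of size 212: first-fit picks the 500-byte hole (index 1),
# best-fit the 300-byte hole (index 3), worst-fit the 600-byte hole (index 4).
```

Note that best-fit and worst-fit both scan the whole list, matching the slide's remark about search cost.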

Fragmentation External fragmentation: enough total memory space exists to satisfy a request, but it is not contiguous. Internal fragmentation: allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used. Reduce external fragmentation by compaction: shuffle memory contents to place all free memory together in one large block. Compaction is possible only if relocation is dynamic and is done at execution time. This scheme can be expensive. P287

Paging Contiguous allocation causes fragmentation, and compaction is very harmful to system performance. The physical address space of a process can be noncontiguous; a process is allocated physical memory whenever the latter is available. Divide physical memory into fixed-sized blocks called frames (size is a power of 2, between 512 bytes and 8,192 bytes). Divide logical memory into blocks of the same size, called pages. P288

Paging Keep track of all free frames. To run a program of size n pages, need to find n free frames and load the program. Set up a page table to translate logical to physical addresses. Causes internal fragmentation.

Paging Model of Logical and Physical Memory

Address Translation Scheme The address generated by the CPU is divided into: Page number (p): used as an index into a page table, which contains the base address of each page in physical memory. Page offset (d): combined with the base address to define the physical memory address that is sent to the memory unit. P289

Address Translation Scheme For a given logical address space of 2^m and page size 2^n: the page number p occupies the high m − n bits, and the page offset d occupies the low n bits.
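The p/d split above is just a shift and a mask. A minimal sketch, assuming byte-addressed memory with 2^m addresses and 2^n-byte pages:

```python
def split_address(addr, m, n):
    """Split a logical address into (page number, page offset)."""
    assert 0 <= addr < 2 ** m          # address must fit the address space
    page = addr >> n                   # high m - n bits
    offset = addr & ((1 << n) - 1)     # low n bits
    return page, offset

# 32-byte address space (m = 5), 4-byte pages (n = 2):
# address 13 lies on page 3 at offset 1.
```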

Paging Hardware

Paging Example (Figure: 32-byte memory and 4-byte pages.)

Page Size If process size is independent of page size, we expect internal fragmentation to average one-half page per process. This consideration suggests that small page sizes are desirable. However, overhead is involved in each page-table entry, and this overhead is reduced as the size of the pages increases. Also, disk I/O is more efficient when the amount of data being transferred is larger. Generally, page sizes have grown over time as processes, data sets, and main memory have become larger. Today, pages are typically between 4 KB and 8 KB in size. P291

Page Size Today, pages are typically between 4 KB and 8 KB in size. Usually, each page-table entry is 4 bytes long, but that size can vary as well. A 32-bit entry can point to one of 2^32 physical page frames. If the frame size is 4 KB, then a system with 4-byte entries can address 2^44 bytes (or 16 TB) of physical memory.
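The 16 TB figure follows directly from the numbers on the slide; a quick check:

```python
entry_bits = 32            # a 32-bit page-table entry names 2**32 frames
frame_size = 4 * 1024      # 4 KB frames
addressable = (2 ** entry_bits) * frame_size
assert addressable == 2 ** 44        # 2^32 frames x 2^12 bytes = 2^44 bytes
assert addressable == 16 * 2 ** 40   # i.e. 16 TB
```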

Free Frames Before allocation / After allocation

User's view of memory There is a clear separation between the user's view of memory and the actual physical memory. The user program views memory as one single space, containing only this one program. In fact, the user program is scattered throughout physical memory, which also holds other programs. The difference between the user's view of memory and the actual physical memory is reconciled by the address-translation hardware. P291

User's view of memory The logical addresses are translated into physical addresses. This mapping is hidden from the user and is controlled by the operating system. Notice that the user process by definition is unable to access memory it does not own. It has no way of addressing memory outside of its page table, and the table includes only those pages that the process owns.

Frame table Since the operating system is managing physical memory, it must be aware of the allocation details of physical memory: which frames are allocated, which frames are available, how many total frames there are, and so on. This information is generally kept in a data structure called a frame table. The frame table has one entry for each physical page frame, indicating whether the latter is free or allocated and, if it is allocated, to which page of which process or processes. P292

Context-switch time If a user makes a system call (to do I/O, for example) and provides an address as a parameter (a buffer, for instance), that address must be mapped to produce the correct physical address. The operating system maintains a copy of the page table for each process, just as it maintains a copy of the instruction counter and register contents. This copy is also used by the CPU dispatcher to define the hardware page table when a process is to be allocated the CPU. Paging therefore increases the context-switch time. P292

Implementation of Page Table In the simplest case, the page table is implemented as a set of dedicated registers. These registers should be built with very high-speed logic to make the paging-address translation efficient. Every access to memory must go through the paging map, so efficiency is a major consideration. The CPU dispatcher reloads these registers, just as it reloads the other registers. The use of registers for the page table is satisfactory if the page table is reasonably small (for example, 256 entries). P292

Implementation of Page Table The page table is kept in main memory. The page-table base register (PTBR) points to the page table. The page-table length register (PTLR) indicates the size of the page table. In this scheme, every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction. P293

TLB The two-memory-access problem can be solved by the use of a special fast-lookup hardware cache called associative memory or translation look-aside buffer (TLB). The TLB is associative, high-speed memory. Each entry in the TLB consists of two parts: a key (or tag) and a value. When the associative memory is presented with an item, the item is compared with all keys simultaneously. If the item is found, the corresponding value field is returned. The search is fast; the hardware, however, is expensive. Typically, the number of entries in a TLB is small, often numbering between 64 and 1,024.

Associative Memory Associative memory: parallel search over (page #, frame #) pairs. Address translation for (p, d): if p is in an associative register, get the frame # out; otherwise get the frame # from the page table in memory.

How to use TLB The TLB contains only a few of the page-table entries. When a logical address is generated by the CPU, its page number is presented to the TLB. If the page number is found, its frame number is immediately available and is used to access memory. The whole task may take less than 10 percent longer than it would if an unmapped memory reference were used.

How to use TLB If the page number is not in the TLB (known as a TLB miss), a memory reference to the page table must be made. When the frame number is obtained, we can use it to access memory. In addition, we add the page number and frame number to the TLB, so that they will be found quickly on the next reference. If the TLB is already full of entries, the operating system must select one for replacement. Every time a new page table is selected (for instance, with each context switch), the TLB must be flushed (or erased) to ensure that the next executing process does not use the wrong translation information.
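The hit/miss/flush behaviour described above can be modelled in software. This is a hypothetical sketch only: a real TLB compares all keys in parallel in hardware, and the eviction here (drop the oldest inserted entry) is just a stand-in replacement policy.

```python
class TLB:
    """Toy TLB: a capacity-bounded map from page number to frame number."""

    def __init__(self, capacity, page_table):
        self.capacity = capacity
        self.entries = {}            # page -> frame (the cached translations)
        self.page_table = page_table # full page table, "in main memory"

    def translate(self, page):
        """Return (frame, hit?) for a page number."""
        if page in self.entries:                 # TLB hit
            return self.entries[page], True
        frame = self.page_table[page]            # TLB miss: extra memory access
        if len(self.entries) >= self.capacity:   # full: evict one entry
            self.entries.pop(next(iter(self.entries)))
        self.entries[page] = frame               # cache for the next reference
        return frame, False

    def flush(self):
        """Erase all entries, e.g. on a context switch."""
        self.entries.clear()
```

After a flush, the very next reference to any page is a miss again, which is exactly why context switches make paging more expensive.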

Paging Hardware With TLB

Effective Access Time Associative lookup = ε time units. Assume the memory cycle time is 1 microsecond. Hit ratio: the percentage of times that a page number is found in the associative registers; the ratio is related to the number of associative registers. Hit ratio = α. Effective Access Time (EAT): EAT = (1 + ε)α + (2 + ε)(1 − α) = 2 + ε − α. P294

Effective Access Time An 80-percent hit ratio means that we find the desired page number in the TLB 80 percent of the time. If it takes 20 nanoseconds to search the TLB and 100 nanoseconds to access memory, then a mapped-memory access takes 120 nanoseconds when the page number is in the TLB. If we fail to find the page number in the TLB (20 nanoseconds), then we must first access memory for the page table and frame number (100 nanoseconds) and then access the desired byte in memory (100 nanoseconds), for a total of 220 nanoseconds. EAT = 0.80 × 120 + 0.20 × 220 = 140 nanoseconds.
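The worked example above can be reproduced directly from the EAT formula:

```python
def eat(hit_ratio, tlb_ns, mem_ns):
    """Effective access time for a TLB-backed paged memory."""
    hit_time = tlb_ns + mem_ns        # TLB hit: one memory access
    miss_time = tlb_ns + 2 * mem_ns   # TLB miss: page-table access + data access
    return hit_ratio * hit_time + (1 - hit_ratio) * miss_time

# The slide's numbers: 80% hit ratio, 20 ns TLB search, 100 ns memory access
# give an EAT of 140 ns.
```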

Memory Protection Memory protection is implemented by associating a protection bit with each frame. One bit can define a page to be read-write or read-only. We can easily expand this approach to provide a finer level of protection: create hardware to provide read-only, read-write, or execute-only protection, or, by providing separate protection bits for each kind of access, allow any combination of these accesses. P295

Memory Protection A valid-invalid bit is attached to each entry in the page table: valid indicates that the associated page is in the process's logical address space and is thus a legal page. Rarely does a process use all of its address range. In fact, many processes use only a small fraction of the address space available to them. It would be wasteful in these cases to create a page table with entries for every page in the address range. Most of this table would be unused but would take up valuable memory space.

Valid (v) or Invalid (i) Bit in a Page Table

Memory Protection Provide hardware in the form of a page-table length register (PTLR) to indicate the size of the page table. This value is checked against every logical address to verify that the address is in the valid range for the process. Failure of this test causes an error trap to the operating system. P296

Shared Pages Shared code: one copy of read-only (reentrant) code shared among processes (e.g., text editors, compilers, window systems). Shared code must appear in the same location in the logical address space of all processes. Private code and data: each process keeps a separate copy of the code and data; the pages for the private code and data can appear anywhere in the logical address space. P296

Shared Pages Example

Structure of the Page Table Hierarchical Paging Hashed Page Tables Inverted Page Tables

Hierarchical Page Tables A large logical address space needs a large page table, and the page table must be stored contiguously in main memory. It is very difficult to find a contiguous space larger than the page size in which to store the page table. Break up the logical address space into multiple page tables. A simple technique is a two-level page table. P297

Two-Level Page-Table Scheme

Two-Level Paging Example A logical address (on a 32-bit machine with a 4K page size) is divided into: a page number consisting of 20 bits, and a page offset consisting of 12 bits. Since the page table is paged, the page number is further divided into: a 10-bit page number and a 10-bit page offset. P298

Two-Level Paging Example Thus, a logical address is as follows: page number p1 (10 bits) | page number p2 (10 bits) | page offset d (12 bits), where p1 is an index into the outer page table, and p2 is the displacement within the page of the outer page table. (A very elegant design!)
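The 10/10/12 decomposition is again just shifts and masks; a minimal sketch for the 32-bit example above:

```python
def split_two_level(addr):
    """Split a 32-bit logical address into (p1, p2, d) with 10/10/12 bits."""
    d = addr & 0xFFF             # low 12 bits: offset within the page
    p2 = (addr >> 12) & 0x3FF    # next 10 bits: index into the inner page table
    p1 = addr >> 22              # top 10 bits: index into the outer page table
    return p1, p2, d

# Example: 0xDEADBEEF splits into p1 = 0x37A, p2 = 0x2DB, d = 0xEEF.
```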

Address-Translation Scheme

Three-Level Paging Scheme P299

Hashed Page Tables Common in address spaces > 32 bits. The virtual page number is hashed into a page table. This page table contains a chain of elements hashing to the same location. Virtual page numbers are compared in this chain, searching for a match. If a match is found, the corresponding physical frame is extracted. P300
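A hashed page table is essentially a chained hash map from virtual page number to frame. A minimal sketch, assuming Python's built-in hash as a stand-in for the hardware hash function:

```python
class HashedPageTable:
    """Hashed page table: each bucket holds a chain of
    (virtual_page, frame) pairs whose page numbers hash to that slot."""

    def __init__(self, nbuckets):
        self.buckets = [[] for _ in range(nbuckets)]

    def insert(self, vpage, frame):
        self.buckets[hash(vpage) % len(self.buckets)].append((vpage, frame))

    def lookup(self, vpage):
        # Walk the chain, comparing virtual page numbers for a match.
        for v, f in self.buckets[hash(vpage) % len(self.buckets)]:
            if v == vpage:
                return f
        return None   # no match: a page fault in a real system
```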

Hashed Page Table

Inverted Page Table Only one page table is in the system, with one entry for each real page (frame) of memory. Each entry consists of the virtual address of the page stored in that real memory location, with information about the process that owns that page. Decreases the memory needed to store each page table, but increases the time needed to search the table when a page reference occurs. P301
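The key inversion is that the table is indexed by frame, so translation becomes a search whose result index is the frame number. A hypothetical sketch with (pid, virtual_page) entries:

```python
def translate_inverted(table, pid, vpage):
    """Search an inverted page table for (pid, vpage).

    table: one entry per physical frame, each a (pid, virtual_page)
    pair (or None if the frame is free). The index of the matching
    entry IS the frame number.
    """
    for frame, entry in enumerate(table):
        if entry == (pid, vpage):
            return frame
    return None   # not resident: page fault in a real system

# Hypothetical 3-frame memory: process 1 holds pages 0 and 7,
# process 2 holds page 0.
table = [(1, 0), (2, 0), (1, 7)]
```

This linear scan is why real systems pair inverted page tables with a hash table to bound the search.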

Inverted Page Table Architecture

Segmentation Paging enforces a separation between the user's view of memory and the actual physical memory: the user's view of memory is not the same as the actual physical memory. A program is a collection of segments. A segment is a logical unit such as: main program, procedure, function, method, object, local variables, global variables, common block, stack, symbol table, arrays. P302

User's View of a Program

Logical View of Segmentation (Figure: four segments in user space mapped to scattered regions of the physical memory space.)

Segmentation Architecture A logical address consists of a two-tuple: <segment-number, offset>. The segment table maps two-dimensional user-defined addresses into one-dimensional physical addresses; each table entry has: base — contains the starting physical address where the segment resides in memory; limit — specifies the length of the segment. The segment-table base register (STBR) points to the segment table's location in memory. The segment-table length register (STLR) indicates the number of segments used by a program; a segment number s is legal if s < STLR. P303
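The translation and the two legality checks can be sketched directly. The segment table values below are hypothetical, chosen only to illustrate the base/limit arithmetic:

```python
def seg_translate(seg_table, s, offset, stlr):
    """Translate <s, offset> via a segment table of (base, limit) pairs.

    Raises MemoryError (standing in for a hardware trap) when the
    segment number or the offset is out of range.
    """
    if s >= stlr:                  # segment number must be < STLR
        raise MemoryError("segment number out of range")
    base, limit = seg_table[s]
    if offset >= limit:            # offset must be within the segment
        raise MemoryError("offset beyond segment limit")
    return base + offset           # physical address

# Hypothetical table: segment 0 at base 1400 (length 1000),
# segment 1 at base 6300 (length 400).
segs = [(1400, 1000), (6300, 400)]
# <1, 53> translates to 6353; <1, 450> traps.
```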

Segmentation Hardware P304

Segmentation Architecture (Cont.) Protection: with each entry in the segment table, associate a validation bit (= 0 means an illegal segment) and read/write/execute privileges. Protection bits are associated with segments; code sharing occurs at the segment level. Since segments vary in length, memory allocation is a dynamic storage-allocation problem. A segmentation example is shown in the following diagram.

Example of Segmentation

Homework: 4, 5, 6, 7, 9, 11, 12, 13. End of Chapter 8