Chapter 9 Memory Management

Contents: 1. Introduction 2. Computer-System Structures 3. Operating-System Structures 4. Processes 5. Threads 6. CPU Scheduling 7. Process Synchronization 8. Deadlocks 9. Memory Management 10. Virtual Memory 11. File Systems

Chapter 9 Memory Management

Memory Management: Motivation
Keep several processes in memory to improve a system's performance. The choice of memory-management method is application-dependent and hardware-dependent.
Memory is a large array of words or bytes, each with its own address. Memory is always too small!

Memory Management: The Viewpoint of the Memory Unit
The memory unit sees only a stream of memory addresses! What should be done?
- Track which areas are free or used (and by whom)
- Decide which processes get memory
- Perform allocation and de-allocation
Remark: there is interaction between CPU scheduling and memory allocation!

Background: Address Binding
Address binding is the binding of instructions and data to memory addresses. Binding time:
- Compile time: where the program will reside in memory is known, so absolute code is generated (e.g., MS-DOS *.COM programs). The process must then execute in that specific memory space.
- Load time: all memory references made by the program are translated when it is loaded; the code is relocatable, but the addresses stay fixed while the program runs. A process may move from one memory segment to another only if binding is delayed until run time.
- Execution time: binding may change as the program runs, e.g., with dynamically loaded system libraries in the in-memory binary image.
[Figure: a source program (symbolic addresses, e.g., x) is compiled into an object module (relocatable addresses), linked with other object modules into a load module, and loaded, together with system libraries, into the in-memory binary image (absolute addresses).]

Logical Versus Physical Address
The user program deals with logical addresses, also called virtual addresses when binding happens at run time.
[Figure: the CPU issues logical address 346; the memory-management unit (MMU), a hardware support, adds the relocation register (14000) to form physical address 14346, which goes to the memory address register.]
A logical (physical) address space is the set of logical (physical) addresses generated by a process. The physical addresses of a program are transparent to every process!
The MMU maps virtual addresses to physical addresses. Different memory-mapping schemes need different MMUs, which are hardware devices (and slow accesses down).
Compile-time and load-time binding schemes collapse the logical and physical address spaces into one.
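
A minimal sketch of the relocation-register mapping above, assuming a simplified MMU with only a relocation register and reusing the slide's numbers; this is an illustration, not the textbook's code:

#include <stdio.h>

/* Relocation register loaded by the OS for the running process
 * (value taken from the slide's example). */
static unsigned relocation_register = 14000;

/* The MMU adds the relocation register to every logical address
 * issued by the CPU to form the physical address. */
static unsigned mmu_translate(unsigned logical)
{
    return logical + relocation_register;
}

int main(void)
{
    unsigned logical = 346;
    /* Reproduces the slide: 346 + 14000 = 14346. */
    printf("logical %u -> physical %u\n", logical, mmu_translate(logical));
    return 0;
}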

Dynamic Loading
A routine is not loaded until it is called. A relocatable linking loader must be invoked to load the desired routine and update the program's address tables.
Advantage: memory space is better utilized. Users may rely on OS-provided libraries to achieve dynamic loading.

Dynamic Linking
A small piece of code, called a stub, is used to locate or load the appropriate routine.
Advantages: saves memory space by sharing the library code among processes; eases memory protection and library updates.
(Static linking, by contrast, simply combines the language library with the program object module into one binary program image.)

Overlays
Motivation: keep in memory only those instructions and data needed at any given time.
Example: two overlays of a two-pass assembler. The symbol table (20KB), common routines (30KB), and overlay driver (10KB) stay resident, while pass 1 (70KB) and pass 2 (80KB) are overlaid in turn. Certain relocation and linking algorithms are needed!
Memory space is saved at the cost of run-time I/O. Overlays can be achieved without OS support by using absolute-address code; however, it is not easy to program an overlay structure properly!
We need some sort of automatic technique for running a large program in limited physical memory!

Swapping
[Figure: processes p1 and p2 are swapped out of user space in main memory to a backing store, and swapped back in, under OS control.]
Should a process be put back into the same memory space that it occupied previously? It depends on the binding scheme!

Swapping: A Naive Way
Pick a process from the ready queue; the dispatcher checks whether the process is in memory. If yes, dispatch the CPU to the process; if no, swap the process in first.
Potentially high context-switch cost: 2 * (1000KB / 5000KB per sec + 8ms) = 416ms (transfer time plus latency delay, for one swap-out and one swap-in).

Swapping (continued)
The execution time of each process should be long relative to the swapping time (e.g., the 416ms in the last example)!
Only swap in what is actually used: swapping 100KB at 1000KB per sec takes only 100ms. Users must then keep the system informed of memory usage.
Who should be swapped out? Lower-priority processes? Are there constraints, e.g., pending I/O into a process's memory (I/O buffering)? These are system-design issues.
- Separate the swapping space from the file system for efficient usage.
- Disable swapping whenever possible, as in many versions of UNIX: swapping is triggered only if memory usage passes a threshold and many processes are running.
- In Windows 3.1, a swapped-out process is not swapped back in until the user selects that process to run.
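
To make the swapping costs concrete, here is a small hedged calculation that reproduces the slide's figures (a 1000KB process, a 5000KB-per-second transfer rate, an 8ms latency per transfer, one swap-out plus one swap-in, and the 100KB partial swap); the numbers are the slide's, the code is only an illustration:

#include <stdio.h>

int main(void)
{
    double size_kb    = 1000.0;  /* resident size of the swapped process       */
    double rate_kbps  = 5000.0;  /* backing-store transfer rate, KB per second */
    double latency_ms = 8.0;     /* average latency per transfer               */

    /* One transfer = data-movement time + latency: 200ms + 8ms = 208ms. */
    double transfer_ms = size_kb / rate_kbps * 1000.0 + latency_ms;

    /* A full swap is one swap-out plus one swap-in: 2 * 208ms = 416ms. */
    printf("one transfer: %.0f ms, full swap: %.0f ms\n",
           transfer_ms, 2.0 * transfer_ms);

    /* Swapping only what is actually used, e.g., 100KB at 1000KB per sec. */
    printf("100KB at 1000KB/s: %.0f ms\n", 100.0 / 1000.0 * 1000.0);
    return 0;
}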

Contiguous Allocation: Single User
[Figure: memory from 0000 to 8888 holds the OS, then the user region bounded by relocation register a and limit register b, then unused space.]
A single user is allocated as much memory as needed.
Problem: size restriction. Overlays are the workaround (as in MS-DOS).
Hardware support for memory mapping and protection: the CPU's logical address is compared with the limit register; if it is within bounds, the relocation register is added to form the physical address, otherwise a trap is raised.
Disadvantage: CPU and resources are wasted, since no multiprogramming is possible.
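
The check-then-relocate path described above can be sketched as follows, extending the earlier relocation example with the limit check from this slide (the limit value is illustrative, and a real MMU would trap to the OS rather than exit):

#include <stdio.h>
#include <stdlib.h>

/* Per-process values loaded by the dispatcher (illustrative numbers). */
static unsigned limit_register      = 1200;   /* size of the user region  */
static unsigned relocation_register = 14000;  /* start of the user region */

/* Mirrors the hardware path: trap if the logical address is out of
 * bounds, otherwise add the relocation register. */
static unsigned mmu_translate(unsigned logical)
{
    if (logical >= limit_register) {
        fprintf(stderr, "trap: addressing error (logical %u)\n", logical);
        exit(EXIT_FAILURE);   /* real hardware would trap to the OS */
    }
    return logical + relocation_register;
}

int main(void)
{
    printf("logical 346 -> physical %u\n", mmu_translate(346));
    mmu_translate(5000);      /* out of bounds: traps */
    return 0;
}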

Contiguous Allocation: Multiple Users, Fixed Partitions
Memory is divided into fixed partitions, e.g., OS/360 (MFT). A process is allocated an entire partition.
[Figure: partitions 1 to 4 start at 20k, 45k, 60k, and 90k (up to 100k) and hold proc 1, proc 7, and proc 5, each with some fragmentation.]
An OS data structure records the partitions:

  #   size    location   status
  1   25KB    20k        Used
  2   15KB    45k        Used
  3   30KB    60k        Used
  4   10KB    90k        Free

Hardware support: bound registers; each partition may have a protection key (corresponding to a key in the current PSW).
Disadvantage: fragmentation gives poor memory utilization!

Contiguous Allocation: Multiple Users, Dynamic Partitions
Partitions are dynamically created, and OS tables record free and used partitions.
[Figure: memory from 0 to 110k holds the OS (up to 20k), Process 1 (base = 20k, size = 20KB, user = 1), a free hole (base = 40k, size = 30KB), Process 2 (base = 70k, size = 20KB, user = 2), and a free hole (base = 90k, size = 20KB); P3 waits in the input queue with a 40KB memory request!]
Dynamic partitions are better in time and storage usage. Solutions for the dynamic storage-allocation problem (a sketch follows below):
- First fit: find a hole which is big enough. Advantage: fast, and likely to leave large chunks of memory in high memory locations.
- Best fit: find the smallest hole which is big enough. It may need a lot of search time and creates lots of small fragments! Advantage: keeps large chunks of memory available.
- Worst fit: find the largest hole and create a new partition out of it! Advantage: leaves the largest leftover holes, at the cost of a lot of search time!
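
As mentioned above, the three placement policies differ only in which free hole they pick. A minimal hedged sketch follows; the hole sizes are illustrative, and a real allocator would also split the chosen hole and coalesce neighbouring free areas:

#include <stdio.h>

#define NHOLES 4

/* Free-hole sizes in KB (illustrative values). */
static int holes[NHOLES] = {30, 20, 50, 10};

/* Each policy returns the index of the chosen hole, or -1 if none fits. */
static int first_fit(int request)
{
    for (int i = 0; i < NHOLES; i++)
        if (holes[i] >= request)
            return i;                 /* first hole that is big enough */
    return -1;
}

static int best_fit(int request)
{
    int best = -1;
    for (int i = 0; i < NHOLES; i++)
        if (holes[i] >= request && (best < 0 || holes[i] < holes[best]))
            best = i;                 /* smallest hole that is big enough */
    return best;
}

static int worst_fit(int request)
{
    int worst = -1;
    for (int i = 0; i < NHOLES; i++)
        if (holes[i] >= request && (worst < 0 || holes[i] > holes[worst]))
            worst = i;                /* largest hole */
    return worst;
}

int main(void)
{
    int request = 18;  /* KB */
    printf("first fit: hole %d\n", first_fit(request));   /* hole 0 (30KB) */
    printf("best fit:  hole %d\n", best_fit(request));    /* hole 1 (20KB) */
    printf("worst fit: hole %d\n", worst_fit(request));   /* hole 2 (50KB) */
    return 0;
}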

Contiguous Allocation: Example, First Fit (RR scheduler with quantum = 1)
Job queue:

  Process   Memory    Time
  P1        600KB     10
  P2        1000KB    5
  P3        300KB     20
  P4        700KB     8
  P5        500KB     15

[Figure: 2560k of memory with the OS in the first 400k. At time 0, P1, P2, and P3 are placed by first fit. At time 14, P2 terminates and frees its memory, and P4 is allocated into part of that hole (1000k to 1700k). At time 28, P1 terminates; the question is where P5's 500KB request can go among the scattered free chunks, illustrating external fragmentation.]

Fragmentation: Dynamic Partitions
External fragmentation occurs as small chunks of memory accumulate as a by-product of partitioning, due to imperfect fits.
Statistical analysis for the first-fit algorithm: about 1/3 of memory is unusable (the 50-percent rule).
Solutions: (a) merge adjacent free areas; (b) compaction, i.e., compact all free areas into one contiguous region, which requires user processes to be relocatable. Is there any optimal compaction strategy?

Fragmentation: Dynamic Partitions (compaction)
[Figure: 2100K of memory holds the OS (0 to 300K), P1 (300K to 500K), P2 (500K to 600K), a 400KB hole, P3 (1000K to 1200K), a 300KB hole, P4 (1500K to 1900K), and a 200KB hole. Different compaction strategies move P3 and/or P4 (copying 600KB, 400KB, or 200KB of data) to leave one contiguous 900K free region.]
Cost: finding an optimal compaction strategy is expensive (time complexity O(n!)?!). Compaction can be combined with swapping and relies on dynamic rather than static relocation.

Internal fragmentation: a small chunk of unused memory internal to a partition.
Example: a free area of 20,002 bytes remains and P3 requests 20KB. Do we give P3 exactly 20KB and leave a 2-byte free area? To reduce the free-space maintenance cost, we give all 20,002 bytes to P3 and accept the 2 bytes as internal fragmentation!
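
A hedged sketch of the compaction idea from this slide (the block list is illustrative, not any particular OS's algorithm): used partitions are slid toward low memory so that the free space merges into one region, and the amount of data copied is the cost the slide refers to. A real system would also update each process's relocation register.

#include <stdio.h>

#define NBLOCKS 5

struct block { unsigned base, size; int used; };

/* Illustrative memory map in KB: OS, P1, hole, P3, hole. */
static struct block mem[NBLOCKS] = {
    {0, 300, 1}, {300, 200, 1}, {500, 400, 0}, {900, 300, 1}, {1200, 900, 0},
};

/* Slide every used block down over the holes below it; the amount of
 * data copied is the compaction cost mentioned on the slide. */
static void compact(void)
{
    unsigned next = 0, moved = 0;
    for (int i = 0; i < NBLOCKS; i++) {
        if (!mem[i].used)
            continue;
        if (mem[i].base != next) {
            moved += mem[i].size;  /* this block's contents must be copied */
            mem[i].base = next;    /* and its relocation register updated  */
        }
        next += mem[i].size;
    }
    printf("compaction moved %uKB; free region starts at %uKB\n", moved, next);
}

int main(void)
{
    compact();   /* moves the 300KB block at 900KB down to 500KB */
    return 0;
}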

Fragmentation: Dynamic Partitions (summary)
Dynamic partitioning:
Advantages: eliminates fragmentation to some degree; allows more partitions and a higher degree of multiprogramming.
Disadvantages: the compaction-versus-fragmentation trade-off; the amount of contiguous free memory may not be enough for a process; memory locations may be allocated but never referenced; relocation hardware adds cost and slows execution down.
Solution: page memory!

Paging: Objective
Users see a logically contiguous address space although its physical addresses are scattered throughout physical memory.
Units of memory and backing store: physical memory is divided into fixed-size blocks called frames; the logical memory space of each process is divided into blocks of the same size called pages; the backing store, if used, is also divided into blocks of the same size.

Paging: Basic Method
[Figure: the CPU issues a logical address consisting of a page number p and a page offset; the page table maps p to the base address of a frame f, and the physical address is formed from f and the offset.]
Address translation: an m-bit logical address is split into a page number p (the high-order m - n bits) and a page offset (the low-order n bits), where the page size is 2^n. The maximum number of pages is 2^(m-n) and the logical address space is 2^m; the physical address space is a separate question.
A page size tends to be a power of 2 for efficient address translation. The actual page size depends on the computer architecture; today it ranges from 512B to 16KB.

Paging: Basic Method, an example
[Figure: a logical memory of four 4-byte pages A, B, C, D is mapped by the page table (5, 6, 1, 2) into an 8-frame physical memory.]
Example: with 4-byte pages, logical address 5 (= 1 * 4 + 1, i.e., page 1, offset 1) maps through page-table entry 1 (frame 6) to physical address 6 * 4 + 1 = 25.
There is no external fragmentation: paging is a form of dynamic relocation. The average internal fragmentation is about one-half page per process.
The page size generally grows over time as processes, data sets, and memory have become larger. With a 4-byte page-table entry and 4KB pages, 2^32 entries * 2^12 B per frame = 2^44 B = 16TB of addressable physical memory.
Page size trades off disk I/O efficiency and page-table maintenance against internal fragmentation. Example: 8KB or 4KB for Solaris.
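
The 5 to 25 translation above is a split into page number and offset followed by a table lookup. A small hedged sketch, assuming the slide's 4-byte pages and its page table (5, 6, 1, 2):

#include <stdio.h>

#define PAGE_BITS 2u                     /* 4-byte pages: 2 offset bits */
#define PAGE_SIZE (1u << PAGE_BITS)

/* Page table from the slide: page 0 -> frame 5, 1 -> 6, 2 -> 1, 3 -> 2. */
static unsigned page_table[4] = {5, 6, 1, 2};

static unsigned translate(unsigned logical)
{
    unsigned page   = logical >> PAGE_BITS;        /* high-order bits */
    unsigned offset = logical & (PAGE_SIZE - 1);   /* low-order bits  */
    unsigned frame  = page_table[page];
    return frame * PAGE_SIZE + offset;
}

int main(void)
{
    /* Reproduces the slide: logical 5 (page 1, offset 1) -> physical 25. */
    printf("logical 5 -> physical %u\n", translate(5));
    return 0;
}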

Paging: Basic Method (continued)
Page replacement: for now, an executing process has all of its pages in physical memory.
Maintenance of the frame table: one entry for each physical frame, recording the status of the frame (free or allocated) and its owner.
The page table of each process must be saved when the process is preempted, so paging increases context-switch time!

Paging: Hardware Support
Where should page tables live, in registers or in memory? Efficiency is the main consideration!
- Registers: the page table must be small!
- Memory: a page-table base register (PTBR) points to the page table.

Paging: Hardware Support, page tables in memory
Advantages: the size of a page table is practically unlimited, and the context-switch cost may be low because the CPU dispatcher merely changes the PTBR instead of reloading another page table.
Disadvantage: memory access is slowed by a factor of 2 (one access for the page-table entry, one for the data).
Translation look-aside buffers (TLB): associative, high-speed memory of (key/tag, value) pairs, typically 16 to 1024 entries, with a lookup time under 10% of a memory access time.
TLB disadvantages: expensive hardware, and the contents must be flushed when switching page tables. Advantage: fast, constant search time across all (key, value) items.

Paging: Hardware Support, TLB operation
[Figure: the CPU issues a logical address (page number p, offset); on a TLB hit, the frame number f comes straight from the TLB; on a TLB miss, the page table supplies f, and the physical address is formed and sent to physical memory.]
Address-space identifiers (ASIDs) in the TLB allow entries to be matched to processes, providing protection and avoiding a flush on every switch.
Update the TLB when a TLB miss occurs; replacement of TLB entries might be needed.

Paging: Effective Memory Access Time
Hit ratio: the percentage of times that a page number is found in the TLB. The hit ratio of a TLB largely depends on its size and on the replacement strategy for TLB entries!
Effective memory access time = Hit-Ratio * (TLB lookup + a mapped memory access) + (1 - Hit-Ratio) * (TLB lookup + a page-table lookup + a mapped memory access)

Paging: Effective Memory Access Time, an example
With 20ns per TLB lookup and 100ns per memory access:
Effective access time = 0.8 * 120ns + 0.2 * 220ns = 140ns when the hit ratio is 80%.
Effective access time = 0.98 * 120ns + 0.02 * 220ns = 122ns when the hit ratio is 98%.
The Intel 486 has a 32-register TLB and claims a 98 percent hit ratio.

Paging: Protection & Sharing, protection
[Figure: each page-table entry carries a valid/invalid bit and protection bits alongside the frame number.]
Each entry answers: is the page in memory (valid/invalid bit)? Is it a valid page? Has it been modified (dirty bit)? What accesses are allowed (r/w/e protection bits, e.g., 100 for read-only, 010 for write-only, 110 for read/write)?
Use a page-table length register (PTLR) to indicate the size of the page table; unused page-table entries can then be ignored during maintenance.
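
A tiny hedged calculation reproducing the effective-access-time numbers above, assuming (as the slide does) a 20ns TLB lookup, a 100ns memory access, and one extra memory access for the page-table lookup on a miss:

#include <stdio.h>

/* Effective access time for one-level paging with a TLB. */
static double eat(double hit_ratio, double tlb_ns, double mem_ns)
{
    double hit  = tlb_ns + mem_ns;            /* TLB lookup + mapped access */
    double miss = tlb_ns + mem_ns + mem_ns;   /* + one page-table lookup    */
    return hit_ratio * hit + (1.0 - hit_ratio) * miss;
}

int main(void)
{
    printf("hit ratio 0.80: %.0f ns\n", eat(0.80, 20.0, 100.0)); /* 140 ns */
    printf("hit ratio 0.98: %.0f ns\n", eat(0.98, 20.0, 100.0)); /* 122 ns */
    return 0;
}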

Paging: Protection & Sharing, example
[Figure: a process occupying addresses 0 up to 12,287 (its data actually ends at 10,468) in a 16,384-byte (2^14) logical address space with 2KB pages. The logical address is split into a 3-bit page number and an 11-bit offset. Pages P0 to P5 map to valid frames 2, 3, 4, 7, 8, and 9; page-table entries 6 and 7 are marked invalid. How many entries should the PTLR cover?]
[Figure: processes P1 and P2 each have a page table whose entries for the shared editor pages (*ed1, *ed2, *ed3) map to the same frames 3, 4, and 6, while their private data pages (*data1, *data2) map to different frames, 1 and 7.]
Procedures which are executed often (e.g., an editor) can be divided into procedure and data, and a lot of memory can be saved. Reentrant procedures can be shared, but the non-modifiable nature of shared code must be enforced.
Address referencing inside shared pages could be an issue.

Multilevel Paging: Motivation
The logical address space of a process in many modern computer systems is very large, e.g., 2^32 to 2^64 bytes. With a 32-bit address, 4KB pages, and 4-byte entries, the page table has 2^20 entries and occupies 4MB. Even the page table must be divided into pieces to fit in memory!

Multilevel Paging: Two-Level Paging
[Figure: the logical address is split into p1, p2, and an offset; the PTBR locates the outer page table, p1 selects a page of the page table, and p2 selects the frame in physical memory.]
This scheme is also called a forward-mapped page table.
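
A hedged sketch of the two-level split for a 32-bit address with 4KB pages, assuming the 10/10/12 division used later in the Intel 80386 slides (the exact split is architecture-dependent, and the address value here is arbitrary):

#include <stdio.h>

int main(void)
{
    unsigned long logical = 0x01234ABCUL;              /* arbitrary example address */

    unsigned long p1     = logical >> 22;              /* top 10 bits: outer table  */
    unsigned long p2     = (logical >> 12) & 0x3FFUL;  /* next 10 bits: page table  */
    unsigned long offset = logical & 0xFFFUL;          /* low 12 bits: page offset  */

    /* A full translation would then be: frame = page_of_page_table[p1][p2]. */
    printf("p1 = %lu, p2 = %lu, offset = 0x%03lX\n", p1, p2, offset);
    return 0;
}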

Multilevel Paging: N-Level Paging
Motivation: two-level paging is not appropriate for a huge logical address space!
[Figure: the logical address is split into p1, p2, ..., pn and an offset; the page table is divided into n pieces.]
A translation without a TLB hit costs 1 + 1 + ... + 1 + 1 = n + 1 memory accesses.
Example: with a 98% hit ratio, 4-level paging, 20ns TLB access time, and 100ns memory access time, the effective access time = 0.98 * 120ns + 0.02 * 520ns = 128ns.
SUN SPARC (32-bit addressing): 3-level paging. Motorola 68030 (32-bit addressing): 4-level paging. VAX (32-bit addressing): 2-level paging.

Hashed Page Tables
Objective: to handle large address spaces. The virtual address is fed to a hash function that selects a linked list of elements, where each element holds a virtual page number, a frame number, and a pointer to the next element.
Clustered page tables: each entry contains the mappings for several physical-page frames, e.g., 16.

Inverted Page Table: Motivation
A conventional page table tends to be big and does not correspond to the number of pages residing in physical memory. In an inverted page table, each entry corresponds to a physical frame.
Virtual address: <process ID, page number, offset>.
[Figure: the CPU issues a logical address (pid, p, offset); the inverted page table is searched for the entry (pid, p), and the index of the matching entry gives the frame number in the physical address.]
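
A hedged sketch of the hashed page table described above: each bucket is a linked list of (virtual page number, frame number) elements. The hash function and table size are illustrative choices, not taken from any particular system:

#include <stdio.h>
#include <stdlib.h>

#define NBUCKETS 8u   /* illustrative; real tables are much larger */

struct hpt_entry {
    unsigned vpage;           /* virtual page number */
    unsigned frame;           /* physical frame number */
    struct hpt_entry *next;   /* next element chained in this bucket */
};

static struct hpt_entry *buckets[NBUCKETS];

static void hpt_insert(unsigned vpage, unsigned frame)
{
    struct hpt_entry *e = malloc(sizeof *e);
    e->vpage = vpage;
    e->frame = frame;
    e->next = buckets[vpage % NBUCKETS];   /* simple modulo hash */
    buckets[vpage % NBUCKETS] = e;
}

/* Returns 1 and fills *frame on a hit, 0 on a miss. */
static int hpt_lookup(unsigned vpage, unsigned *frame)
{
    for (struct hpt_entry *e = buckets[vpage % NBUCKETS]; e; e = e->next)
        if (e->vpage == vpage) {
            *frame = e->frame;
            return 1;
        }
    return 0;
}

int main(void)
{
    unsigned frame;
    hpt_insert(42, 7);
    hpt_insert(42 + NBUCKETS, 9);   /* collides with page 42 in the same bucket */
    if (hpt_lookup(42, &frame))
        printf("virtual page 42 -> frame %u\n", frame);
    return 0;
}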

Inverted Page Table
Each entry contains the virtual address of the page held in that frame, and there is one table for the whole system. Because entries are indexed by frame, lookups must search by virtual address; when no match is found, the page table of the corresponding process must be referenced.
Example systems: HP Spectrum, IBM RT, PowerPC, SUN UltraSPARC.
Advantage: decreases the amount of memory needed to store each page table.
Disadvantages: the lengthy search by virtual address, which a hash table can reduce to one hash access plus one inverted-page-table access, and which an associative memory can further hide by holding recently located entries; it is also difficult to implement shared memory.
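
A hedged sketch of the inverted-page-table lookup: one system-wide table indexed by frame is searched for a (pid, page) pair. The table contents are made up for illustration, and a hash over (pid, page) would normally replace the linear scan shown here:

#include <stdio.h>

#define NFRAMES 8   /* illustrative number of physical frames */

struct ipt_entry {
    int      pid;    /* owning process, -1 if the frame is free */
    unsigned vpage;  /* virtual page held in this frame */
};

/* One entry per physical frame; one table for the whole system. */
static struct ipt_entry ipt[NFRAMES] = {
    {1, 0}, {2, 5}, {1, 3}, {-1, 0}, {2, 0}, {1, 1}, {-1, 0}, {2, 2},
};

/* Returns the frame number holding (pid, vpage), or -1 if it is not
 * resident, in which case the process's own page table is consulted. */
static int ipt_lookup(int pid, unsigned vpage)
{
    for (int f = 0; f < NFRAMES; f++)
        if (ipt[f].pid == pid && ipt[f].vpage == vpage)
            return f;
    return -1;
}

int main(void)
{
    printf("pid 1, page 3 -> frame %d\n", ipt_lookup(1, 3));  /* frame 2 */
    printf("pid 2, page 9 -> frame %d\n", ipt_lookup(2, 9));  /* -1: not resident */
    return 0;
}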

Segmentation
Segmentation is a memory-management scheme that supports the user view of memory: a logical address space is a collection of segments with variable lengths (e.g., subroutine, stack, Sqrt, symbol table, main program).
Why segmentation? Paging separates the user's view of memory from the actual physical memory but does not reflect the logical units of a process! Pages and frames are fixed-size, while segments have variable sizes.
For simplicity of representation, <segment name, offset> becomes <segment number, offset>.

Segmentation: Hardware Support, address mapping
[Figure: the CPU issues a logical address (s, offset); the segment-table entry for s provides a limit and a base; if the offset is less than the limit, the physical address is base + offset, otherwise a trap is raised.]
Implementation in registers: limited size! Implementation in memory: a segment-table base register (STBR) and a segment-table length register (STLR).
The advantages and disadvantages mirror paging: use an associative memory (TLB) to improve the effective memory access time, but the TLB must be flushed whenever a new segment table is used!
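
A hedged sketch of the segment-table mapping above, with illustrative base and limit values: the offset is checked against the segment limit before the base is added, and an out-of-range offset traps:

#include <stdio.h>
#include <stdlib.h>

struct segment { unsigned base, limit; };

/* Illustrative segment table (base, limit) for three segments. */
static struct segment seg_table[] = {
    {1400, 1000}, {6300, 400}, {4300, 1100},
};

/* Check the offset against the segment limit, then add the base. */
static unsigned seg_translate(unsigned s, unsigned offset)
{
    if (offset >= seg_table[s].limit) {
        fprintf(stderr, "trap: offset %u beyond limit of segment %u\n", offset, s);
        exit(EXIT_FAILURE);
    }
    return seg_table[s].base + offset;
}

int main(void)
{
    printf("<1, 53> -> physical %u\n", seg_translate(1, 53));   /* 6353 */
    seg_translate(2, 1222);                                     /* traps */
    return 0;
}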

Segmentation: Protection & Sharing
Advantage: a segment is a semantically defined portion of the program, so its entries are likely to be homogeneous (e.g., array, code, stack, data). Segments are natural logical units for protection!
Sharing of code and data improves memory usage; sharing occurs at the segment level.
Potential problems:
- External fragmentation: segments must occupy contiguous memory.
- Address referencing inside shared segments can be a big issue, since addresses are <segment #, offset> pairs: indirect addressing?! Should all shared-code segments have the same segment number? How do we find the right segment number when the number of users sharing the segments grows? Avoid references to the segment number.

Segmentation: Fragmentation
Motivation: segments are of variable lengths, so memory allocation is a dynamic storage-allocation problem (best fit? first fit? worst fit?). External fragmentation will occur!
It depends on factors such as the average segment size: as segments shrink toward a single byte, external fragmentation vanishes but the overhead (base + limit registers per segment) increases substantially!
Remark: the external fragmentation problem here is milder than that of the dynamic-partition method, because segments are likely to be smaller than the entire process. What about internal fragmentation?

Segmentation with Paging
Motivation: segmentation suffers from external fragmentation, paging suffers from internal fragmentation, and segments are semantically defined portions of a program. So page the segments!

Paged Segmentation: Intel 80386
8K private segments + 8K public segments; page size = 4KB; maximum segment size = 4GB.
Tables: the local descriptor table (LDT) and the global descriptor table (GDT); 6 microprogram segment registers are used for caching.
Logical address: a 16-bit selector (13-bit segment number s, a 1-bit g flag selecting GDT or LDT, and 2 protection bits p) plus a 32-bit segment offset.
Linear address (32 bits): a 10-bit page-directory index p1, a 10-bit page-table index p2, and a 12-bit offset.

Paged Segmentation: Intel 80386, address translation
[Figure: the selector indexes the descriptor table, whose entry holds the segment base and segment length; the 32-bit offset is checked against the segment length (trap if it exceeds it) and added to the base to form the linear address, which is then translated via the page directory (located by the page-directory base register), p1, p2, and a page table into the physical address.]
Note: page tables are limited by the segment lengths of their segments.

Paging and Segmentation
To overcome the disadvantages of paging or segmentation alone:
- Paged segments: divide segments further into pages, so a segment need not occupy contiguous memory.
- Segmented paging: segment the page table, allowing variable-size page tables.
Address translation overheads increase! And an entire process still needs to be in memory at once, which leads us to virtual memory!

Paging and Segmentation: Considerations in Memory Management
Hardware support (e.g., STBR, TLB), performance, fragmentation, multiprogramming levels, relocation constraints, swapping, sharing, and protection.