Operating Systems, Fall. Lecture 5, Tiina Niklander

Paging: Design Issues, Segmentation

Page fault handling in more detail
1. Hardware traps to the kernel
2. General registers are saved
3. OS determines which virtual page is needed
4. OS checks the validity of the address and seeks a page frame
5. If the selected frame is dirty, it is written to disk
6. OS schedules a disk operation to bring the new page in from disk
7. Page tables are updated
8. The faulting instruction is backed up to where it began
9. The faulting process is scheduled
10. Registers are restored
11. The program continues
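To make the fault-handling sequence concrete, here is a minimal, self-contained C sketch of steps 3-7 (finding a frame, writing back a dirty victim, fetching the page, updating the page table). The data structures and helper names (PTE, pick_victim, read_page_from_disk, and so on) are illustrative stand-ins, not a real kernel interface; steps 1-2 and 8-11 happen outside this function.

```c
/* Illustrative sketch of steps 3-7 of the page fault handling list. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NPAGES     64
#define NFRAMES     8
#define PAGE_SHIFT 12

typedef struct {
    bool     present;   /* page is in a frame            */
    bool     dirty;     /* modified since it was loaded  */
    uint32_t frame;     /* frame number when present     */
    uint32_t swap_slot; /* location on the swap device   */
} PTE;

static PTE page_table[NPAGES];
static int frame_owner[NFRAMES];   /* which page occupies each frame, -1 = free */

/* Stubs standing in for the disk scheduler (steps 5-6). */
static void write_page_to_disk(uint32_t frame, uint32_t slot)  { (void)frame; (void)slot; }
static void read_page_from_disk(uint32_t slot, uint32_t frame) { (void)slot; (void)frame; }

static int pick_victim(void) { static int next = 0; return next++ % NFRAMES; }

void handle_page_fault(uint32_t vaddr)
{
    uint32_t vpn = vaddr >> PAGE_SHIFT;                        /* step 3 */
    if (vpn >= NPAGES) { printf("invalid address\n"); return; } /* step 4 */

    int frame = -1;
    for (int f = 0; f < NFRAMES; f++)                          /* step 4: free frame? */
        if (frame_owner[f] == -1) { frame = f; break; }

    if (frame == -1) {                                         /* no free frame: evict */
        frame = pick_victim();
        PTE *victim = &page_table[frame_owner[frame]];
        if (victim->dirty)                                     /* step 5 */
            write_page_to_disk(frame, victim->swap_slot);
        victim->present = false;
    }

    read_page_from_disk(page_table[vpn].swap_slot, frame);     /* step 6 */
    page_table[vpn].present = true;                            /* step 7 */
    page_table[vpn].frame   = frame;
    page_table[vpn].dirty   = false;
    frame_owner[frame] = (int)vpn;
}

int main(void)
{
    for (int f = 0; f < NFRAMES; f++) frame_owner[f] = -1;
    handle_page_fault(0x3000);                                 /* fault on virtual page 3 */
    printf("page 3 -> frame %u\n", page_table[3].frame);
    return 0;
}
```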

Operating system involvement with paging
- Process creation: determine the program size, create the page table
- Process execution: MMU reset for the new process, TLB flushed
- Page fault time: determine the virtual address causing the fault, swap the target page out and the needed page in
- Process termination: release the page table and the pages

Page fault rate and page size
- More small pages fit into the same memory space
- With large pages, references are more likely to go to a page not yet in memory
- With small pages, references often go to other pages, so the often-used pages end up selected into memory
- Too small a working set size -> higher page fault rate
- Large working set size -> nearly all pages are in memory

Working set size
- The set of pages used during the last Δ references (Δ is the window size)
- Pages not in the working set are candidates to be released or replaced
- Suitable size for the working set? Estimate based on the page fault rate
- 1 ≤ |W(t, Δ)| ≤ min(Δ, n), where n is the number of pages of the process

Page fault rate
- Page fault rate as a function of the number of page frames assigned
- Page Fault Frequency (PFF) algorithm: keep the page fault rate between A and B for each process
- Increase the resident set size at A and reduce it at B
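The PFF idea above can be sketched in a few lines of C: measure the fault rate over an interval and grow or shrink the resident set accordingly. The threshold values and the Process fields below are illustrative assumptions, not taken from the slides.

```c
/* Minimal sketch of a Page Fault Frequency (PFF) adjustment. */
#include <stdio.h>

#define PFF_UPPER 0.05   /* rate A: too many faults, give more frames   */
#define PFF_LOWER 0.01   /* rate B: few faults, frames can be reclaimed */

typedef struct {
    unsigned faults;       /* page faults during the last interval   */
    unsigned references;   /* memory references during the interval  */
    unsigned resident_set; /* page frames currently assigned         */
} Process;

/* Called at the end of each measurement interval. */
void pff_adjust(Process *p)
{
    double rate = (double)p->faults / (double)p->references;

    if (rate > PFF_UPPER)                       /* above A: grow the resident set */
        p->resident_set++;
    else if (rate < PFF_LOWER && p->resident_set > 1)
        p->resident_set--;                      /* below B: shrink it             */

    p->faults = p->references = 0;              /* start a new interval           */
}

int main(void)
{
    Process p = { .faults = 40, .references = 500, .resident_set = 8 };
    pff_adjust(&p);
    printf("resident set is now %u frames\n", p.resident_set);
    return 0;
}
```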

Load control
- Despite good designs, the system may still thrash
- Happens when the Page Fault Frequency (PFF) algorithm indicates that some processes need more memory, but no process needs less
- Solution: reduce the number of processes competing for memory
  - swap one or more processes to disk and divide up the pages they held
  - reconsider the degree of multiprogramming

Swap area, VM backing store
- Static allocation
  - Reserved in advance, space for the whole process
  - Copy the code/text at start, or reserve the space but swap gradually based on need
  - The PCB has the swap location information
- Dynamic allocation
  - Space is reserved only when needed
  - The page table has the swap block number for the page, or a separate disk map stores the page locations on swap
  - No space is reserved for pages that are never stored on swap
- Swap location: Windows: pagefile.sys, win386.swp; Linux: swap partition
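The dynamic scheme above can be sketched as a page table entry that holds either a frame number (page present) or a swap block number (page on the swap area). The struct and field names below are illustrative, not from any real kernel.

```c
/* Sketch: where does a page live, according to its page table entry? */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    bool     present;     /* true: in memory, false: on swap or never written out */
    bool     on_swap;     /* a swap block has been assigned to this page          */
    uint32_t frame;       /* valid when present                                   */
    uint32_t swap_block;  /* valid when on_swap                                   */
} PTE;

void locate_page(const PTE *pte)
{
    if (pte->present)
        printf("in memory, frame %u\n", pte->frame);
    else if (pte->on_swap)
        printf("fetch from swap block %u\n", pte->swap_block);
    else
        printf("never swapped out: zero-fill or load from the program file\n");
}

int main(void)
{
    PTE resident  = { .present = true,  .frame = 42 };
    PTE swapped   = { .present = false, .on_swap = true, .swap_block = 1337 };
    PTE untouched = { 0 };
    locate_page(&resident);
    locate_page(&swapped);
    locate_page(&untouched);
    return 0;
}
```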

Allocation and replacement (Tbl 8.4 [Stal05])

Local versus global allocation policies
(a) Original configuration
(b) Local page replacement
(c) Global page replacement

Shared pages and libraries
- Several processes use them, but there is just one copy in memory
- Shared pages are read-only
- The same page frame is pointed to by multiple page tables
- The code must be position-independent
- The code must be re-entrant: it does not change during execution
- A special case of memory-mapped files

Sharing libraries
- Old style: link the library with the program statically
- New style: link the library at run time, dynamically
  - Windows: DLL, dynamic link library
- Saves space: just one copy in memory
- Removing bugs: just make one correction in the shared library
- Saves recompilation time: a library change does not require relinking the programs
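As a small illustration of run-time (dynamic) linking, the sketch below uses the POSIX dl* interface to open the shared C math library and look up a symbol in it. The library name libm.so.6 is a Linux/glibc detail assumed here, not something stated on the slides.

```c
/* Run-time linking: the shared library is mapped into the address
 * space only when the program asks for it. */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Map the shared math library into this process. */
    void *lib = dlopen("libm.so.6", RTLD_NOW);
    if (!lib) { fprintf(stderr, "dlopen: %s\n", dlerror()); return 1; }

    /* Look up the address of cos() inside the mapped library. */
    double (*cosine)(double) = (double (*)(double))dlsym(lib, "cos");
    if (!cosine) { fprintf(stderr, "dlsym: %s\n", dlerror()); return 1; }

    printf("cos(0) = %f\n", cosine(0.0));
    dlclose(lib);
    return 0;
}
```

Compile with gcc example.c -ldl; every process that loads the library this way shares the single in-memory copy of its read-only code pages.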

Sharing: editor example

Sharing a memory-mapped file (Tan08 Fig 10-3)
- A mapped file is an alternative to explicit file I/O: the file is accessed as a big character array in memory
- Can be used as shared memory between processes
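A minimal sketch of the memory-mapped file idea above, using the POSIX mmap call: the file contents appear as an ordinary character array. The file name example.txt is just an assumption for the example.

```c
/* Access a file through a memory mapping instead of read(). */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("example.txt", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); return 1; }

    /* Map the whole file into the address space. */
    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    /* Access the file like an array: print its first character. */
    printf("first byte: %c\n", data[0]);

    munmap(data, st.st_size);
    close(fd);
    return 0;
}
```

With MAP_SHARED and write permission, two processes mapping the same file see each other's updates, which is the shared-memory use mentioned above.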

Segmentation
- In a one-dimensional address space with growing tables, one table may bump into another
- Solution: put the tables into separate segments

Segmentation (Sta Fig 7.11; example addresses 0000001011101110 and 0001000000000000)
- The programmer (or compiler) determines the logical, unequal-sized segments
- A segment's size can change dynamically, but the segment length must be known
- Still uses logical addresses of the form (segment, offset)
- Segments can be freely located by the OS
- The OS maintains a segment table for each process

Segment table entry (Sta Fig 8.2b)
- Each entry has Present and Modified bits
- The segment location is described by its physical start address (segment base) and length
- Used for address translation
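A minimal sketch of segment-based address translation as described above: a logical (segment, offset) pair is checked against the segment length and added to the segment base. The type and field names are illustrative.

```c
/* Sketch: translate (segment, offset) using a per-process segment table. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    int      present;  /* Present bit                   */
    int      modified; /* Modified bit                  */
    uint32_t base;     /* physical start address        */
    uint32_t length;   /* current length of the segment */
} SegmentEntry;

/* Returns 0 and sets *phys on success, -1 on a segment violation. */
int translate(const SegmentEntry *table, unsigned nsegs,
              unsigned seg, uint32_t offset, uint32_t *phys)
{
    if (seg >= nsegs || !table[seg].present)
        return -1;                       /* no such segment            */
    if (offset >= table[seg].length)
        return -1;                       /* offset beyond segment end  */
    *phys = table[seg].base + offset;    /* base + offset              */
    return 0;
}

int main(void)
{
    SegmentEntry table[] = {
        { .present = 1, .base = 0x20000, .length = 0x1000 },  /* segment 0 */
        { .present = 1, .base = 0x48000, .length = 0x0400 },  /* segment 1 */
    };
    uint32_t phys;
    if (translate(table, 2, 1, 0x10, &phys) == 0)
        printf("(1, 0x10) -> physical 0x%X\n", phys);
    return 0;
}
```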

Segmentation (continued)
- The segment size can be changed dynamically
  - Update the length in the segment table entry
  - May require relocating the segment in memory
  - The MMU must check that the address is valid against the current length value
- Segment allocation may cause external fragmentation; memory compaction may be needed
- A segment is an excellent mechanism for protection and sharing
  - Logical entities make sense as units of sharing
  - The programmer/user is aware of segments and knows their content

Example of pure segmentation
(a)-(d) Memory allocated and deallocated for segments
(e) Removal of the external fragments by compaction

Comparison of paging and segmentation

Combining segmentation with paging

Segmentation with paging
- Segments are split into pages
- Memory management is easier by pages: no external fragmentation, no compaction needed
- A process has
  - one segment table, which contains the sharing and protection information
  - one page table for each segment
- Adding a new segment: a new entry in the segment table
- When a segment grows: new page entries in the segment's page table

Segmentation with paging: MULTICS
- The segment table is reached via the PCB
- Each segment table entry points to a page table
- The segment table still contains the segment length

Segmentation with paging: MULTICS
- The virtual address has three parts:
  - segment number: index in the segment table -> page table
  - page number: index in the page table -> page frame
  - offset within the page

Address translation (Sta Fig 8.13)
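The MULTICS-style two-stage translation above can be sketched as follows: the segment number selects a page table, the page number selects a frame, and the offset is kept. The page size, table sizes, and field names are illustrative assumptions.

```c
/* Sketch: (segment, page, offset) -> physical address. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE     1024u   /* illustrative page size      */
#define PAGES_PER_SEG 4u      /* illustrative page table size */

typedef struct {
    uint32_t length;                  /* segment length in bytes */
    uint32_t frame[PAGES_PER_SEG];    /* per-segment page table  */
} Segment;

int translate(const Segment *segs, unsigned nsegs,
              unsigned seg, unsigned page, unsigned offset, uint32_t *phys)
{
    if (seg >= nsegs) return -1;
    if (page * PAGE_SIZE + offset >= segs[seg].length)
        return -1;                     /* beyond the segment length */
    *phys = segs[seg].frame[page] * PAGE_SIZE + offset;
    return 0;
}

int main(void)
{
    Segment code = { .length = 3000, .frame = { 7, 8, 2, 0 } };
    uint32_t phys;
    if (translate(&code, 1, 0, 2, 100, &phys) == 0)
        printf("(seg 0, page 2, offset 100) -> 0x%X\n", phys);
    return 0;
}
```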

Segmentation with paging: Pentium
- During execution the segment registers hold selectors: CS selects the code segment, DS the data segment
- The Pentium has two descriptor tables: LDT and GDT
  - On a memory access, the selector tells the CPU which table it refers to
  - Each table can contain 8K segment descriptors (13-bit index)
  - Local Descriptor Table (LDT): one per process; the process's code, data, stack etc. segments
  - Global Descriptor Table (GDT): one, shared by all; system segments, including OS segments

Segmentation with paging: Pentium
- A (selector, offset) pair is converted to a linear address
- The linear address from the conversion is
  - a physical address, if paging is disabled (a pure segmentation scheme in this case!)
  - a virtual address, if paging is enabled (pages within the segment)
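A sketch of the selector decoding described above: 13 bits of index, one bit choosing GDT or LDT, and the requested privilege level; the linear address is descriptor base + offset. The descriptor layout here is heavily simplified for illustration.

```c
/* Sketch: (selector, offset) -> linear address on a Pentium-like CPU. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t base;    /* segment base address */
    uint32_t limit;   /* segment limit        */
} Descriptor;

void decode(uint16_t selector, uint32_t offset,
            const Descriptor *gdt, const Descriptor *ldt)
{
    unsigned rpl    = selector & 0x3;         /* bits 0-1: privilege level */
    unsigned is_ldt = (selector >> 2) & 0x1;  /* bit 2: 0 = GDT, 1 = LDT   */
    unsigned index  = selector >> 3;          /* bits 3-15: 13-bit index   */

    const Descriptor *d = is_ldt ? &ldt[index] : &gdt[index];
    if (offset > d->limit) { printf("offset beyond segment limit\n"); return; }

    uint32_t linear = d->base + offset;       /* physical if paging is off,
                                                 virtual (then paged) if on */
    printf("selector %#x (rpl %u, %s[%u]) + %#x -> linear %#x\n",
           selector, rpl, is_ldt ? "LDT" : "GDT", index, offset, linear);
}

int main(void)
{
    Descriptor gdt[8] = { [1] = { .base = 0x100000, .limit = 0xFFFF } };
    Descriptor ldt[8] = { [2] = { .base = 0x200000, .limit = 0x0FFF } };
    decode(0x0008, 0x42, gdt, ldt);   /* GDT entry 1 */
    decode(0x0014, 0x10, gdt, ldt);   /* LDT entry 2 */
    return 0;
}
```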

Segmentation with paging: Pentium
- Two-level page table (a sketch of the table walk follows below)
- Each entry is 32 bits long, with 20 bits for the frame number
- The page directory and each page table have 1024 entries
- Each page table fits in one 4 KB page

Segmentation with paging: Pentium
- Protection on the Pentium is based on levels
- Each process and segment has a protection level; a process runs on one level
- It can access segments on the same and higher levels, but not on a lower level
- It may call procedures on a different level using a selector (call gate) instead of an address
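Referring back to the two-level page table above, the walk splits a 32-bit address into 10 + 10 + 12 bits: 10 bits index the page directory, 10 bits index a page table, and 12 bits are the offset within the 4 KB page. The structures below are a simplified illustration (pointers instead of packed 32-bit entries).

```c
/* Sketch of an x86-style two-level page table walk. */
#include <stdint.h>
#include <stdio.h>

#define ENTRIES   1024u
#define PAGE_SIZE 4096u

typedef struct { unsigned present : 1; unsigned frame : 20; } PTE;
typedef struct { PTE *tables[ENTRIES]; } PageDirectory;

int walk(const PageDirectory *dir, uint32_t vaddr, uint32_t *phys)
{
    uint32_t dir_idx = vaddr >> 22;            /* top 10 bits    */
    uint32_t tbl_idx = (vaddr >> 12) & 0x3FF;  /* middle 10 bits */
    uint32_t offset  = vaddr & 0xFFF;          /* low 12 bits    */

    PTE *table = dir->tables[dir_idx];
    if (!table || !table[tbl_idx].present)
        return -1;                             /* page fault     */
    *phys = table[tbl_idx].frame * PAGE_SIZE + offset;
    return 0;
}

int main(void)
{
    static PTE table0[ENTRIES];
    table0[3] = (PTE){ .present = 1, .frame = 77 };
    PageDirectory dir = { .tables = { table0 } };

    uint32_t phys;
    if (walk(&dir, 0x00003ABC, &phys) == 0)    /* dir entry 0, table entry 3 */
        printf("0x00003ABC -> 0x%X\n", phys);
    return 0;
}
```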

UNIX / Solaris (+ 4BSD) memory management

Two-handed clock (Fig 8.23 [Stal05])
- Fronthand: set Reference = 0
- Backhand: if (Reference == 0), page the frame out
- Pages not used between the sweeps of the two hands are assumed not to be in the working sets of the processes
- Better suited to large memories
- Speed of the hands (frames/sec)? Scanrate
- Gap between the hands? Handspread
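A minimal sketch of the two-handed clock above: the fronthand clears reference bits, and the backhand, handspread frames behind it, evicts frames whose bit is still clear. The frame count, handspread, and the simulated re-reference are illustrative values.

```c
/* Sketch of the two-handed clock page replacement sweep. */
#include <stdio.h>

#define NFRAMES    16
#define HANDSPREAD  4   /* gap between the hands, in frames */

static int referenced[NFRAMES];   /* reference bit per frame, set by the "hardware" */

/* One step of the two hands; returns the evicted frame or -1. */
int clock_step(int *fronthand)
{
    int front = *fronthand;
    int back  = (front - HANDSPREAD + NFRAMES) % NFRAMES;

    referenced[front] = 0;               /* fronthand clears the bit           */

    int victim = -1;
    if (referenced[back] == 0)           /* untouched since the fronthand
                                            passed: not in any working set     */
        victim = back;

    *fronthand = (front + 1) % NFRAMES;  /* both hands advance together        */
    return victim;
}

int main(void)
{
    for (int i = 0; i < NFRAMES; i++) referenced[i] = 1;

    int hand = HANDSPREAD;               /* fronthand starts ahead of backhand */
    for (int step = 0; step < NFRAMES; step++) {
        if (step == 2) referenced[4] = 1;   /* frame 4 is used again before the
                                               backhand reaches it: it survives */
        int victim = clock_step(&hand);
        if (victim >= 0) printf("evict frame %d\n", victim);
    }
    return 0;
}
```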

Linux memory management

Linux: paging (Fig 10-16)
- Architecture independent
  - Alpha: 64-bit addresses, full support for 3 levels; page size 8 KB, offset 13 bits
  - x86: 32-bit addresses, only 2 levels; page size 4 KB, offset 12 bits
- 4-level page table
  - page directory, one page
  - page upper directory, several pages
  - page middle directory, several pages
  - page table, several pages
- The page directory is always in memory; its address is given to the MMU at a process switch
- The other page-table pages can be on disk
- (In the figure: on x86 the missing middle levels simply have value 0.)

Linux: placement (allocation)
- Reserves a consecutive memory region for consecutive pages
  - More efficient fetching and storing with disks
  - Optimizes disk utilisation, at the cost of memory allocation
- Buddy system: 1, 2, 4, 8, 16 or 32 page frames
  - Allocates pages in large groups, which increases internal fragmentation
  - Implementation: a table with a list of unallocated page regions for each size
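A toy sketch of the buddy idea above: a request is rounded up to a power-of-two number of page frames (1..32) and served from per-size free lists, splitting a larger free block in two when needed. This is an illustration only, not the Linux implementation, and it omits freeing and coalescing.

```c
/* Toy buddy-style allocator for 1..32 page frames. */
#include <stdio.h>

#define MAX_ORDER 5    /* block sizes 2^0 .. 2^5 = 1..32 frames */
#define MAX_FREE  64

static int free_blocks[MAX_ORDER + 1][MAX_FREE];  /* start frame of each free block */
static int free_count[MAX_ORDER + 1];

static int order_for(int frames)   /* smallest order whose size >= frames */
{
    int order = 0;
    while ((1 << order) < frames) order++;
    return order;
}

int alloc_frames(int frames)       /* returns the start frame, or -1 */
{
    int order = order_for(frames);
    int o = order;
    while (o <= MAX_ORDER && free_count[o] == 0) o++;  /* find a big enough block */
    if (o > MAX_ORDER) return -1;

    int start = free_blocks[o][--free_count[o]];
    while (o > order) {                                /* split; keep the buddy free */
        o--;
        int buddy = start + (1 << o);
        free_blocks[o][free_count[o]++] = buddy;
    }
    return start;
}

int main(void)
{
    free_blocks[MAX_ORDER][free_count[MAX_ORDER]++] = 0;  /* one free 32-frame block */
    printf("3 frames -> start %d (rounded up to 4)\n", alloc_frames(3));
    printf("8 frames -> start %d\n", alloc_frames(8));
    return 0;
}
```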

Windows memory management

Windows: paging
- No segmentation
- Variable resident set size per process
  - Each process has minimum and maximum limits for its resident set
  - The limits are not hard bounds
  - If there is a large number of free frames, a process can get more frames
  - A greedy process is not allowed to allocate the last 512 free frames
  - If free memory runs low, the resident sets of processes can be decreased
- Demand fetch and prepaging to the standby list

Windows address space (Tan08 Fig 11-32 [Tane01]): the figure marks which regions are copy-on-write and which are not backed by the swap area
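The resident-set policy above can be sketched as a pair of decisions: grow only while plenty of frames are free (and never into the last 512), trim back toward the minimum when memory runs low. All numbers and names below are illustrative assumptions, not the actual Windows algorithm.

```c
/* Sketch of soft min/max resident set limits with trimming. */
#include <stdio.h>

#define LAST_FREE_FRAMES 512   /* never hand out the last 512 free frames */

typedef struct {
    unsigned resident, min, max;
} Process;

/* May a faulting process take another free frame? */
int may_grow(const Process *p, unsigned free_frames)
{
    if (free_frames <= LAST_FREE_FRAMES) return 0;   /* memory is tight        */
    if (p->resident >= p->max && free_frames < 4096)
        return 0;                                    /* soft maximum: exceeded
                                                        only when memory is plentiful */
    return 1;
}

/* When free memory runs low, trim processes back toward their minimum. */
void trim(Process *p)
{
    if (p->resident > p->min)
        p->resident--;          /* move a page to the standby list */
}

int main(void)
{
    Process p = { .resident = 200, .min = 50, .max = 345 };
    printf("may grow with 10000 free frames: %d\n", may_grow(&p, 10000));
    printf("may grow with 400 free frames:   %d\n", may_grow(&p, 400));
    trim(&p);
    printf("after trim: %u resident pages\n", p.resident);
    return 0;
}
```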

Windows: page frames (Fig 11.36 [Tane08])