Chapters 7-8. Memory Management. Chapter 7 - Physical Memory. 7.1 Preparing a Program for Execution

Chapters 7-8: Memory Management
- Real memory
- Logical storage
- Address spaces
- Virtual memory

Chapter 7 - Physical Memory
7.1 Preparing a Program for Execution
    Program Transformations
    Logical-to-Physical Address Binding
7.2 Memory Partitioning Schemes
    Fixed Partitions
    Variable Partitions
7.3 Allocation Strategies for Variable Partitions
7.4 Dealing with Insufficient Memory

Chapter 8 - Virtual Memory
8.1 Principles of Virtual Memory
8.2 Implementations of Virtual Memory
    Paging
    Segmentation
    Paging With Segmentation
    Paging of System Tables
    Translation Look-aside Buffers
8.3 Memory Allocation in Paged Systems
    Global Page Replacement Algorithms
    Local Page Replacement Algorithms
    Load Control and Thrashing
    Evaluation of Paging

Address Binding
- The time at which physical addresses are assigned (= relocation):
  - static binding: programming time, compilation time, linking time, loading time
  - dynamic binding: execution time

Relocation
- Assignment of real memory
- Programs make use of a logical address space (code/data spaces, segments)
- Real memory is linear, divided into segments, subdivided into page frames
- Relocation at load time:
  - static (link and bind early)
  - dynamic (link and bind late), hardware supported (MMU and HW caches)

Fig 7-3a: storage reference with dynamic relocation (general principle).
NL_map: {relocatable (virtual) addresses} -> {real (physical) storage addresses}.
The CPU issues a logical address; the MMU applies the relocation (NL_map) to produce a physical address into main storage, where the data transfer (read/write) then takes place via the memory/I-O subsystem.

Address binding
- How to implement dynamic binding: perform, for each address at run time, pa = address_map(la), where la == va, i.e. pa = F(va)
- Simplest form of the address_map function: a relocation register, pa = la + RR (a short C sketch appears after this slide)
- More general form: page/segment table (Chapter 8)

Partitions - Segments
- Multiprogramming requires that more than one program is kept in main memory
- Memory is divided into logical partitions
- Partitions can be of fixed or variable size and be located at specific or selectable locations (segments)
- Each partition should provide a private address space for the program/process/thread/object
- Efficient partition management and address binding require hardware support
- Segments (base and limit registers) and page frames with appropriate relocation support alleviate this task for the operating system
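The relocation-register form of address_map above can be written out as a short C sketch; a limit register is added for protection, echoing the base/limit remark on the next slide. The names RR and LIMIT and the 32-bit address width are assumptions of this example, not part of the slides.

    #include <stdbool.h>
    #include <stdint.h>

    /* Dynamic binding with a relocation (base) register: pa = la + RR.
     * RR and LIMIT model registers loaded by the OS when it dispatches
     * the process; the names and widths are illustrative only. */
    static uint32_t RR;      /* relocation register: start of the partition */
    static uint32_t LIMIT;   /* partition size, used for protection         */

    static bool address_map(uint32_t la, uint32_t *pa)
    {
        if (la >= LIMIT)      /* address outside the private address space */
            return false;     /* would trap to the operating system        */
        *pa = la + RR;        /* relocate at run time                      */
        return true;
    }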

Segmented memory: the logical (virtual) memories VM_1 and VM_2 are mapped by NL_map onto physical (real) memory - flexible logical address management.

Drawbacks
- Address translation at run time consumes time (TINSTAAFL: there is no such thing as a free lunch)
- Programs and data areas grow and shrink dynamically in size
- Partitions are eventually released (freed by a program)
- Newly arriving processes require memory space, i.e. allocation of a partition
- Need to manage free space and the allocation of partitions, and possibly rearrange the allocation dynamically

Memory fragmentation: a request D cannot be satisfied even though enough total memory is free, e.g. when D > H_1 and D > H_2 but D < H_1 + H_2 for two free holes H_1 and H_2.

Memory compaction: starting from an initial state with allocated regions p_1, p_2, p_3 separated by holes, free space can be consolidated by total compaction, partial compaction, or a minimal-move rearrangement.

Address Space Management
- Programs have a dynamic calling structure: subroutines, procedures, functions, etc.
- Programs also frequently have varying data areas
- A programmer may not know how his/her program behaves
- Measurements have shown that, statistically, programs tend to exhibit high locality
- Using this characteristic, one can delegate the management of the mapping from logical (virtual) addresses to real, physical addresses to a combination of hardware and software (the operating system)
- Hence the user is presented with the illusion of an almost unlimited, private address space!

Paged memory: the pages of the logical (virtual) memories VM_1 and VM_2 (page 0, page 1, page 2, ...) are mapped by the MMU (NL_map) onto the page frames of physical (real) memory (frame 0 ... frame 7) - flexible physical address management.

Paging
- The physical memory is subdivided into page frames
- The logical memory is subdivided into pages
- This means that memory addresses are now composed of two parts:
  - a page number
  - an offset (distance) within the page
- Hardware assistance is required for the binding:
  - page numbers are translated into frame numbers
  - the offset value is used as is
- Attention: this is not identical to virtual memory!
(A small C sketch of this address split appears after this slide.)

Fig 8-2: form of the virtual and physical address - the virtual address va = (p, w) consists of a page number p (p bits) and an offset w (w bits); the physical address pa = (f, w) consists of a frame number f (f bits) and the same offset w.
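As a concrete illustration of Fig 8-2, the following C fragment splits a virtual address into (p, w) and recombines a frame number with the offset. The 12-bit offset (4 KB pages) and the example numbers are assumptions made for this sketch only.

    #include <stdint.h>
    #include <stdio.h>

    /* Splitting va into page number p and offset w, then forming pa = (f, w).
     * W_BITS = 12 (4 KB pages) is an illustrative choice. */
    #define W_BITS    12
    #define PAGE_SIZE (1u << W_BITS)

    int main(void)
    {
        uint32_t va = 0x0001A2F4;             /* an arbitrary virtual address       */
        uint32_t p  = va >> W_BITS;           /* page number: high-order bits       */
        uint32_t w  = va & (PAGE_SIZE - 1);   /* offset: low-order 12 bits          */
        uint32_t f  = 7;                      /* frame assigned by the OS (example) */
        uint32_t pa = (f << W_BITS) | w;      /* frame number concatenated with w   */

        printf("p = %u, w = %u, pa = 0x%08X\n", p, w, pa);
        return 0;
    }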

Fig 8-3: paged virtual memory - the NL_map takes a virtual address (p, w), with page numbers 0 .. P-1 of the virtual memory (NS), to a physical address (f, w), with frame numbers 0 .. F-1 of the physical memory (LS).

Principles of virtual memory
- The system creates the illusion of large, contiguous memory space(s) for each process
- Relevant portions of the VM are loaded automatically and transparently
- The address map translates virtual addresses to physical addresses (Figure 8-1)

Fig 8-1: principles of virtual memory - the logical (virtual) memories VM_1 and VM_2 are mapped by NL_map onto physical (real) memory.

Principles of virtual memory
- single-segment VM: one area of words 0 .. n-1, divided into fixed-size pages
- multiple-segment VM: multiple areas of up to n words (0 .. n-1) each
  - each holds a logical segment (e.g. a function or a data structure)
  - each is contiguous or divided into pages

Main issues in VM design
- address mapping: how to translate virtual addresses to physical ones
- placement: where to place a portion of VM needed by a process
- replacement: which portion of VM to remove when space is needed
- load control: how much of VM to load at any one time
- sharing: how processes can share portions of their VMs

Implementation using paging
- VM is divided into fixed-size pages; PM is divided into page frames of the same size
- The system loads pages into frames and translates addresses
- virtual address: va = (p, w); physical address: pa = (f, w)
  - p determines the number of pages in VM
  - f determines the number of frames in PM
  - w determines the page/frame size
(Figure 8-2)

Address translation
- Given (p, w): determine f from p (how to keep track?) and concatenate f and w to form pa
- One solution: a frame table, with one entry FT[i] for each frame:
  - FT[i].pid records the process ID
  - FT[i].page records the page number p
- Given (id, p, w): search for a match on (id, p); the index f of the matching entry is the frame number

Frame table (= inverted page table): the logical/virtual memories of pid = 1 and pid = 2 are mapped to physical memory through a table indexed by frame number; each entry records (pid, p), and a reference (p, w) is translated to (f, w) by finding the matching entry.

Address translation

address_map(id, p, w) {
    pa = UNDEFINED;
    for (f = 0; f < F; f++)
        if (FT[f].pid == id && FT[f].page == p)
            pa = f + w;      /* frame number concatenated with the offset */
    return pa;
}

Fig 8-4: address translation with frame table - the frame table FT (entries 0 .. F-1, each with pid and page fields) is searched for (id, p); the matching index f, combined with the offset (distance) w, gives the frame address used for the memory access.

Address translation (Figure 8-4)

address_map(id, p, w) {
    pa = UNDEFINED;
    for (f = 0; f < F; f++)
        if (FT[f].pid == id && FT[f].page == p)
            pa = f + w;
    return pa;
}

Drawbacks of the frame-table implementation:
- costly: the search must be done in parallel in hardware
- sharing of pages is difficult or not possible

Page tables
- A PT is associated with each VM (not with PM); a page table register PTR points at the PT at run time
- The p-th entry holds the frame number of page p: *(PTR + p) points to frame f
- Address translation (Figure 8-5):

address_map(p, w) {
    pa = *(PTR + p) + w;
    return pa;
}

- Drawback: one extra memory access per reference

Page tables: each logical/virtual memory (pid = 1, pid = 2) has its own page table; a reference (p, w) is translated to (f, w) by indexing the page table with p to obtain the frame number f.

Fig 8-5: address translation with page table - PTR plus the page number p selects the page-table entry holding the frame address f, which combined with the offset (distance) w gives the address used for the memory access.

Fig 8-6: page table contents - with PTR = 1024 and va = 2100, the page table holds:

    entry 1024: 21504
    entry 1025: 40960
    entry 1026: 3072
    entry 1027: 15360

Assuming a page size of 1000 (the size consistent with these numbers), va = 2100 means page p = 2, offset w = 100; the entry at 1024 + 2 = 1026 holds frame address 3072, so pa = 3072 + 100 = 3172 (replayed in C below).

Fig 8-?: address translation with associative memory - the page number p is searched for in parallel over all entries (0 .. F-1); if page p is found in frame i, that frame address combined with the offset (distance) w is used for the memory access.
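The arithmetic of Fig 8-6 can be replayed in a few lines of C. The decimal page size of 1000 used below is an assumption; it is the page size consistent with the numbers in the figure.

    #include <stdio.h>

    /* Replaying Fig 8-6: PTR = 1024, va = 2100, expected pa = 3172.
     * A decimal page size of 1000 is assumed here. */
    #define PAGE_SIZE 1000

    int main(void)
    {
        unsigned memory[2048] = { 0 };   /* models the words holding the page table */
        unsigned PTR = 1024;
        memory[1024] = 21504;
        memory[1025] = 40960;
        memory[1026] = 3072;
        memory[1027] = 15360;

        unsigned va = 2100;
        unsigned p  = va / PAGE_SIZE;        /* page number 2 */
        unsigned w  = va % PAGE_SIZE;        /* offset 100    */
        unsigned pa = memory[PTR + p] + w;   /* 3072 + 100    */
        printf("pa = %u\n", pa);             /* prints 3172   */
        return 0;
    }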

Address Mapping - two possible approaches:
- Frame table: one entry per real page frame
- Page table: one entry per logical/virtual page
Both solutions depend on searching a table, which requires hardware support to speed up the process (associative memory, Translation Lookaside Buffer). Both need 2 memory accesses per memory reference unless associative memory is used for the translation table.

Demand paging
- All pages of a VM could be loaded initially: simple, but then the maximum size of the VM = size of PM
- Instead, pages are loaded as needed - on demand
- An additional bit in the PT indicates presence/absence of the page; a page fault occurs when the page is absent (see the sketch of a page-table entry after this slide)

address_map(p, w) {
    if (resident(*(PTR + p))) {
        pa = *(PTR + p) + w;
        return pa;
    }
    else page_fault;
}
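One way to picture the "additional bit in the PT" is a page-table entry with a presence bit, as in the hypothetical C sketch below. The field widths, the table size and the page_fault() handler are illustrative assumptions, not definitions from the slides.

    #include <stdint.h>

    /* A possible page-table entry with a presence bit (widths are illustrative). */
    typedef struct {
        uint32_t present : 1;    /* page resident in main memory?              */
        uint32_t frame   : 20;   /* frame number, valid only when present == 1 */
    } pte_t;

    #define PT_SIZE 1024
    #define W_BITS  12

    static pte_t PT[PT_SIZE];             /* the table the PTR points at            */

    extern void page_fault(uint32_t p);   /* OS handler: load page p, update PT     */

    static uint32_t address_map(uint32_t p, uint32_t w)
    {
        if (!PT[p].present)               /* absent: bring the page in on demand    */
            page_fault(p);
        return ((uint32_t)PT[p].frame << W_BITS) | w;  /* concatenate frame, offset */
    }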

VM using segmentation
- multiple contiguous spaces (segments)
- more natural match to program/data structure; easier sharing (Chapter 9)
- va = (s, w) is mapped to pa (but there are no frames)
- where/how are segments placed in PM? contiguous versus paged allocation

Contiguous allocation
- each segment is contiguous in PM
- a segment table ST keeps track of the starting locations; the register STR points to the ST
- address translation:

address_map(s, w) {
    if (resident(*(STR + s))) {
        pa = *(STR + s) + w;
        return pa;
    }
    else segment_fault;
}

- drawback: external fragmentation

Paging with segmentation
- each segment is divided into fixed-size pages
- va = (s, p, w):
  - s determines the number of segments (size of ST)
  - p determines the number of pages per segment (size of PT)
  - w determines the page size
- pa = *(*(STR + s) + p) + w   (Figure 8-7; a C sketch follows below)
- drawback: 2 extra memory references

Fig 8-7: address translation with segment and page table - STR plus the segment number s selects a segment-table entry, which points to the page table of that segment; the page number p selects the frame, and the offset w completes the memory address.
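The two-level lookup pa = *(*(STR + s) + p) + w can be sketched in C as below. The table sizes, the 12-bit offset and the use of concatenation instead of the slide's additive notation are choices made for this example only.

    #include <stdint.h>

    /* Two-level translation: the segment table (STR) selects a page table,
     * and the page-table entry supplies the frame number. Sizes are illustrative. */
    #define S_MAX  64
    #define P_MAX  256
    #define W_BITS 12

    static uint32_t page_tables[S_MAX][P_MAX];  /* frame number per (segment, page)       */
    static uint32_t *STR[S_MAX];                /* segment table: pointers to page tables */

    static void init_tables(void)
    {
        for (int s = 0; s < S_MAX; s++)
            STR[s] = page_tables[s];            /* each segment gets its own page table   */
    }

    static uint32_t address_map(uint32_t s, uint32_t p, uint32_t w)
    {
        uint32_t *pt = STR[s];                  /* 1st extra memory reference */
        uint32_t  f  = pt[p];                   /* 2nd extra memory reference */
        return (f << W_BITS) | w;
    }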

Paging of system tables
- The ST or PT may be too large to keep in PM: divide the ST or PT into pages and keep track of them with an additional table
- Paging of the ST:
  - the ST is divided into pages
  - a segment directory keeps track of the ST pages
  - va = (s1, s2, p, w)
  - pa = *(*(*(STR + s1) + s2) + p) + w   (Figure 8-8)

Fig 8-8: address translation with paged segment table - STR plus s1 selects an entry of the segment directory, which points to a page of the segment table; s2 selects the segment-table entry, which points to the page table; p selects the frame, and the offset w completes the memory address.

Fig 8-9a: address translation in the Pentium processor, segmentation only - a 14-bit segment number s selects a segment-table entry via the segment registers; the 32-bit base from that entry is combined with the 32-bit offset to form the address.

Fig 8-9b: address translation in the Pentium processor, segmentation with paging - the segment base plus the offset forms a linear address that is split into a 10-bit page-directory index p1, a 10-bit page-table index p2 and a 12-bit offset w; the page directory selects one of the pages of the page table, which in turn selects the program/data page.

Translation look-aside buffers
- To avoid the additional memory accesses, keep the most recently translated page numbers in an associative memory: for any (s, p, *), keep (s, p) and the frame number f
- Bypass the translation tables if a match on (s, p) is found (Figure 8-10; a sketch in C follows below)
- A TLB is different from a cache: the TLB only keeps frame numbers, a cache keeps data values

Fig 8-10: translation lookaside buffer - each entry holds a segment/page number (s, p), a frame number f and replacement information (LRU); on a hit, the frame number combined with the offset w addresses the page in main memory.
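A software model of the TLB idea might look like the sketch below: recently used (s, p) -> f translations are cached and the table walk is skipped on a hit. The size, the FIFO replacement and the full_translation() fall-back are assumptions of this example; a real TLB searches all entries in parallel in hardware.

    #include <stdbool.h>
    #include <stdint.h>

    #define TLB_SIZE 64

    typedef struct {
        bool     valid;
        uint32_t s, p;        /* segment and page number */
        uint32_t f;           /* cached frame number     */
    } tlb_entry_t;

    static tlb_entry_t tlb[TLB_SIZE];
    static unsigned    next_victim;                    /* trivial FIFO replacement */

    extern uint32_t full_translation(uint32_t s, uint32_t p);  /* walks ST and PT  */

    static uint32_t translate(uint32_t s, uint32_t p)
    {
        for (unsigned i = 0; i < TLB_SIZE; i++)        /* hardware: parallel search  */
            if (tlb[i].valid && tlb[i].s == s && tlb[i].p == p)
                return tlb[i].f;                       /* TLB hit: bypass the tables */

        uint32_t f = full_translation(s, p);           /* TLB miss: walk the tables  */
        tlb[next_victim] = (tlb_entry_t){ true, s, p, f };
        next_victim = (next_victim + 1) % TLB_SIZE;
        return f;
    }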

Memory allocation with paging
- placement policy: any free frame is OK
- replacement policy: must minimize data movement
  - global replacement: consider all resident pages (regardless of owner)
  - local replacement: consider only the pages of the faulting process
- How to compare different algorithms: use a reference string r_0 r_1 ... r_t, where r_t is the number of the page referenced at time t, and count the number of page faults

Global page replacement - optimal (MIN): replace the page that will not be referenced for the longest time in the future

  Time t   0   1   2   3   4   5   6   7   8   9   10
  RS           c   a   d   b   e   b   a   b   c   d
  Frame 0  a   a   a   a   a   a   a   a   a   a   d
  Frame 1  b   b   b   b   b   b   b   b   b   b   b
  Frame 2  c   c   c   c   c   c   c   c   c   c   c
  Frame 3  d   d   d   d   d   e   e   e   e   e   e
  IN                           e                   d
  OUT                          d                   a

- problem: the RS is not known in advance

Global page replacement - random replacement
- simple, but does not exploit the locality of reference:
  - most instructions are sequential (branch instructions are typically < ~10%)
  - most loops are short (spanning at most a few pages)
  - many data structures are accessed sequentially (but the overall arrangement can be critical - "wrap-around" for column/row access, phased bank structure)

Global page replacement - FIFO: replace the oldest page (">" marks the oldest resident page)

  Time t   0    1    2    3    4    5    6    7    8    9    10
  RS            c    a    d    b    e    b    a    b    c    d
  Frame 0  >a   >a   >a   >a   >a   e    e    e    e    >e   d
  Frame 1  b    b    b    b    b    >b   >b   a    a    a    >a
  Frame 2  c    c    c    c    c    c    c    >c   b    b    b
  Frame 3  d    d    d    d    d    d    d    d    >d   c    c
  IN                           e         a    b    c    d
  OUT                          a         b    c    d    e

- problem: FIFO favors recently accessed pages but ignores when a program returns to old pages

Global page replacement - LRU: replace the least recently used page

  Time t   0   1   2   3   4   5   6   7   8   9   10
  RS           c   a   d   b   e   b   a   b   c   d
  Frame 0  a   a   a   a   a   a   a   a   a   a   a
  Frame 1  b   b   b   b   b   b   b   b   b   b   b
  Frame 2  c   c   c   c   c   e   e   e   e   e   d
  Frame 3  d   d   d   d   d   d   d   d   d   c   c
  IN                       e                   c   d
  OUT                      c                   d   e

  LRU queue (most recently used at the end, least recently used at the head):
  Qend     d   c   a   d   b   e   b   a   b   c   d
           c   d   c   a   d   b   e   b   a   b   c
           b   b   d   c   a   d   d   e   e   a   b
  Qhead    a   a   b   b   c   a   a   d   d   e   a

Global page replacement - LRU implementation
- software queue: too expensive
- time-stamping: stamp each referenced page with the current time, replace the page with the oldest stamp
- hardware capacitor with each frame: charge it at each reference, replace the page with the smallest charge
- n-bit aging register with each frame: shift all registers to the right at every reference, set the left-most bit of the referenced page to 1, replace the page with the smallest value
(A small simulation in C appears after this slide.)
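The LRU trace above can be re-checked with a small simulation. The program below is only a verification aid: it preloads the frames with a, b, c, d (a being least recently used, as in the initial queue), replays the reference string, and reports the same three faults.

    #include <stdio.h>
    #include <string.h>

    #define FRAMES 4

    int main(void)
    {
        char frame[FRAMES]    = { 'a', 'b', 'c', 'd' };
        int  last_use[FRAMES] = { -3, -2, -1, 0 };   /* initial recency: a oldest, d newest */
        const char *rs = "cadbebabcd";               /* r1 ... r10 from the example         */
        int faults = 0;

        for (int t = 1; t <= (int)strlen(rs); t++) {
            char page = rs[t - 1];
            int hit = -1, lru = 0;
            for (int i = 0; i < FRAMES; i++) {
                if (frame[i] == page) hit = i;
                if (last_use[i] < last_use[lru]) lru = i;
            }
            if (hit >= 0) {
                last_use[hit] = t;                   /* hit: only update the recency        */
            } else {
                faults++;
                printf("t=%2d  IN %c  OUT %c\n", t, page, frame[lru]);
                frame[lru]    = page;                /* replace the least recently used page */
                last_use[lru] = t;
            }
        }
        printf("page faults: %d\n", faults);         /* 3 for this reference string         */
        return 0;
    }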

Global page replacement - second-chance algorithm
- approximates LRU
- implemented with a use bit u for each frame; u = 1 when the page is referenced
- to select a page: if u == 0, select this page; else set u = 0 and consider the next frame
- a used page gets a second chance to stay in PM
- also called the clock algorithm: the search cycles through the page frames (a C sketch follows below)

Global page replacement - second-chance algorithm (x/u shows page x with use bit u; ">" marks the clock hand)

  Time t   4     5     6     7     8     9     10
  RS       b     e     b     a     b     c     d
  Frame 0  >a/1  e/1   e/1   e/1   e/1   >e/1  d/1
  Frame 1  b/1   >b/0  >b/1  b/0   b/1   b/1   >b/0
  Frame 2  c/1   c/0   c/0   a/1   a/1   a/1   a/0
  Frame 3  d/1   d/0   d/0   >d/0  >d/0  c/1   c/0
  IN             e           a           c     d
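The clock search itself is only a few lines; the sketch below models the use bits in software. FRAMES, the arrays and replace_page() are illustrative, and in a real system the use bit would be set by the MMU on every reference.

    #include <stdbool.h>

    #define FRAMES 4

    static int  frame[FRAMES];       /* page number held by each frame             */
    static bool use_bit[FRAMES];     /* u-bit, set (by hardware) on each reference */
    static int  hand;                /* the clock hand: next frame to consider     */

    static int select_victim(void)
    {
        for (;;) {
            if (!use_bit[hand]) {                /* u == 0: take this frame        */
                int victim = hand;
                hand = (hand + 1) % FRAMES;      /* hand moves past the victim     */
                return victim;
            }
            use_bit[hand] = false;               /* u == 1: give a second chance   */
            hand = (hand + 1) % FRAMES;
        }
    }

    static void replace_page(int new_page)
    {
        int v = select_victim();
        frame[v]   = new_page;                   /* load the new page here         */
        use_bit[v] = true;                       /* it was just referenced         */
    }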

Global page replacement - third-chance algorithm
- second chance makes no difference between read and write access, but a write access is more expensive
- give modified pages a third chance:
  - the u-bit is set at every reference (read and write)
  - the w-bit is set at a write reference
- to select a page, cycle through the frames, resetting bits, until uw == 00:

  u w  ->  u w
  1 1  ->  0 1
  1 0  ->  0 0    ("cheapest" candidate (cached?))
  0 1  ->  0 0*   (* remember the modification)
  0 0  ->  select

Global page replacement - third-chance algorithm (x/uw shows page x with its u and w bits; "(w)" in the RS marks a write access; ">" marks the hand)

  Time t   0      1      2      3      4      5
  RS              c      a(w)   d      b(w)   e
  Frame 0  >a/10  >a/10  >a/11  >a/11  >a/11  a/00*
  Frame 1  b/10   b/10   b/10   b/10   b/11   b/00*
  Frame 2  c/10   c/10   c/10   c/10   c/10   e/10
  Frame 3  d/10   d/10   d/10   d/10   d/10   >d/00
  IN                                          e

Local page replacement
- Measurements indicate that every program needs a minimum set of pages:
  - if too few, thrashing occurs
  - if too many, page frames are wasted
- The minimum varies over time
- How to determine and implement this minimum?

Local page replacement - optimal (VMIN)
- define a sliding window (t, t+τ), where τ is a parameter (constant)
- at any time t, maintain as resident all pages visible in the window
- guaranteed to generate the smallest number of page faults

Local page replacement - optimal (VMIN) with τ = 3 ("x" marks a resident page, "-" a non-resident one)

  Time t   0   1   2   3   4   5   6   7   8   9   10
  RS       d   c   c   d   b   c   e   c   e   a   d
  Page a   -   -   -   -   -   -   -   -   -   x   -
  Page b   -   -   -   -   x   -   -   -   -   -   -
  Page c   -   x   x   x   x   x   x   x   -   -   -
  Page d   x   x   x   x   -   -   -   -   -   -   x
  Page e   -   -   -   -   -   -   x   x   x   -   -
  IN           c           b       e           a   d
  OUT                      d   b           c   e   a

- guaranteed optimal, but unrealizable without knowing the RS in advance

Local page replacement - working set model
- uses the principle of locality
- uses a trailing window (instead of a future window)
- working set W(t, τ): all pages referenced during (t - τ, t)
- at time t: remove all pages not in W(t, τ)
- a process may run only if its entire W(t, τ) is resident
(A small sketch of the window computation follows below.)
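The working-set computation can be illustrated with a small program that slides a trailing window over the reference string of the next slide. The window is taken here as the current reference plus the τ preceding ones, and references before t = 0 are not modeled, so the first few sets come out smaller than in the table; both are simplifying assumptions of this sketch.

    #include <stdio.h>
    #include <string.h>

    #define TAU 3   /* window parameter, as in the examples */

    int main(void)
    {
        const char *rs = "accdbcecead";              /* r0 ... r10 from the next slide  */
        int n = (int)strlen(rs);

        for (int t = 0; t < n; t++) {
            int seen[26] = { 0 };
            printf("t=%2d  W(t,%d) = {", t, TAU);
            for (int i = t - TAU; i <= t; i++) {     /* trailing window over the RS     */
                if (i < 0) continue;                 /* history before t=0 not modeled  */
                int c = rs[i] - 'a';
                if (!seen[c]) { seen[c] = 1; printf(" %c", rs[i]); }
            }
            printf(" }\n");
        }
        return 0;
    }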

Local page replacement - working set model ("x" marks a resident page, "-" a non-resident one; the initial resident set stems from references just before t = 0)

  Time t   0   1   2   3   4   5   6   7   8   9   10
  RS       a   c   c   d   b   c   e   c   e   a   d
  Page a   x   x   x   x   -   -   -   -   -   x   x
  Page b   -   -   -   -   x   x   x   x   -   -   -
  Page c   -   x   x   x   x   x   x   x   x   x   x
  Page d   x   x   x   x   x   x   x   -   -   -   x
  Page e   x   x   -   -   -   -   x   x   x   x   x
  IN           c           b       e           a   d
  OUT              e       a           d   b

- drawback: costly to implement exactly; approximate with aging registers or time stamps

Local page replacement - page fault frequency (PFF)
- main objective: keep the page fault rate low
- basic principle of PFF:
  - if the time between page faults is ≤ τ, grow the resident set: add the new page to the resident set
  - if the time between page faults is > τ, shrink the resident set: add the new page but remove all pages not referenced since the last page fault
(A small sketch of this rule follows below.)
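The PFF rule just stated can be sketched as a fault handler in C. The arrays stand in for hardware use bits and resident-set bookkeeping, and TAU = 2 is the threshold that matches the worked example on the next slide; all of this is illustrative rather than a definitive implementation.

    #define MAX_PAGES 256
    #define TAU       2        /* PFF threshold (parameter) */

    static int resident[MAX_PAGES];                /* 1 if the page is resident           */
    static int referenced_since_fault[MAX_PAGES];  /* use bits collected since last fault */
    static int last_fault_time;

    static void on_page_fault(int page, int now)
    {
        if (now - last_fault_time > TAU) {
            /* faults are infrequent: shrink - drop pages untouched since the last fault */
            for (int q = 0; q < MAX_PAGES; q++)
                if (resident[q] && !referenced_since_fault[q])
                    resident[q] = 0;
        }
        resident[page] = 1;                        /* always admit the faulting page      */

        for (int q = 0; q < MAX_PAGES; q++)        /* start a fresh observation interval  */
            referenced_since_fault[q] = 0;
        referenced_since_fault[page] = 1;
        last_fault_time = now;
    }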

Local page replacement - page fault frequency ("x" marks a resident page, "-" a non-resident one; the trace is consistent with τ = 2)

  Time t   0   1   2   3   4   5   6   7   8   9   10
  RS           c   c   d   b   c   e   c   e   a   d
  Page a   x   x   x   x   -   -   -   -   -   x   x
  Page b   -   -   -   -   x   x   x   x   x   -   -
  Page c   -   x   x   x   x   x   x   x   x   x   x
  Page d   x   x   x   x   x   x   x   x   x   -   x
  Page e   x   x   x   x   -   -   x   x   x   x   x
  IN           c           b       e           a   d
  OUT                      a,e                 b,d

Load control and thrashing - main issues:
- how to choose the degree of multiprogramming
- when the level is decreased, which process should be deactivated
- when a process is reactivated, which of its pages should be loaded

Load control and thrashing - choosing the degree of multiprogramming
- local replacement: the working set of any process must be resident; this automatically imposes a limit
- global replacement: no working set concept; use CPU utilization as a criterion
- with too many processes, thrashing occurs (Figure 8-11)

Fig 8-11: program behavior - as the degree of multiprogramming increases, the system moves from the region L >> S through L ~ S to L < S (thrashing), where L is the mean time between page faults and S the page fault service time.

Load control and thrashing - how to find N_max?
- L = S criterion: the page fault service time S needs to keep up with L, the mean time between faults
- 50% criterion: CPU utilization is highest when the paging disk is ~50% busy (found experimentally)

Load control and thrashing - which process to deactivate?
- the lowest-priority process
- the faulting process
- the last process activated
- the smallest process
- the largest process
Which pages to load when a process is activated (Figure 8-12): prepage the last resident set.

Fig 8-12: lifetime curve of a program - mean time between page faults (MTBPF) versus resident-set size. With a small resident set, page faults are frequent; in the region where the MTBPF still increases steeply, adding pages pays off and this suggests the prepage size; beyond that, the page fault rate keeps decreasing but the MTBPF grows only slowly, so adding further pages has small benefits.

Evaluation of paging - experimental measurements (Figure 8-13):
(a) number of pages referenced over time - the initial set can be loaded more efficiently than by individual page faults
(b) number of instructions executed within one page
(c) influence of the page size (for a fixed total memory) on the page fault rate
(d) influence of the available number of pages on the page fault rate

Fig 8-13: qualitative behavior (of the measurements listed above).

Conclusions
(a): prepaging is important - the initial set can be loaded more efficiently than by individual page faults
(b), (c): suggest the page size should be small; however, small pages require larger page tables, more hardware, and greater I/O overhead
(d): the load control algorithm is important