How to create a process? What does a process look like?

How to create a process? (Ken Birman)
- On Unix systems, the executable is read by the loader.
- Compile time: the compiler generates one object file per source file, and the linker (ld) combines all object files into one executable.
- Runtime: the loader loads the executable into memory.

What does a process look like?
- A process is divided into segments: code, initialized data, heap (dynamic data), and stack (procedure calls).
- [Figure: the stack sits at the top of the address space (address 2^n - 1) and grows down; the heap sits above the initialized data and code, which start at address >= 0.] (A small sketch after this section shows roughly where these regions land in a real process.)

Processes in memory
- We want many processes to coexist in memory.
- [Figure: physical memory with the OS at the bottom and PowerPoint and Visual Studio loaded above it at different addresses.]
- Issues:
  - What if PowerPoint needs more memory than there is on the machine?
  - What if Visual Studio tries to access address x5, memory it does not own?
  - What if PowerPoint is not using all of its memory?
  - What if the OS needs to expand?

Issues
- Protection: errors in one process should not affect others.
- Transparency: a process should run regardless of memory size and location.
- [Figure: the CPU issues loads and stores with virtual addresses to a translation box (the MMU), which checks whether each address is legal; illegal addresses raise an address fault, legal ones become physical addresses sent to physical memory.]
- How do we do this mapping?

Scheme 1: Load-time linking
- Link as usual, but keep a list of references.
- At load time: determine the new base address, then adjust all references accordingly (by addition).
- [Figure: a static a.out containing "jump x2" is loaded above the OS at base x3, so the instruction is rewritten to "jump x5".]
- Issues: handling multiple segments, and moving a process in memory after it has been loaded.
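The "What does a process look like?" slide above divides a process into code, initialized data, heap, and stack. A minimal C sketch that prints roughly where those regions land in a running process; the exact addresses depend on the platform, compiler, and ASLR:

```c
/* Minimal sketch: print rough addresses of the classic process segments.
 * Exact values depend on the platform, compiler, and ASLR; the point is
 * only that code, data, heap, and stack live in different regions. */
#include <stdio.h>
#include <stdlib.h>

int initialized_data = 42;          /* initialized data segment */
int zeroed_data;                    /* bss (zero-initialized data) */

void code_marker(void) {}           /* lives in the code (text) segment */

int main(void)
{
    int on_stack = 0;                            /* stack */
    int *on_heap = malloc(sizeof *on_heap);      /* heap */

    printf("code  : %p\n", (void *)code_marker);
    printf("data  : %p\n", (void *)&initialized_data);
    printf("bss   : %p\n", (void *)&zeroed_data);
    printf("heap  : %p\n", (void *)on_heap);
    printf("stack : %p\n", (void *)&on_stack);

    free(on_heap);
    return 0;
}
```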

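Scheme 1 (load-time linking) keeps a list of references and adjusts each one by the load base. A toy model of that adjustment; the image contents, relocation list, and load base below are made up for illustration, and a real loader would read relocation records from the executable format:

```c
/* Toy model of load-time linking: the linker records which words hold
 * absolute addresses, and the loader adds the load base to each of them. */
#include <stdio.h>
#include <stddef.h>

#define IMAGE_WORDS 4

int main(void)
{
    /* "Executable" linked as if it were loaded at address 0. */
    unsigned int image[IMAGE_WORDS] = {
        0x00000010,   /* word 0: e.g. the target of a jump */
        0xdeadbeef,   /* word 1: ordinary data, not an address */
        0x00000020,   /* word 2: another absolute reference */
        0xcafebabe,   /* word 3: ordinary data */
    };
    size_t relocs[] = {0, 2};          /* words that hold addresses */
    unsigned int load_base = 0x3000;   /* decided at load time */

    for (size_t i = 0; i < sizeof relocs / sizeof relocs[0]; i++)
        image[relocs[i]] += load_base; /* adjust each reference by addition */

    for (size_t i = 0; i < IMAGE_WORDS; i++)
        printf("word %zu: 0x%08x\n", i, image[i]);
    return 0;
}
```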
Scheme 2: Execution-time linking
- Use hardware (base + limit registers) to solve the problem; the check is done for every memory access. (A sketch follows this section.)
- Relocation: physical address = logical (virtual) address + base.
- Protection: is the virtual address < limit?
- [Figure: an a.out containing "jump x2" is loaded above the OS at x3; the MMU holds Base = x3 and Limit = x2.] When the process runs, the base register = x3 and the bounds register = x2, so the jump address becomes x2 + x3 = x5.

Dynamic translation
- The Memory Management Unit (MMU) is implemented in hardware.
- Every process has its own address space.
- [Figure: the CPU issues logical addresses, which the MMU translates into physical addresses in memory.]

Logical and physical addresses
- [Figure: each process's contiguous logical address view is mapped by the MMU onto the physical address view, which it shares with the OS.]

Scheme 2: Discussion
- Pros: cheap in hardware (only two registers) and cheap in cycles (the add and the compare can be done in parallel).
- Con: only one segment.
  - Problem 1: growing processes. How do we expand?
  - Problem 2: how do we share code and data? (How can two copies of Word share code?)
  - Problem 3: how do we separate code and data?
- A solution: multiple segments, i.e. segmentation. [Figure: physical memory holding Word1 (p2) and Word2 (p3) with free space between them.]

Segmentation
- Processes have multiple base + limit registers: a process's address space has multiple segments, and each segment has its own base + limit registers.
- Add protection bits to every segment.
- [Figure: real memory holding a read-only text segment and a read/write stack segment, each with its own base and limit.]
- How do we do the mapping?

Types of segments
- Each file of a program (or library) typically results in: an associated code segment (instructions), the initialized data segment for that code, and the zeroed part of the data segment ("bss").
- Programs also have: one heap segment, which grows upward, and a stack segment for each active thread, which grows down (all but the first have a size limit).
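The Scheme 2 check above is small enough to sketch directly: compare the virtual address against the limit, then add the base. The register values used here are illustrative, not taken from the figure:

```c
/* Minimal sketch of the base + limit check done by the MMU on every access. */
#include <stdio.h>
#include <stdint.h>

typedef struct {
    uint32_t base;    /* loaded by the OS on every context switch */
    uint32_t limit;   /* size of the process's single segment */
} mmu_regs;

/* Returns 1 and fills *phys on success, 0 on a protection fault. */
static int translate(const mmu_regs *mmu, uint32_t vaddr, uint32_t *phys)
{
    if (vaddr >= mmu->limit)
        return 0;               /* protection: virtual address must be < limit */
    *phys = vaddr + mmu->base;  /* relocation: physical = virtual + base */
    return 1;
}

int main(void)
{
    mmu_regs mmu = { .base = 0x3000, .limit = 0x2000 };  /* illustrative values */
    uint32_t phys;

    if (translate(&mmu, 0x1ffc, &phys))
        printf("ok:    0x%x\n", phys);
    if (!translate(&mmu, 0x2000, &phys))
        printf("fault: 0x2000 is outside the segment\n");
    return 0;
}
```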

Lots of segments
- Bottom line: a single process might have a whole lot of active segments, and it can add them while it runs, for example by linking to a DLL or forking a thread.
- The address space is a big range with chunks of data separated by gaps.
- It isn't uncommon for a process on a modern platform to have hundreds of segments in memory.

Mapping segments
- Segment table: one entry per segment, each a tuple <base, limit, protection>.
- Each memory reference supplies a segment number and an offset.
- [Figure: a virtual address <seg 3, offset 128> indexes the segment table; the entry (prot = r, len = 512) passes the limit check since 128 < 512, so the offset is added to the base to form the memory address; an offset beyond the limit takes the "no" branch and faults.]

Segmentation example
- Suppose the first two bits select the segment and the next 12 bits are the offset.
- Segment table:
  Seg  base    bounds  rw
  0    0x4000  0x6ff   10
  1    0x0000  0x4ff   11
  2    0x3000  0xfff   11
  3    (none)
- Where is 0x0240? 0x1108? 0x265c? 0x3002? 0x1700? (The sketch after this section works through these.)
- [Figure: the logical address space, one region per segment, alongside the physical memory regions it maps to.]

Segmentation: Discussion
- Advantages: allows multiple segments per process; makes it easy to share code; the entire process does not need to be loaded into memory.
- Disadvantages: extra translation overhead in both memory and speed, and an entire segment must reside contiguously in memory, which leads to fragmentation.

Fragmentation
- Free memory is scattered into tiny, useless pieces.
- External fragmentation: variable-sized pieces leave many small holes over time.
- Internal fragmentation: fixed-sized pieces waste space inside a piece if the entire piece is not used.
- [Figure: external fragmentation between allocated regions (Word, emacs); unused space inside an allocated region is internal fragmentation.]

Paging
- Divide memory into fixed-size pieces, called frames or pages.
- Pros: easy to allocate, and no external fragmentation.
- Typical page sizes: 4 KB to 8 KB.
- [Figure: fixed-size pages allocated to emacs, doom, and a stack; space left unused inside an allocated page is internal fragmentation.]
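A minimal sketch of the segment-table lookup, using the 2-bit segment / 12-bit offset split and the table from the example above; it mechanically answers the "where is ...?" question (addresses that miss a segment or exceed its bounds fault):

```c
/* Minimal sketch: 14-bit addresses, top 2 bits select the segment, low 12
 * bits are the offset.  The table mirrors the example above; the rw bits are
 * carried along but not enforced in this sketch. */
#include <stdio.h>
#include <stdint.h>

#define OFFSET_BITS 12
#define NSEGS       4

typedef struct {
    uint32_t base;    /* physical base of the segment */
    uint32_t bounds;  /* highest valid offset */
    int      valid;   /* 0 = no such segment */
    int      rw;      /* protection bits from the table (not checked here) */
} seg_entry;

static const seg_entry seg_table[NSEGS] = {
    { 0x4000, 0x6ff, 1, 0x2 },  /* seg 0: rw = 10 */
    { 0x0000, 0x4ff, 1, 0x3 },  /* seg 1: rw = 11 */
    { 0x3000, 0xfff, 1, 0x3 },  /* seg 2: rw = 11 */
    { 0,      0,     0, 0   },  /* seg 3: not present */
};

/* Returns 1 and fills *phys, or 0 on a missing segment / bounds violation. */
static int translate(uint32_t vaddr, uint32_t *phys)
{
    uint32_t seg    = vaddr >> OFFSET_BITS;
    uint32_t offset = vaddr & ((1u << OFFSET_BITS) - 1);

    if (seg >= NSEGS || !seg_table[seg].valid)
        return 0;                           /* no such segment */
    if (offset > seg_table[seg].bounds)
        return 0;                           /* bounds violation */
    *phys = seg_table[seg].base + offset;
    return 1;
}

int main(void)
{
    uint32_t addrs[] = { 0x0240, 0x1108, 0x265c, 0x3002, 0x1700 };

    for (size_t i = 0; i < sizeof addrs / sizeof addrs[0]; i++) {
        uint32_t phys;
        if (translate(addrs[i], &phys))
            printf("0x%04x -> 0x%04x\n", addrs[i], phys);
        else
            printf("0x%04x -> fault\n", addrs[i]);
    }
    return 0;
}
```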

Mapping pages
- With a 2^m virtual address space and a 2^n page size, (m - n) bits denote the page number and n bits give the offset within the page.
- Translation is done using a page table.
- [Figure: a virtual address <VPN 3, offset 128> with a 12-bit offset field, i.e. (3 << 12) + 128, indexes the page table; the entry's protection is checked and its PPN is combined with the offset to address memory.]

Paging: Hardware support
- Option 1: keep the entire page table (PT) in registers. But the PT can be huge, on the order of a million entries.
- Option 2: store the PT in main memory and have the PTBR point to the start of the PT.
  - Con: 2 memory accesses to get to any physical address.
  - Solution: a special cache (next slide).
- Each time we context switch, we must also switch the current PT, so that the process we execute sees its own memory region.

MMU includes a cache
- We call it the translation lookaside buffer (TLB): just a cache, with a special name, that contains page table entries.
- The OS needs to clear the TLB on a context switch, so the first few operations of a freshly started process are very slow; then the cache warms up.
- The goal is that the MMU can find the PTEs it needs without actually fetching them from the page table.

Paging: Discussion
- Advantages: no external fragmentation; easy to allocate; easy to swap, since the page size is usually the same as the disk block size.
- Disadvantages: space and speed. There is one PT entry for every page, versus one entry per contiguous region under segmentation. [Figure: a single <base, len> pair versus a page table with one entry per page.]

Size of the page
- Small page size: high overhead. (What is the size of the PT if the page size is 512 bytes and the address space is 32 bits? A worked answer follows this section.)
- Large page size: high internal fragmentation.

Paging + segmentation
- Paged segmentation: handles very long segments by paging the segments.
- Segmented paging: when the page table is very big, segment the page table.
- Consider the IBM System/370 (24-bit address space): Seg # (4 bits) | page # (8 bits) | page offset (12 bits).
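A minimal sketch of the paged translation described above: split the virtual address into VPN and offset, index a flat page table, and combine the PPN with the offset. The sizes and the single mapping installed below (VPN 3 to PPN 7) are illustrative:

```c
/* Minimal sketch of paged translation with a flat, in-memory page table
 * (in a real system the PTBR would point at it). */
#include <stdio.h>
#include <stdint.h>

#define PAGE_BITS 12                      /* 4 KB pages */
#define VPN_BITS  8                       /* toy 20-bit virtual addresses */
#define NPAGES    (1 << VPN_BITS)

typedef struct {
    uint32_t ppn;
    int      valid;                       /* invalid entries cause a fault */
} pte_t;

static pte_t page_table[NPAGES];

static int translate(uint32_t vaddr, uint32_t *paddr)
{
    uint32_t vpn    = vaddr >> PAGE_BITS;
    uint32_t offset = vaddr & ((1u << PAGE_BITS) - 1);

    if (vpn >= NPAGES || !page_table[vpn].valid)
        return 0;                         /* fault */
    *paddr = (page_table[vpn].ppn << PAGE_BITS) | offset;
    return 1;
}

int main(void)
{
    page_table[3] = (pte_t){ .ppn = 7, .valid = 1 };  /* map VPN 3 -> PPN 7 */

    uint32_t paddr;
    if (translate((3u << PAGE_BITS) | 128, &paddr))   /* <VPN 3, offset 128> */
        printf("physical address: 0x%x\n", paddr);    /* prints 0x7080 */
    if (!translate(5u << PAGE_BITS, &paddr))
        printf("VPN 5 is not mapped: fault\n");
    return 0;
}
```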

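A worked answer to the page-size question above, assuming 4-byte page-table entries (the slide does not say how big a PTE is): a 32-bit address space with 512-byte pages needs 2^32 / 2^9 = 2^23, roughly 8.4 million, entries per process, which is about 32 MiB of page table. The same arithmetic as a tiny program:

```c
/* Worked answer (sketch): flat page table size with 512-byte pages and a
 * 32-bit address space, assuming 4-byte page-table entries. */
#include <stdio.h>

int main(void)
{
    unsigned long long addr_space = 1ULL << 32;   /* 2^32 bytes */
    unsigned long long page_size  = 512;          /* 2^9 bytes */
    unsigned long long pte_size   = 4;            /* assumption: 4-byte PTEs */

    unsigned long long entries = addr_space / page_size;   /* 2^23 entries */
    unsigned long long bytes   = entries * pte_size;        /* 32 MiB */

    printf("entries: %llu, table size: %llu MiB\n", entries, bytes >> 20);
    return 0;
}
```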
Segmented paging example
- Segment table (base, bound, prot), as given on the slide: seg 0 = (x2, x14, R); seg 1 = (x, x, no access); seg 2 = (xd, RW).
- The page tables use 2-byte entries. [Figure: the page-table entries and the addresses at which they are stored.]
- Example translations:
  - x2216 read?  Seg 2, page 2: PTE @ x14, addr = x416.
  - x14c84 read? Seg 1, page 4: protection error.
  - x11424 read? Seg 0, page x11: PTE @ x222, addr = x1f424.
  - x2114 write? Seg 2, page x1: bounds violation.

Handling PT size
- Segmented paging removes the need for contiguous allocation; the OS maintains one copy of the PT for every process, plus a global copy of all frames.
- Other approaches (the sketch after this section shows the hierarchical split):
  - Hierarchical paging: page the page table.
  - Hashed page table: each entry maps to a linked list of pages.
  - Inverted page table: map from frame to (VA, process) instead of from VA to frame per process.

What is virtual memory?
- Each process has the illusion of a large address space: 2^32 bytes for 32-bit addressing.
- However, physical memory is much smaller. How do we give this illusion to multiple processes?
- Virtual memory: some addresses reside on disk. [Figure: the page table maps some pages to physical memory and others to disk.]

Virtual memory
- Loading the entire process into memory (swapping), running it, and exiting is slow (for big processes) and wasteful (the process might not need everything).
- Solution: partial residency.
  - Paging: only bring in pages, not the entire process.
  - Demand paging: bring in only the pages that are actually required.
- Where do we fetch a page from? Keep a contiguous space on disk: the swap file (e.g. pagefile.sys).

How does VM work?
- Add another bit ("is_present") to the page table entries: if the page is in memory, is_present = 1; otherwise is_present = 0.
- If the page is in memory, translation works as before; if it is not, the translation causes a page fault.
- [Figure: page table entries with P=1 point to frames in memory; entries with P=0 point to locations on disk.]
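For the hierarchical ("page the page table") approach listed under Handling PT size, here is a sketch of the usual two-level split of a 32-bit address with 4 KB pages; the 10/10/12 split is the classic x86-style layout, used here only as an illustration:

```c
/* Sketch of a two-level page-table index split: 10 bits pick an entry in the
 * outer table, 10 bits pick an entry in one inner table, 12 bits are the page
 * offset.  Only the inner tables actually in use need to exist, which is what
 * saves space compared to one flat 2^20-entry table. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t vaddr = 0x12345678;                  /* arbitrary example address */

    uint32_t outer  = (vaddr >> 22) & 0x3ff;      /* index into the outer table */
    uint32_t inner  = (vaddr >> 12) & 0x3ff;      /* index into one inner table */
    uint32_t offset =  vaddr        & 0xfff;      /* offset within the page */

    printf("outer=%u inner=%u offset=0x%x\n", outer, inner, offset);
    return 0;
}
```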

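A toy model of the present bit just described: when is_present is set, translation proceeds as before; otherwise the access turns into a page fault that must bring the page in from disk first. Every structure and value here is a simplified illustration, not a real OS interface:

```c
/* Toy model of demand paging with a present bit in each PTE. */
#include <stdio.h>
#include <stdint.h>

#define PAGE_BITS 12

typedef struct {
    uint32_t ppn;          /* physical frame, valid only when present == 1 */
    uint32_t disk_block;   /* where the page lives in the swap file otherwise */
    int      present;
} pte_t;

static uint32_t handle_page_fault(pte_t *pte)
{
    /* A real handler would pick a free (or victim) frame and issue a disk
     * read; here we just pretend frame 9 was allocated and filled. */
    printf("page fault: reading disk block %u\n", pte->disk_block);
    pte->ppn = 9;
    pte->present = 1;
    return pte->ppn;
}

static uint32_t access(pte_t *pte, uint32_t offset)
{
    uint32_t ppn = pte->present ? pte->ppn : handle_page_fault(pte);
    return (ppn << PAGE_BITS) | offset;   /* translation works as before */
}

int main(void)
{
    pte_t pte = { .disk_block = 123, .present = 0 };
    printf("first access  -> 0x%x\n", access(&pte, 0x10));  /* faults, then maps */
    printf("second access -> 0x%x\n", access(&pte, 0x10));  /* hits in memory */
    return 0;
}
```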
Page faults
- On a page fault, the OS:
  - Finds a free frame, or evicts one from memory (which one? ideally we would want knowledge of the future).
  - Issues a disk request to fetch the data for the page (what to fetch? just the requested page, or more?).
  - Blocks the current process and context switches to a new process (how? the process might be in the middle of executing an instruction).
  - When the disk completes, sets the present bit to 1 and puts the faulting process back in the ready queue.

Resuming after a page fault
- We should be able to restart the instruction.
- For RISC processors this is simple: instructions are idempotent until their memory references are done.
- It is more complicated for CISC, e.g. an instruction that moves 256 bytes from one location to another.
- Possible solution: ensure the pages are in memory before the instruction executes.

When to fetch?
- Just before the page is used, which would require knowing the future.
- Demand paging: fetch a page when it faults.
- Prepaging: get the page on a fault plus some of its neighbors, or get all the pages that were in use the last time the process was swapped out.

What to replace? Page replacement
- When a process has used up all the frames it is allowed to use, the OS must select a page to eject from memory to make room for the new page.
- The page to eject is selected using a page replacement algorithm.
- Goal: select the page that minimizes future page faults.

Page replacement algorithms
- Random: pick any page to eject at random; used mainly for comparison.
- FIFO: the page brought in earliest is evicted. It ignores usage and suffers from Belady's anomaly: the fault rate can increase when the number of frames increases (e.g. the reference string 1 2 3 1 4 1 2 3 4 with frame counts 3 and 4).
- OPT (Belady's algorithm): select the page that will not be used for the longest time.
- LRU: evict the page that hasn't been used for the longest time; the past can be a good predictor of the future.

Example: FIFO vs. OPT
- Reference stream: A B C A B D A D B C, with 3 frames.
- OPT: 5 faults (when D arrives, toss C, the page not needed for the longest time).
- FIFO: 7 faults (when D arrives, toss A, the oldest page, and so on).
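A minimal sketch that replays the slide's reference stream under FIFO, assuming the 3 frames implied by the example; it reports the 7 faults quoted above (OPT needs only 5 on the same stream):

```c
/* Count FIFO page faults for the reference string A B C A B D A D B C
 * with 3 frames.  Expected output: 7 faults, matching the slide. */
#include <stdio.h>
#include <string.h>

#define NFRAMES 3

int main(void)
{
    const char *refs = "ABCABDADBC";
    char frame[NFRAMES] = {0};
    int next_victim = 0;            /* FIFO: evict frames in arrival order */
    int faults = 0;

    for (size_t i = 0; i < strlen(refs); i++) {
        char page = refs[i];
        int hit = 0;
        for (int f = 0; f < NFRAMES; f++)
            if (frame[f] == page)
                hit = 1;
        if (!hit) {
            frame[next_victim] = page;              /* load into the oldest slot */
            next_victim = (next_victim + 1) % NFRAMES;
            faults++;
        }
    }
    printf("FIFO faults: %d\n", faults);            /* prints 7 */
    return 0;
}
```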

Implementing perfect LRU
- On each reference: timestamp the page. On eviction: scan for the oldest frame.
- Problems: the page lists are large, and timestamps are costly to maintain.
- [Figure: a short instruction sequence referencing memory, and frames tagged with last-use timestamps such as t=4, t=14, t=14, t=5.]

Approximate LRU
- LRU is already an approximation!

LRU: the clock algorithm
- Each page has a reference bit, set on use and reset periodically by the OS.
- Algorithm: FIFO + reference bit, with the pages kept in a circular list.
- Scan: if the ref bit is 1, set it to 0 and proceed; if the ref bit is 0, stop and evict. (A sketch follows this section.)
- Problem: low accuracy for large memories.

LRU with large memory
- Solution: add another hand. The leading edge clears ref bits; the trailing edge evicts pages whose ref bit is still 0.
- What if the angle between the hands is small? What if it is big?

Clock algorithm: Discussion
- Sensitive to the sweeping interval: too fast and we lose usage information; too slow and all pages look used.
- Clock: add reference bits; could use (ref bit, modified bit) as an ordered pair, but we might have to scan all pages.
- LFU: remove the page with the lowest reference count; it keeps no track of when the page was referenced. A refinement uses multiple bits that are shifted right by 1 at regular intervals.
- MFU: remove the most frequently used page.
- LFU and MFU do not approximate OPT well.

Page buffering
- A cute, simple trick (XP, 2K, Mach, VMS): keep a list of free pages, and track which page each free frame corresponds to.
- Periodically write modified pages back and reset the modified bit (batching writes is faster).
- [Figure: evicted frames go onto a modified list until written back, then to the free list; unmodified frames go straight to the free list.]

Allocating pages to processes
- Global replacement: a single memory pool for the entire system; on a page fault, evict the oldest page in the system. Problem: protection.
- Local (per-process) replacement: a separate pool of pages for each process; a page fault in one process can only replace that process's own pages. Problem: resources might sit idle.
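A minimal sketch of the clock (second-chance) scan described above: the hand clears reference bits that are set and evicts the first frame whose bit is already clear. The frame contents are illustrative:

```c
/* Minimal sketch of the clock algorithm's eviction scan. */
#include <stdio.h>

#define NFRAMES 4

struct frame {
    int page;      /* which page occupies this frame */
    int ref;       /* reference bit, set by "hardware" on each use */
};

static struct frame frames[NFRAMES] = {
    { 10, 1 }, { 11, 0 }, { 12, 1 }, { 13, 1 },   /* illustrative contents */
};
static int hand = 0;

/* Returns the index of the frame to evict. */
static int clock_evict(void)
{
    for (;;) {
        if (frames[hand].ref) {
            frames[hand].ref = 0;           /* give it a second chance */
            hand = (hand + 1) % NFRAMES;
        } else {
            int victim = hand;              /* ref bit already 0: evict */
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
    }
}

int main(void)
{
    int victim = clock_evict();
    printf("evict page %d from frame %d\n", frames[victim].page, victim);
    return 0;
}
```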