Chapter 3: Important Concepts (3/29/2015)

CISC 3595 Operating System, Spring 2015

1. Memory from the programmer's perspective: what you already know

- Code (functions) and data are loaded into memory when the program is executed.
- The addresses of variables and functions (recall the & operator) give the locations of data and code in memory. They differ from run to run (the concept of relocation: the same program can be loaded into a different part of memory and still run correctly).
- A process's address space (recall the figure) is made up of the text/code, data, heap, and stack segments. It is not necessarily one contiguous range of addresses. (Note: you can use gdb, or simply cout statements in the code, to examine the addresses of various variables, functions, ...; see the first sketch below.)
- A variable's lifetime can be static (located in the data segment), automatic (located on the stack), or dynamic (located on the heap). The stack and heap can grow, and there is a user setting that specifies the maximum size of the stack or heap. (You can use the command ulimit -a to view your current settings.)
- The OS provides protection: typically a process cannot access a memory address within another process's address space. (A segmentation fault or bus error is generated when a process tries to access memory using an illegal address.)

2. Memory from the programmer's perspective: there is more to know

- One can create a memory segment to be shared by multiple processes, much as multiple threads share global variables. It is up to the programmer to ensure mutual exclusion in order to avoid race conditions. (The system call to create a shared memory segment is shmget in Unix; see the second sketch below.)
- Libraries: collections of commonly used functions and data, e.g., the C++ standard library, the STL (Standard Template Library), the POSIX thread library. A library's code (the function implementations) can be linked to a program in different ways:
  - Static library: linked into the program's code at the linking stage of compilation. This means a larger executable, taking up more disk and memory space. (A static library has the suffix .a.)
  - Shared library: a stub function (instead of the actual function) is linked into the program's code; the stub binds to the actual function's code (which might be shared by multiple programs) when it is called at run time. (Note: the command ldd can be used to show the shared libraries a program needs.) Demo in lab5; see its Makefile for details.
  - Dynamically loaded library: the programmer can load and unload infrequently used libraries to save memory, and to switch library implementations while the program is running (plugin behavior).
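To revisit the point in section 1 about examining addresses with cout, here is a small sketch (variable and function names are made up for illustration) that prints the address of a global (data segment), a function (text segment), a local (stack), and a heap allocation. The printed values typically change from run to run when address-space layout randomization is enabled.

    #include <iostream>

    int global_counter = 0;            // data segment

    void helper() {}                   // text/code segment

    int main() {
        int local_value = 0;           // stack
        int *heap_value = new int(0);  // heap

        std::cout << "data  : " << &global_counter << '\n'
                  << "text  : " << reinterpret_cast<void*>(&helper) << '\n'
                  << "stack : " << &local_value << '\n'
                  << "heap  : " << heap_value << '\n';

        delete heap_value;
        return 0;
    }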

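For the shared-memory point in section 2, a minimal sketch of the System V calls (shmget/shmat/shmdt/shmctl); the key, size, and permissions are arbitrary choices for illustration, and error handling is reduced to a single check.

    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <cstdio>
    #include <iostream>

    int main() {
        // Create a new 4 KB shared memory segment.
        int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
        if (shmid < 0) { perror("shmget"); return 1; }

        // Attach the segment into this process's address space.
        int *shared = static_cast<int*>(shmat(shmid, nullptr, 0));

        *shared = 42;   // a child created with fork() can attach shmid and see this value
        std::cout << *shared << std::endl;

        shmdt(shared);                      // detach
        shmctl(shmid, IPC_RMID, nullptr);   // mark the segment for removal
        return 0;
    }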
3. What we know about memory (RAM), the hardware

RAM stands for random access memory, meaning that any address in memory can be accessed directly (in contrast, magnetic tape is a sequential access medium: you need to wind the tape forward or backward to reach the part you want). The size of RAM is usually given in bytes. RAM can be viewed as an array of bytes, with each byte's index being its address.

When accessing memory, the CPU puts the physical address of the location to be accessed on the bus (more precisely, on the address lines of the bus).

4. OS memory management: hide the messy and ugly details, and take care of the tedious work of managing the main-memory resource for programmers:

- allocate and deallocate memory
- keep track of memory usage
- provide protection
- provide abstraction
- provide virtualization

The schemes progress from no abstraction, to swapping, to paging and virtual memory, to segmentation.

Question 1: What is the physical address space, i.e., what does the bus see? (a) Suppose the system bus has 64 address lines; what is the physical address space? (b) On some computers, part of that physical address space is assigned to I/O controllers, so that the CPU performs input/output by reading from or writing to memory addresses belonging to an I/O controller. (Please read the book, page 28, for more details.)

Question 2: What does a process's address space refer to?

5. No abstraction

The programmer (writing assembly language programs) sees physical addresses. The mapping from logical address to physical address is the identity function:

    physical_address_of(logic_address) = logic_address;

There is no abstraction: it is up to the programmer to play tricks to run a large program that cannot fit into memory.

Static relocation: change all references to memory at loading time, as in the toy sketch below.
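A toy illustration of static relocation (not how any real loader works): assume the load image carries a relocation table listing which words hold absolute addresses; at load time, each such word is patched once by adding the load address.

    #include <cstdint>
    #include <vector>

    // Toy load image: raw words plus a relocation table of word indices
    // whose contents are absolute addresses (assumed layout, for illustration).
    struct Image {
        std::vector<uint32_t> words;
        std::vector<size_t>   reloc;   // indices of words that hold addresses
    };

    // Static relocation: patch every address-holding word once, at load time.
    void relocate(Image &img, uint32_t load_base) {
        for (size_t idx : img.reloc)
            img.words[idx] += load_base;
    }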

6. Swapping

Mapping from logical address to physical address:

    physical_address_of(logic_address) = base + logic_address;
    // if logic_address >= limit (the size of the allocated chunk), a fault/software
    // interrupt is generated:
    // 1. Save the context of the current process (registers).
    // 2. Trap to kernel mode; the interrupt number is used to look up the
    //    interrupt handler.
    // 3. Call the corresponding handler (the function that processes this type of
    //    interrupt, trap, exception, or fault); for a memory fault the process is
    //    usually aborted.

The OS allocates a chunk of consecutive memory to each process, i.e., the process's address space is mapped to a contiguous chunk of physical memory. Two registers, base and limit, store the starting address and the size of this memory chunk.

Protection: using limit and base, the OS makes sure a memory reference does not go beyond or below the range of addresses allocated to the process (see the sketch below).

Problem: memory fragmentation (internal and external).

Question: In the swapping (consecutive memory allocation) scheme, the OS allocates memory in multiples of an allocation unit. If the allocation unit is 2KB, a process asking for 5KB is allocated 6KB (leading to an internal fragment of 1KB). If the allocation unit is 10KB, a process asking for 2KB is allocated 10KB (leading to an internal fragment of 8KB). What are the pros and cons of using a small or a large allocation unit?

Swapping or memory compaction (used when there is no consecutive memory block big enough to allocate to a new process):

- memory compaction: move the used memory blocks toward the lower or higher end of memory, in order to remove external fragmentation and create large consecutive unused memory blocks.
- swapping: pick a process to evict from memory, and save its image (address space) to disk (the swap area).
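A minimal sketch of the base/limit translation described above (the structure and the fault report are illustrative only, not a model of any particular hardware):

    #include <cstdint>
    #include <stdexcept>

    struct BaseLimitMmu {
        uint64_t base;    // start of the process's contiguous chunk
        uint64_t limit;   // size of the chunk, in bytes

        // Translate a logical address; an out-of-range access raises a fault.
        uint64_t translate(uint64_t logic_address) const {
            if (logic_address >= limit)
                throw std::out_of_range("memory fault: address beyond limit");
            return base + logic_address;
        }
    };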

7. Paging, or Virtual Memory

Mapping from logical address to physical address:

    physical_address_of(logic_address) = MMU(logic_address)
    // i.e., the translation is done by the Memory Management Unit, using page tables
    // and the TLB; the translation might lead to a page fault, a disk read/write,
    // and a page replacement algorithm evicting some page from RAM.

When the page table shows that the page is not present in memory, the MMU generates a software interrupt, a page fault, which leads to:

    // 1. Save the context of the current process (registers).
    // 2. Trap to kernel mode; the interrupt number is used to look up the
    //    interrupt handler.
    // 3. Call the corresponding handler; for a page fault, the kernel allocates a
    //    page frame (if none is available, it starts the page replacement
    //    algorithm) and loads the page from disk.

Ideas:

- Break the physical address space (determined by the number of address lines on the bus) into page frames of equal size.
- Break a process's logical address space into pages of the same size as a page frame.
- Each page (in a process's address space) is either loaded in memory (mapped onto a page frame in RAM) or not loaded (not loaded yet, or loaded earlier but evicted on a later page fault) and therefore kept on disk (in a special partition, the backing store; see page 231).
- Pages belonging to a process do not need to be mapped to adjacent page frames.
- Not all pages of a process need to be loaded into memory for the process to execute.

Address translation by the MMU: for each process there is a page table (indexed by the page number in the logical address) where each entry stores: whether the page is present in memory; if so, the physical page frame number that the page is mapped to; a modification bit; a reference bit; and so on.

The TLB (Translation Lookaside Buffer) caches the recently or most frequently used page table entries. It is an associative memory, supporting parallel lookup of all entries.

Question: Recall that caching is used everywhere in computer systems. List all the different places (hardware or software) where caches are used.

(a) The size of a page (and page frame) is a design parameter, which decides how many bits of the address form the page offset and how many bits form the page number.

Exercise: draw a 64-bit address, and mark the page offset and page number parts of the address.

Question 3: Why does the size of a page (and page frame) need to be a power of 2?
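To make the exercise above concrete, a sketch that splits a 64-bit logical address into page number and page offset, assuming a 4 KB page (12 offset bits, an assumption for illustration). Because the page size is a power of two, the split is just a shift and a mask:

    #include <cstdint>
    #include <cstdio>

    // Assumed page size: 4 KB = 2^12 bytes, so 12 offset bits.
    constexpr uint64_t kOffsetBits = 12;
    constexpr uint64_t kOffsetMask = (1ULL << kOffsetBits) - 1;

    int main() {
        uint64_t logic_address = 0x00007f3a12345678ULL;   // arbitrary example
        uint64_t page_number = logic_address >> kOffsetBits;
        uint64_t offset      = logic_address & kOffsetMask;
        std::printf("page number = %#llx, offset = %#llx\n",
                    (unsigned long long)page_number,
                    (unsigned long long)offset);
        return 0;
    }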

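A sketch of the page-table-entry fields listed above; the exact layout is hardware-specific, and the field widths below are made up for illustration.

    #include <cstdint>

    // Illustrative page table entry; real formats (e.g., x86-64) differ.
    struct PageTableEntry {
        uint64_t present    : 1;   // is the page mapped to a frame in RAM?
        uint64_t referenced : 1;   // R bit: set on each access
        uint64_t modified   : 1;   // M bit: set on each write (dirty)
        uint64_t protection : 3;   // allowed access, e.g., read/write/execute
        uint64_t frame      : 52;  // physical page frame number, if present
    };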
Question 4: Pros and cons of choosing a small page size.

Question 5: Pros and cons of choosing a large page size.

(b) Page replacement algorithms

The problem: when a page fault occurs, the OS needs to choose a page to remove from memory to make room for the incoming page.

Why it matters (the performance metric): minimize the total number of page faults, each of which incurs overhead in CPU cycles, memory accesses, and disk accesses.

The approaches share a common assumption: pages that have been referenced recently are likely to be referenced again in the near future.

Draw the page table entry diagram (hint: it includes bits flagging whether the page has been modified or referenced, whether it is present/absent in memory, and the page frame number. There is usually also a protection field specifying what kind of access to the page is allowed.)

The M bit and R bit are maintained by hardware (on every memory-referencing instruction), or simulated by software (the page is initially marked READ ONLY; on a write access a fault is generated, the protection is changed to READ WRITE, and the M bit is set; the R bit can be simulated similarly). They are the essential information for the algorithms below.

Different algorithms:

Optimal scheme: replace the page that will be accessed furthest in the future. Observations: Can this be implemented? In what kind of system would collecting logs of page references (in a test run) to drive the replacement decisions of a second/future run be useful?

Not Recently Used (NRU): the OS periodically (say every 10ms) clears the R bit of the page table entries. On a page fault, choose a page in the order of the following priority (based on the R and M bits); if multiple pages are in the top non-empty class, choose randomly:

i. R=0, M=0
ii. R=0, M=1
iii. R=1, M=0
iv. R=1, M=1

FIFO: remove the page that was loaded into memory first (i.e., the oldest page goes, to make room for the new page). How can the FIFO scheme be implemented? What is the problem with FIFO?

Second chance (give an old but recently referenced page a second chance to become young again). Two different data structures can be used to implement second chance:

- a queue (storing pages in order of loading time)
- a clock (a circular buffer of pages; see the clock sketch below)

Least Recently Used (LRU): replace the page that has been unused for the longest time.

Hardware implementations:
- full implementation: a list of pages kept in order of last reference time
- a hardware counter (why 64 bits? what happens if the counter overflows?)
- an n x n matrix of bits (why does this work?)

Software implementation/approximation: the OS sets a timer, which generates a periodic timeout interrupt, on which the OS updates a counter for each page in memory:
- Not Frequently Used (NFU): the OS periodically samples the R bit, adds R to the page's counter, and evicts the page with the lowest counter value.
- Aging: the counter is shifted to the right before adding R, and R is added as the leftmost bit (see the aging sketch below).

Working set:

Ideas:
- working set w(k, t): the set (or the size of the set) of pages referenced by the k most recent references
- the working set is stable with respect to k (its size levels off as k grows)
- make sure the working set, i.e., those k pages, is kept in memory

Implementation: a shift register of length k (shifted to the left, with the newly referenced page inserted on the right at each page reference).

Practice: Draw the figure here.

Redefine the working set as the set of pages a process has referenced during the last T seconds of virtual time. Keep the time of last use for each page in the page table; if a page has not been used in the last T seconds, replace it.
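To make the aging (NFU + forgetting) idea above concrete, a minimal sketch using 8-bit counters as in the textbook's illustration; the Page structure and function names are made up for this example.

    #include <cstdint>
    #include <vector>

    struct Page {
        bool    referenced = false;   // R bit, set by hardware or simulated by the OS
        uint8_t age        = 0;       // aging counter, most recent reference in the MSB
    };

    // Called on every clock tick: shift right (forget), then record R in the MSB.
    void aging_tick(std::vector<Page> &pages) {
        for (Page &p : pages) {
            p.age >>= 1;
            if (p.referenced) p.age |= 0x80;
            p.referenced = false;     // clear R for the next interval
        }
    }

    // On a page fault, evict the page with the smallest counter (roughly the LRU page).
    size_t pick_victim(const std::vector<Page> &pages) {
        size_t victim = 0;
        for (size_t i = 1; i < pages.size(); ++i)
            if (pages[i].age < pages[victim].age) victim = i;
        return victim;
    }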

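Similarly, a sketch of the clock variant of second chance described above: the hand sweeps a circular buffer of resident frames, giving a recently referenced page another chance by clearing its R bit (all names are illustrative).

    #include <vector>

    struct Frame {
        int  page_number = -1;
        bool referenced  = false;   // R bit
    };

    // Advance the clock hand until a frame with R == 0 is found; that frame is the victim.
    size_t clock_pick_victim(std::vector<Frame> &frames, size_t &hand) {
        for (;;) {
            Frame &f = frames[hand];
            if (!f.referenced) {                  // not referenced since the last sweep: evict
                size_t victim = hand;
                hand = (hand + 1) % frames.size();
                return victim;
            }
            f.referenced = false;                 // second chance: clear R and keep looking
            hand = (hand + 1) % frames.size();
        }
    }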
WSClock: a circular list of pages; uses 1) the time of last use, 2) the R bit, and 3) the M bit to decide which page to replace.
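A simplified sketch of the WSClock decision for the frame under the hand, assuming a fixed working-set window tau measured in virtual-time ticks; scheduling write-back for old dirty pages, and the full sweep logic, are omitted, so this is only a rough approximation of the algorithm.

    struct WsFrame {
        bool     referenced    = false;  // R bit
        bool     modified      = false;  // M bit
        unsigned time_last_use = 0;      // virtual time of the last reference
    };

    // Examine the frame under the clock hand and decide whether it can be reclaimed now.
    bool wsclock_reclaimable(WsFrame &f, unsigned now, unsigned tau) {
        if (f.referenced) {                  // referenced recently: still in the working set
            f.referenced = false;
            f.time_last_use = now;
            return false;                    // skip it and advance the hand
        }
        // Older than the working-set window and clean: safe to reclaim immediately.
        // (An old dirty page would be scheduled for write-back and revisited later.)
        return (now - f.time_last_use) > tau && !f.modified;
    }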