Processes and Tasks What comprises the state of a running program (a process or task)?


Processes and Tasks

What comprises the state of a running program (a process or task)?

[Figure: a microprocessor (registers EAX, EBX, ECX, EDX, ESI, EDI, EBP, ESP, EIP, EFlags and segment registers CS, DS, SS, ES, FS, GS, plus code/data caches) connected by address, data and control buses to DRAM holding the OS code and data and process P1's code, data and stack.]

The STATE of a task or process is given by the register values, OS data structures, and the process's data and stack segments. If a second process, P2, is to be created and run (not shown), then the state of P1 must be saved so it can later be resumed with no side effects. Since only one copy of the registers exists, they must be saved in memory. We'll see later that the Pentium provides hardware support for doing this.
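
As a rough sketch of what must be saved, the per-process state can be pictured as a C structure the OS fills in at a context switch; this is illustrative only (the field names are made up) and not the Pentium's actual task-state segment format:

/* Illustrative sketch only -- not the real Pentium TSS layout. */
#include <stdint.h>

struct saved_regs {            /* general and segment registers        */
    uint32_t eax, ebx, ecx, edx;
    uint32_t esi, edi, ebp, esp;
    uint32_t eip, eflags;
    uint16_t cs, ds, ss, es, fs, gs;
};

struct process_state {         /* what the OS must keep per process    */
    struct saved_regs regs;    /* register snapshot at last preemption */
    void *code, *data, *stack; /* locations of the memory segments     */
    /* ... plus OS bookkeeping: open files, scheduling info, etc.      */
};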

Memory Hierarchy

For now, let's focus on the organization and management of memory. Ideally, programmers would like a fast, infinitely large, nonvolatile memory. In reality, computers have a memory hierarchy:
Cache (SRAM): small (KBytes), expensive, volatile and very fast (< 5 ns).
Main memory (DRAM): larger (MBytes), medium-priced, volatile and medium speed (< 80 ns).
Disk: GBytes, low-priced, non-volatile and slow (ms).

Therefore, the OS is charged with managing these limited resources and creating the illusion of a fast, infinitely large main memory. The Memory Manager portion of the OS:
Tracks memory usage.
Allocates/deallocates memory.
Implements virtual memory.

Simple Memory Management

In a multiprogramming environment, a simple memory management scheme is to divide memory into n (possibly unequal) fixed-size partitions. These partitions are defined at system start-up and can be used to store all the segments of a process (e.g., code, data and stack).

[Figure: memory divided into the OS area plus Partitions 1-4, with multiple job queues, one per partition.]

Advantage: it's simple to implement. However, it utilizes memory poorly. Also, in time-sharing systems, queueing up jobs in this manner leads to unacceptable response times for user processes.
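
A minimal sketch of the bookkeeping this scheme needs, assuming a made-up partition layout (the addresses and sizes are illustrative, not from the slide):

#include <stdbool.h>
#include <stddef.h>

struct partition {
    size_t base, size;   /* fixed at system start-up */
    bool   in_use;
};

static struct partition parts[] = {          /* example layout only */
    { 0x00000,  64 * 1024, true  },          /* OS                  */
    { 0x10000, 128 * 1024, false },
    { 0x30000, 256 * 1024, false },
    { 0x70000, 512 * 1024, false },
};

/* Load a job into the smallest free partition that fits, or fail (-1). */
static int place_job(size_t job_size) {
    int best = -1;
    for (int i = 0; i < (int)(sizeof parts / sizeof parts[0]); i++)
        if (!parts[i].in_use && parts[i].size >= job_size &&
            (best < 0 || parts[i].size < parts[best].size))
            best = i;
    if (best >= 0)
        parts[best].in_use = true;
    return best;
}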

Variable-Sized Partitions

In a variable-sized partition scheme, the number, location and size of memory partitions vary dynamically:

[Figure: memory snapshots over time, showing processes A, B, C and D above the OS, with holes X1 and X2 appearing as processes terminate.]

(1) Initially, process A is in memory. (2) Then B and C are created. (3) A terminates. (4) D is created, B terminates.

Variable-Sized Partitions

Problem: dynamic partition sizes improve memory utilization but complicate allocation and deallocation by creating holes (external fragmentation). This may prevent a process from running that could otherwise run if the holes were merged, e.g., by combining X1 and X2 on the previous slide. Memory compaction is a solution, but it is rarely used because of the CPU time involved.

Also, the size of a process's data segments can change dynamically, e.g. via malloc(). If a process does not have room to grow, it needs to be moved or killed.

[Figure: process A's code, data and stack in memory, with room for growth between the data and stack segments, alongside other processes and the OS.]
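
A hedged sketch of how such variable-sized regions are typically tracked, using a first-fit search over a linked list of regions (the data structure and names are illustrative, not a specific OS's allocator); splitting a hole this way is exactly how external fragmentation accumulates:

#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

struct region {                 /* one allocated partition or hole */
    size_t base, size;
    bool   free;
    struct region *next;
};

/* First-fit: take the first hole big enough; any leftover becomes a
 * smaller hole that stays on the list. */
static struct region *alloc_region(struct region *head, size_t need) {
    for (struct region *r = head; r != NULL; r = r->next) {
        if (!r->free || r->size < need)
            continue;
        if (r->size > need) {                 /* split: leftover hole */
            struct region *hole = malloc(sizeof *hole);
            if (hole == NULL)
                return NULL;
            hole->base = r->base + need;
            hole->size = r->size - need;
            hole->free = true;
            hole->next = r->next;
            r->next = hole;
            r->size = need;
        }
        r->free = false;
        return r;
    }
    return NULL;                              /* no hole fits */
}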

Implementing Memory on the Hard Drive

The hard disk can be used to allow more processes to run than would normally fit in main memory. For example, when a process blocks for I/O (e.g. keyboard input), it can be swapped out to disk, allowing other processes to run. The movement of whole processes to and from disk is called swapping.

The disk can also be used to implement a second scheme, virtual memory. Virtual memory allows processes to run even when their total size (code, data and stack) exceeds the amount of physical memory (installed DRAM). This is very common, for example, in microprocessors with 32-bit address spaces. If an OS supports virtual memory, it allows the execution of processes that are only partially present in main memory: the OS keeps the parts of the process that are currently in use in main memory and the rest of the process on disk.

Virtual Memory

When a new portion of a process is needed, the OS swaps out older, 'not recently used' memory to disk. Virtual memory also works in a multiprogrammed system: main memory stores bits and pieces of many processes. A process blocks whenever it requires a portion of itself that is on disk, much in the same way it blocks to do I/O. The OS schedules another process to run until the referenced portion is fetched from disk.

But swapping out portions of memory that vary in size is not efficient; external fragmentation is still a problem (it reduces memory utilization). Two concepts address this:
Segmentation: allows the OS to share code and enforce meaningful constraints on the memory used by a process, e.g. no execution of data.
Paging: allows the OS to manage physical memory efficiently, and makes it easier to implement virtual memory.

Paging and Virtual Memory

So how does paging work? We will refer to addresses that appear on the address bus of main memory as physical addresses. Processes generate virtual addresses, e.g., MOV EAX, [EBX]. Note that the value given in [EBX] can reference memory locations that exceed the size of physical memory. (We can also start with linear addresses, which are virtual addresses translated through the segmentation system, to be discussed.) All virtual (or linear) addresses are sent to the Memory Management Unit (MMU) for translation to a physical address.

[Figure: the CPU sends a virtual address to the MMU (on the CPU chip); the MMU translates it and sends the physical address to memory.]

Paging and Virtual Memory

The virtual (and physical) address space is divided into pages. Page size is architecture dependent but usually ranges between 512 bytes and 64K. The corresponding units in physical memory are called page frames. Pages and page frames are usually the same size.

Assume: the page size is 4K, physical memory is 32K, and virtual memory is 64K. Therefore, there are 16 virtual pages and 8 page frames.

[Figure: the 16 4K pages of the virtual address space (0K-64K) mapped onto the 8 page frames of the physical address space (0K-32K); unmapped virtual pages are marked X.]

Assume the process issues virtual address 0: paging translates it to physical address 8192 using the layout on the right. Virtual address 20500 is translated to physical address 12K + 20 = 12308.
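
A small sketch of the translation arithmetic for this example. The page-table contents below are assumed to follow the figure's layout (pages 0 and 5 must map to frames 2 and 3 to give the addresses above; the remaining entries are illustrative):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u
#define UNMAPPED  0xFF             /* present/absent: page not in RAM */

/* Assumed mapping for the 16 virtual pages (frame number or UNMAPPED). */
static const uint8_t page_table[16] = {
    2, 1, 6, 0, 4, 3, UNMAPPED, UNMAPPED,
    UNMAPPED, 5, UNMAPPED, 7, UNMAPPED, UNMAPPED, UNMAPPED, UNMAPPED
};

static long translate(uint32_t vaddr) {
    uint32_t vpage  = vaddr / PAGE_SIZE;   /* virtual page number        */
    uint32_t offset = vaddr % PAGE_SIZE;   /* offset unchanged by paging */
    if (page_table[vpage] == UNMAPPED)
        return -1;                         /* would cause a page fault   */
    return (long)page_table[vpage] * PAGE_SIZE + offset;
}

int main(void) {
    printf("%ld\n", translate(0));      /* page 0 -> frame 2: 8192  */
    printf("%ld\n", translate(20500));  /* page 5 -> frame 3: 12308 */
    return 0;
}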

Paging and Virtual Memory

Note that 8 virtual pages are not mapped into physical memory (indicated by an X on the previous slide). A present/absent bit in the hardware indicates which virtual pages are mapped into physical RAM and which ones are not (out on disk).

What happens when a process issues an address to an unmapped page?
The MMU notes the page is unmapped using the present/absent bit.
The MMU causes the CPU to trap to the OS: a page fault.
The OS selects a page frame to replace and saves its current contents to disk.
The OS fetches the page referenced and places it into the freed page frame.
The OS changes the memory map and restarts the instruction that caused the trap.

Paging allows the physical address space of a process to be noncontiguous! This solves the external fragmentation problem (since any set of pages can be chosen as the address space of the process). However, it generally doesn't allow 100% memory utilization, since the last page of a process may not be entirely used (internal fragmentation).
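
A pseudocode-style sketch of that fault sequence; every helper function here is hypothetical, standing in for real OS internals:

#include <stdbool.h>
#include <stdint.h>

struct process;                                   /* opaque here */

/* Hypothetical helpers -- not a real OS API. */
int  pick_victim_frame(void);
bool frame_is_dirty(int frame);
void write_frame_to_disk(int frame);
void read_page_from_disk(struct process *p, uint32_t vpage, int frame);
void update_page_table(struct process *p, uint32_t vpage, int frame);
void restart_faulting_instruction(struct process *p);

/* The steps from the list above, in order. */
void handle_page_fault(struct process *p, uint32_t vpage) {
    int frame = pick_victim_frame();         /* choose a frame to replace */
    if (frame_is_dirty(frame))
        write_frame_to_disk(frame);          /* save its current contents */
    read_page_from_disk(p, vpage, frame);    /* fetch the referenced page */
    update_page_table(p, vpage, frame);      /* change the memory map     */
    restart_faulting_instruction(p);         /* rerun the trapping instr. */
}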

Paging and Virtual Memory

Address translation by the MMU:

[Figure: the process generates a 16-bit virtual address (64K), e.g. 8196 shown in binary. The virtual address is split into a virtual page number and an offset; the page number indexes the page table (which maps virtual pages onto page frames and holds a present/absent bit per entry), and the resulting page frame number is combined with the unchanged offset to form the 15-bit physical address (32K) placed on the bus.]

Paging and Virtual Memory

Two important issues w.r.t. the page table:

Size: The Pentium uses 32-bit virtual addresses. With a 4K page size, a 32-bit address space has 2^32 / 2^12 = 2^20, or 1,048,576, virtual page numbers! If each page table entry occupies 4 bytes, that's 4MB of memory just to store the page table. For 64-bit machines, there are 2^52 virtual page numbers!!!

Performance: The mapping from virtual to physical addresses must be done for EVERY memory reference. Every instruction fetch requires a memory reference, and many instructions have a memory operand. Therefore, the mapping must be extremely fast, a couple of nanoseconds, otherwise it becomes the bottleneck.
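
To make the size arithmetic concrete, a quick check assuming 4-byte entries as above:

#include <stdio.h>

int main(void) {
    unsigned long long entries32 = (1ULL << 32) / (1ULL << 12);  /* 2^20 */
    printf("32-bit: %llu entries -> %llu MB of page table\n",
           entries32, entries32 * 4 / (1024 * 1024));            /* 4 MB */
    unsigned long long entries64 = 1ULL << 52;                   /* 2^52 */
    printf("64-bit: %llu entries\n", entries64);
    return 0;
}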

Page Table Design Alternatives

1. Single page table stored in an array of fast hardware registers. The OS loads the registers from memory when a process is started.
Advantage: no memory references are needed for the page table.
Disadvantage: context switches require the entire page table to be loaded. If it is large, this is expensive.

2. Page table kept entirely in main memory. A single register points to the start of the page table.
Advantage: context switches only require updating the register pointer.
Disadvantage: one or more memory references are needed to read page table entries for each instruction.

Modern computers keep 'frequently used' page table entries on chip in a cache (similar to the first alternative above) and the others in main memory (similar to the second alternative).

Multilevel Page Tables

Instead of using only one level of indirection, use two.

[Figure: a 32-bit virtual address split into a top-level index, a second-level index and an offset. The top-level page table entry gives the base address of a second-level page table, whose entry in turn gives the base address of the desired page.]

Multilevel Page Tables

This addresses the page table size problem, since many of the second-level page tables need not be defined (and therefore need not be stored in main memory). Note that two page faults can occur for a single memory reference: if the second-level page table is not in memory, a page fault occurs; if the page that the second-level entry refers to is not in memory, another page fault occurs.

In general, page table entries are machine dependent, but contain the following info: page frame address, present/absent bit, modified bit, referenced bit, protection bits, and other info.
Page frame address: most significant bits of the physical memory address.
Present/absent bit: if 1, the page is in memory; if 0, it is on disk.
Modified bit: if set, the page has been written to, i.e. it is 'dirty'.
Referenced bit: used in the OS page replacement algorithm.
Protection bits: specify whether data in the page can be read/written/executed.
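
A hedged sketch of a two-level walk, assuming a 10+10+12-bit split of the 32-bit virtual address (a common choice) and the entry fields just listed; the exact bit layout is illustrative, not a real hardware format:

#include <stddef.h>
#include <stdint.h>

/* Illustrative page table entry: the fields above packed into 32 bits. */
typedef struct {
    unsigned frame      : 20;   /* page frame address (upper bits)   */
    unsigned present    : 1;    /* 1 = in memory, 0 = on disk        */
    unsigned modified   : 1;    /* "dirty": page has been written    */
    unsigned referenced : 1;    /* used by the replacement algorithm */
    unsigned protection : 3;    /* read / write / execute            */
    unsigned other      : 6;
} pte_t;

/* Walk a two-level table: 10-bit top index, 10-bit second index,
 * 12-bit offset.  Returns -1 where a real MMU would raise a page fault. */
static int64_t translate2(pte_t **top_table, uint32_t vaddr) {
    uint32_t top_idx = (vaddr >> 22) & 0x3FF;
    uint32_t sec_idx = (vaddr >> 12) & 0x3FF;
    uint32_t offset  =  vaddr        & 0xFFF;

    pte_t *second = top_table[top_idx];
    if (second == NULL)                 /* second-level table not present */
        return -1;                      /* -> page fault #1               */
    if (!second[sec_idx].present)       /* page itself not in memory      */
        return -1;                      /* -> page fault #2               */
    return ((int64_t)second[sec_idx].frame << 12) | offset;
}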

Translation Lookaside Buffers (TLBs)

With two-level paging, one memory reference could require three memory accesses! To reduce the number of times this occurs, a fast lookup table called a TLB is added as a hardware cache in the microprocessor.

[Figure: the CPU presents a virtual address (page number + offset). The page number is compared against all TLB keys simultaneously. On a TLB hit, the cached page frame number is used and no memory access is needed for the translation; on a TLB miss, the page table in memory is consulted to form the physical address.]

Translation Lookaside Buffers (TLBs)

The number of TLB entries varies from 8 to 2048, typically around 64. When a TLB miss occurs:
A trap occurs and an OS routine handles the fault.
The OS routine copies one (or more) entries (page frame numbers) from the page table in memory into one (or more) of the TLB entries.
The instruction is then restarted.
Therefore, if the page is referenced again soon, a TLB hit occurs, eliminating the memory reference for the page frame number.
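
A small software model of the lookup (the hardware compares all keys in parallel; the entry count and structure here are illustrative):

#include <stdbool.h>
#include <stdint.h>

#define TLB_ENTRIES 64              /* "typically around 64" */

struct tlb_entry {
    bool     valid;
    uint32_t vpage;                 /* key:   virtual page number */
    uint32_t frame;                 /* value: page frame number   */
};

static struct tlb_entry tlb[TLB_ENTRIES];

/* Sequential model of the associative compare done in hardware. */
static bool tlb_lookup(uint32_t vpage, uint32_t *frame_out) {
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].vpage == vpage) {
            *frame_out = tlb[i].frame;
            return true;            /* TLB hit: no page table access */
        }
    }
    return false;                   /* TLB miss: trap to the OS here */
}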