Administrivia. Deadlock Prevention Techniques. Handling Deadlock. Deadlock Avoidance


Administrivia
- Project discussion?
- Last time: wrapped up deadlock
- Today: start memory management

SUNY-BINGHAMTON CS35 SPRING 8 LEC. #13 1

Handling Deadlock
Three general approaches:
1. Use a protocol that guarantees deadlock cannot occur
   - Deadlock prevention: ensure that one of the ingredients necessary for deadlock cannot happen
   - Deadlock avoidance: a smarter way of avoiding deadlock (later)
2. Allow deadlock to happen, but detect it if it happens and recover
3. Do nothing (??); actually used by many OSs, including Unix
   - If deadlock is infrequent, why worry about it?
   - If user processes are deadlocked, users will eventually kill them (a slow form of deadlock recovery?)

Deadlock Prevention Techniques
- Prevent hold-and-wait: one-shot allocation
  - Conservative and inefficient
- Prevent "no preemption": allow preemption, by releasing or grabbing resources
  - Needs checkpointing/recovery; can lead to livelock
- Prevent circular wait
  - Impose a hierarchy on resources; acquire resources only in one direction

Deadlock Avoidance
- Deadlock avoidance is a more sophisticated way of preventing deadlock that makes decisions based on the state of the system
- Honor a resource request only if you can tell that it will not lead to deadlock (the resulting state is a safe state)
- How do I know that a state is safe? How can I guarantee that the processes are not already destined to deadlock?
- A state is safe if there exists an order [P1, P2, ..., Pn] in which the processes can execute to completion:
  - P1 finishes using only the available resources
  - P2 finishes using the available resources + P1's resources
  - P3 finishes with the available resources + P1's and P2's resources, etc.
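The circular-wait prevention rule (impose a hierarchy on resources and acquire them in only one direction) can be sketched as follows. This is an illustrative sketch, not from the slides: the lock objects, the `RANK` table, and the helper names are made up for the example.

```python
import threading

# Two resources with a fixed global ordering (their "rank"). Every thread
# must acquire locks in ascending rank, so a cycle in the wait-for graph,
# and hence circular wait, is impossible.
lock_a, lock_b = threading.Lock(), threading.Lock()
RANK = {id(lock_a): 1, id(lock_b): 2}

def with_both(locks, work):
    """Acquire the given locks in rank order, run work(), then release."""
    ordered = sorted(locks, key=lambda l: RANK[id(l)])
    for l in ordered:
        l.acquire()
    try:
        return work()
    finally:
        for l in reversed(ordered):
            l.release()

counter = [0]

def worker(locks):
    for _ in range(1000):
        with_both(locks, lambda: counter.__setitem__(0, counter[0] + 1))

# Both threads name the locks in opposite textual order; without the
# ordering discipline this pattern can deadlock, with it it cannot.
t1 = threading.Thread(target=worker, args=([lock_a, lock_b],))
t2 = threading.Thread(target=worker, args=([lock_b, lock_a],))
t1.start(); t2.start(); t1.join(); t2.join()
print(counter[0])  # 2000
```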

Banker's Algorithm
- A safe state is a realizable state where there exists at least one sequence in which the processes can run to completion
- Allow a resource request if the resulting state is safe
- How do we determine if a state is safe?

    S = set of processes;
    while (S is not empty) {
        1. Find a process p such that Need[p,i] <= V[i] for each resource i
        2. If no such process exists -- fail; the state is unsafe
        3. Remove p from S; add p's resources back to the available pool
    }

- The algorithm above is O(n^2); a more efficient implementation exists (Habermann's algorithm)

Banker's Algorithm Example

    Process   Allocation   Claim
              A  B  C      A  B  C
    P0        0  1  0      7  5  3
    P1        2  0  0      3  2  2
    P2        3  0  2      9  0  2
    P3        2  1  1      2  2  2
    P4        0  0  2      4  3  3

    V = [3 3 2]

- Is the current state safe? Is it deadlocked?
  - Generate the Need matrix (Claim - Allocation)
  - Apply the safety algorithm
- New request from P1: (1, 0, 2); allow it or not?

SUNY-BINGHAMTON CS35 SPRING 8 LEC. #13

Deadlock Detection
- Never block a process if enough resources are available; detect deadlock and recover once it occurs
- How to detect deadlock? (think about detecting a deadlock-free state)
  - Add a matrix Q holding each process's currently outstanding resource requests
  - Update the banker's algorithm to pick a process that may finish (given its currently held resources + outstanding requests)
  - Replace the Need matrix with the outstanding-request matrix
  - Apply the safety algorithm; if it succeeds, there is no deadlock -- why?

Deadlock Detection Example

    Process   Allocation   Outstanding
              A  B  C      A  B  C
    [table values garbled in transcription: P 2 2 2 3 2 5 1 P2 1 2 5 2 2 1 1 3 2 5 3 3 4 3]

    V = [2 3 2]

- Deadlocked state?
- When to run the deadlock detection algorithm?
- What to do if there is deadlock? Kill processes or preempt resources
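The safety check can be sketched directly from the pseudocode. The state below is the classic Silberschatz example that the slide's table appears to transcribe (the zeros were lost in transcription), so treat the specific numbers as a reconstruction; function names are illustrative.

```python
def is_safe(available, allocation, claim):
    """Return a safe completion order of process indices, or None if unsafe."""
    n, m = len(allocation), len(available)
    # Need = Claim - Allocation
    need = [[claim[p][i] - allocation[p][i] for i in range(m)] for p in range(n)]
    work = list(available)
    done = [False] * n
    order = []
    progress = True
    while progress:
        progress = False
        for p in range(n):
            if not done[p] and all(need[p][i] <= work[i] for i in range(m)):
                # p can run to completion; reclaim its allocation into the pool
                for i in range(m):
                    work[i] += allocation[p][i]
                done[p] = True
                order.append(p)
                progress = True
    return order if all(done) else None

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
claim      = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
available  = [3, 3, 2]

print(is_safe(available, allocation, claim))   # a safe order, e.g. [1, 3, 4, 0, 2]

# Trial-grant P1's request (1, 0, 2), then re-run the safety check:
# grant only if the resulting state is still safe.
req, p = [1, 0, 2], 1
avail2 = [available[i] - req[i] for i in range(3)]
alloc2 = [row[:] for row in allocation]
alloc2[p] = [alloc2[p][i] + req[i] for i in range(3)]
print(is_safe(avail2, alloc2, claim) is not None)   # True: the request can be granted
```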

Memory Management Motivation
- Primary goal is to bring programs into memory for execution
- What is the physical organization of the memory? How can the operating system help manage it?
- Memory management is easy for uniprogramming systems: load and execute
  - What happens if the program is bigger than the memory?
- Multiprogramming makes life interesting:
  - How to share the memory between the processes?
  - What additional problems/side-effects occur because of this sharing, and how do we solve them?
- Input queue: queue of processes on disk waiting to be loaded into memory

Compiling, Linking and Loading
[Figure: source program -> compiler or assembler -> object module -> linkage editor (together with other object modules and the system library) -> load module -> loader (with the dynamically loaded system library) -> in-memory binary image]
- Problem: what to do with memory references/branches; will the program go in the same place in memory every time?

Address Binding
- Absolute binding: bind memory references at compile (link) time; the program has to go in a specific place in memory
- Relocatable binding: the compiler produces relative addresses; at load time, these addresses are translated
  - What happens if the process is swapped?
- Dynamic run-time binding: references are kept relative; the final translation is done at run time (when the reference occurs)
  - Too expensive? In software, yes; hardware mechanisms assist

Dynamic Loading
- Loading so far has been static
- Dynamic loading: load parts of the program when/if they are needed. Why?
  - A routine is not loaded until it is called; better memory utilization
  - Useful for pieces of code that are large but infrequently needed
- No OS support needed; implemented through program design

Linking
- External references (to other object files, or libraries) are usually resolved at link time
- Some OSs allow dynamic linking
  - References to some libraries are postponed until execution time
  - A small piece of code (called a stub) is used to locate the appropriate memory-resident library routine
  - The stub replaces itself with the address of the library routine
  - The OS is needed to check whether the required code is already in some process's memory address space (e.g., if another process has it, the OS is the only entity that can allow sharing)
- What if the required memory is larger than the physical memory size?

Overlays
- Key idea: suppose we keep only the parts of the program we are currently using in memory
- Overlays are the oldest incarnation of this idea
- Example: a two-pass assembler with only 150 Kbytes of physical memory available
  [Figure: Pass 1 (70K) and Pass 2 (80K) overlay each other; the symbol table (20K), common routines (30K), and overlay driver (10K) stay resident]

Swapping
[Figure: main memory split into system and user space; (1) swap a process out to disk, (2) swap P2 in]
- With overlays, a process loads the portion of its image that is needed to fit in memory
- Swapping is similar (remember the medium-term scheduler); a process can be swapped temporarily out of memory into a swap area on disk
- The major cost is the swap time between disk and memory, proportional to the size of the process
- Swapping is used in most modern OSs, including Unix and Windows

Memory Management Requirements
- Relocation: we do not know beforehand where the program will actually go in memory
- Protection: firewalls are needed to keep programs from interfering with the OS or with each other
- Sharing: the protection mechanism should be flexible enough to allow portions of memory to be shared (instructions or data)

Logical vs. Physical Address
- Each process has a logical address space that is bound to a separate physical address space
- Logical address (also called virtual address): generated by the CPU; this is the address in the program after linking
- Physical address: the address seen by the memory
- Impossible/too expensive to translate at run time in software; the Memory Management Unit translates

Address Translation: Logical to Physical
[Figure: the CPU issues a logical address; it is compared against the limit register -- if it is not below the limit, trap with a memory-protection error; otherwise the base register is added to produce the physical address sent to memory]
- For each contiguous chunk in memory:
  - Base (or relocation) register: added to the logical address to produce the physical address
  - Limit (or bound) register: check whether the address is within the limit (logical or physical address?)
- Why not translate at load time (and after swaps)?
  - What about dynamic addresses (pointers)? Can you resolve those before the program starts execution? (the memory aliasing problem)
  - Also, it can be expensive to recompute all the addresses and update everyone

How to Share the Memory: Single Partition
- What if we share it in time? One process gets the full memory
- When it is blocked (or preempted):
  - Swap it out to disk
  - Swap in the process your scheduler wants to run next
- The OS (or at least the kernel) resides in memory always
  - Usually in the low address area, along with the interrupt tables
- For now, we assume contiguous allocation

Fixed Partitions
- Memory is statically divided into partitions; the sizes are fixed in the OS (recompile the OS to change them)
- Partition sizes can be equal or unequal
- What happens if the partition is too big for a program? Internal fragmentation
- What happens if the partition is too small for a program? Use overlays, at a big cost to swap in/swap out
- Do these problems disappear if the partition sizes are unequal?
- Problem: restricts the number of ready processes to the number of partitions
- What does the placement algorithm look like (queue-per-partition vs. single queue)?

Variable Size Partitions (Dynamic Partitioning)
- Allocate memory on an as-needed basis:
  - Find an available memory area big enough to fit the process
  - Allocate exactly the needed space to the process
- Is internal fragmentation a problem? Is this better or worse than fixed partitions?
- External fragmentation is a problem; requires periodic compaction
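The base/limit check the MMU performs on every access can be sketched as below; the register values are illustrative, not from the slides.

```python
# Per-access base/limit translation, as the hardware would do it.
BASE, LIMIT = 140000, 120000   # this process occupies physical [140000, 260000)

def translate(logical):
    """Map a logical address to a physical one, trapping on a protection error."""
    if logical >= LIMIT:        # compare against the limit register first
        raise MemoryError(f"protection trap: {logical} >= limit {LIMIT}")
    return BASE + logical       # then add the base (relocation) register

print(translate(24))            # 140024
try:
    translate(130000)           # past the limit: memory-protection error
except MemoryError:
    print("trapped")
```

Note that relocation falls out for free: swapping the process back in at a different address only requires reloading BASE, with no rewriting of the program's addresses.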

Placement
- A new process comes in; where should it be placed in memory?
- Find a suitable area (free, and equal to or bigger than what we need)
- Some policies:
  - First fit: the first suitable area where the process fits
  - Next fit: the first suitable area, starting from the last placement
  - Best fit: the smallest of the suitable areas
- What are the tradeoffs in using these? Best fit performs worst; why?

Placement Example
- Job queue (process, memory, time): P1 600K/10, P2 1000K/5, P3 300K/20, P4 700K/8, P5 500K/15
- Total memory is 2560K; the OS takes 400K, leaving 2160K of user space
- [Figure sequence: P1, P2, and P3 are allocated at 400K, 1000K, and 2000K; P2 finishes, freeing 1000K-2000K; allocate P4 (700K) at 1000K-1700K; P1 finishes, freeing 400K-1000K; allocate P5 (500K) at 400K-900K]

Compaction
[Figure: the holes at 900K-1000K (100K), 1700K-2000K (300K), and 2300K-2560K (260K) are compacted by sliding P4 down to 900K-1600K and P3 down to 1600K-1900K, leaving one 660K hole at the top of memory]
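The first-fit and best-fit policies above can be sketched over a free list of (start, size) holes; the hole list and function names here are illustrative.

```python
def first_fit(holes, size):
    """Index of the first hole that can hold `size`, or None."""
    for i, (_, hsize) in enumerate(holes):
        if hsize >= size:
            return i
    return None

def best_fit(holes, size):
    """Index of the smallest hole that can hold `size`, or None."""
    fits = [(hsize, i) for i, (_, hsize) in enumerate(holes) if hsize >= size]
    return min(fits)[1] if fits else None

# Free list as (start, size) in KB: a 100K, a 300K, and a 260K hole.
holes = [(900, 100), (1700, 300), (2300, 260)]
print(first_fit(holes, 200))   # 1: the 300K hole at 1700K comes first
print(best_fit(holes, 200))    # 2: the 260K hole at 2300K is the tightest fit
```

Best fit's tendency to perform worst follows from examples like this: it shaves each hole down to a sliver (here a 60K leftover) that is too small to be useful, accelerating external fragmentation.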

Fixed Partitioning: Discussion
- Limits the number of active processes statically
- Internal fragmentation

Dynamic Partitioning: Discussion
- Complex to maintain
- External fragmentation, and the overhead of compaction

Can we strike a balance?
- What happens if we swap out or relocate memory (for example, due to compaction) while an I/O device is writing to it? Two options:
  - Force the job to stay in memory where it is while it has active I/O
  - Do I/O only to OS buffers, and copy when done

Buddy System
- Memory is available in blocks of 2^k bytes, l <= k <= u
  - So the smallest block is 2^l, the biggest is 2^u
- Idea: a request for r bytes is given a memory block of size 2^j, where 2^(j-1) < r <= 2^j
- If no such block is available, find a 2^(j+1) area and split it into two 2^j buddies (recursively go to 2^(j+2), etc., if no 2^(j+1) is available)
- If two buddies become free, they are coalesced into a single block

    gethole(i) {
        if (i > u) fail;
        if (list_of_i is empty) {
            hole = gethole(i+1);
            split hole into buddies;
            put buddies on list_of_i;
        }
        return first hole on list_of_i;
    }

- Example: free memory is 1 Meg (2^20); the following requests are executed: 14k, 1k, 6k, 256k
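The gethole() sketch above can be fleshed out as a minimal buddy allocator over a 1 Meg (2^20) pool. This is an illustrative sketch only: the block-size bounds and request sizes are assumptions, and freeing/coalescing of buddies is omitted.

```python
L, U = 12, 20                        # block sizes from 2^12 (4K) to 2^20 (1 Meg)
free_lists = {k: [] for k in range(L, U + 1)}
free_lists[U] = [0]                  # initially one free block: the whole pool

def get_hole(k):
    """Return the offset of a free 2^k block, splitting larger blocks as needed."""
    if k > U:
        raise MemoryError("no block big enough")
    if not free_lists[k]:
        start = get_hole(k + 1)          # recursively split a 2^(k+1) block
        free_lists[k].append(start + (1 << k))  # keep one buddy on the free list
        return start                     # ...and hand out the other
    return free_lists[k].pop(0)

def alloc(r):
    """Serve a request for r bytes with a block of size 2^j, 2^(j-1) < r <= 2^j."""
    j = max(L, (r - 1).bit_length())
    return get_hole(j), 1 << j

off, size = alloc(100 * 1024)        # 100K request -> 128K block
print(off, size)                     # 0 131072
off2, size2 = alloc(240 * 1024)      # 240K request -> 256K block
print(off2, size2)                   # 262144 262144
```

The internal fragmentation is visible in the output: a 100K request consumes a 128K block, the price the buddy system pays for cheap splitting and coalescing.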