Fall 2015 COMP3511 Review


Outline
- Monitors
- Deadlock and the Banker's Algorithm
- Paging and Segmentation
- Page Replacement Algorithms and the Working-Set Model
- File Allocation
- Disk Scheduling

Monitors
- A high-level abstraction that provides a convenient and effective mechanism for process synchronization
- An abstract data type: internal variables are accessible only by the procedures within the monitor
- Only one process may be active within the monitor at a time
- Alone, not powerful enough to model some synchronization schemes; condition variables supply the missing expressiveness

    monitor monitor-name {
        // shared variable declarations
        condition x, y;
        procedure P1 (...) { ... }
        ...
        procedure Pn (...) { ... }
        initialization code (...) { ... }
    }

Condition Variables
- Two operations on a condition variable:
  - x.wait(): the invoking process is suspended until another process calls x.signal()
  - x.signal(): resumes exactly one of the processes (if any) that invoked x.wait(); if no process is waiting on x, the signal has no effect

Monitor with Condition Variables (figure slide)

Deadlock Characterization
Deadlock can arise if four conditions hold simultaneously:
- Mutual exclusion: only one process at a time can use a resource
- Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes
- No preemption: a resource can be released only voluntarily by the process holding it, after that process has completed its task
- Circular wait: there exists a set {P0, P1, ..., Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0
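The monitor-with-condition-variables pattern maps naturally onto Python's threading primitives. A minimal sketch (not from the course material): the lock enforces "only one process active in the monitor at a time", and a Condition object plays the role of x, with wait()/notify() standing in for x.wait()/x.signal(). The buffer class and item values are illustrative only.

```python
import threading
from collections import deque

class BoundedBuffer:
    """Monitor-style bounded buffer. The shared lock gives mutual
    exclusion inside the 'monitor'; not_full and not_empty are the
    condition variables."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = deque()
        self.lock = threading.Lock()
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def put(self, item):
        with self.lock:
            while len(self.items) == self.capacity:
                self.not_full.wait()        # like x.wait(): suspend the caller
            self.items.append(item)
            self.not_empty.notify()         # like x.signal(): resume one waiter

    def get(self):
        with self.lock:
            while not self.items:
                self.not_empty.wait()
            item = self.items.popleft()
            self.not_full.notify()
            return item

buf = BoundedBuffer(2)
out = []
consumer = threading.Thread(target=lambda: out.extend(buf.get() for _ in range(4)))
consumer.start()
for i in range(4):
    buf.put(i)
consumer.join()
print(out)
```

The `while` (rather than `if`) around each wait re-checks the condition after wake-up, since another thread may run between the signal and the resumption.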

Resource-Allocation Graph
- A set of vertices V and a set of edges E; V is partitioned into two types:
  - P = {P1, P2, ..., Pn}, the set of all processes in the system
  - R = {R1, R2, ..., Rm}, the set of all resource types in the system
- Each resource type Ri has Wi instances
- Each process utilizes a resource as follows: request, use, release
- Request edge: directed edge Pi -> Rj
- Assignment edge: directed edge Rj -> Pi

Safe State
- When a process requests an available resource, the system must decide whether immediate allocation leaves the system in a safe state
- The system is in a safe state if there exists a safe sequence of all processes: a sequence <P1, P2, ..., Pn> is safe if, for each Pi, the resources Pi can still request can be satisfied by the currently available resources plus the resources held by all Pj with j < i
- If Pi's resource needs are not immediately available, Pi can wait until all such Pj have finished; when Pj finishes, Pi can obtain its needed resources, execute, return its allocated resources, and terminate; then Pi+1 can obtain its needed resources, and so on

Banker's Algorithm
- Each resource type can have multiple instances
- Each process must claim its maximum use a priori
- When a process requests a resource, it may have to wait
- When a process gets all its resources, it must return them in a finite amount of time

Data structures (n = number of processes, m = number of resource types):
- Available: vector of length m; Available[j] = k means k instances of resource type Rj are available
- Max: n x m matrix; Max[i,j] = k means Pi may request at most k instances of Rj
- Allocation: n x m matrix; Allocation[i,j] = k means Pi is currently allocated k instances of Rj
- Need: n x m matrix; Need[i,j] = k means Pi may need k more instances of Rj to complete its task; Need[i,j] = Max[i,j] - Allocation[i,j]

Safety Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize Work = Available and Finish[i] = false for i = 1, 2, ..., n.
2. Find an i such that both (a) Finish[i] == false and (b) Need_i <= Work. If no such i exists, go to step 4.
3. Work = Work + Allocation_i; Finish[i] = true; go to step 2.
4. If Finish[i] == true for all i, the system is in a safe state.

Base and Limit Registers
- Two special registers, base and limit, prevent a user process from straying outside its designated memory area
- During a context switch, the OS loads new base and limit register values from the PCB
- User code is NOT allowed to change the base and limit registers (loading them requires privileged instructions)

Contiguous Memory Allocation
- Each process is contained in a single contiguous section of memory
- The degree of multiprogramming is limited by the number of partitions
- Variable-partition sizes for efficiency (each partition sized to a given process's needs)
- Hole: a block of available memory; holes of various sizes are scattered throughout memory
- The operating system maintains information about (a) allocated partitions and (b) free partitions (holes)
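The Banker's safety algorithm translates almost line-for-line into code. A sketch; the five-process, three-resource data below is a textbook-style illustrative example, not taken from the slides.

```python
def is_safe(available, allocation, need):
    """Safety algorithm: look for a safe sequence of all processes.
    available: list of m ints; allocation, need: n x m matrices."""
    work = available[:]                    # step 1: Work = Available
    finish = [False] * len(allocation)     #         Finish[i] = false
    sequence = []
    while True:
        # step 2: find an i with Finish[i] == false and Need_i <= Work
        for i, done in enumerate(finish):
            if not done and all(x <= w for x, w in zip(need[i], work)):
                # step 3: P_i can run to completion and return its resources
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                sequence.append(i)
                break
        else:
            break                          # no such i: go to step 4
    # step 4: safe iff Finish[i] == true for all i
    return all(finish), sequence

# Illustrative data: 5 processes, 3 resource types
allocation = [[0,1,0],[2,0,0],[3,0,2],[2,1,1],[0,0,2]]
maximum    = [[7,5,3],[3,2,2],[9,0,2],[2,2,2],[4,3,3]]
need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(maximum, allocation)]
safe, seq = is_safe([3,3,2], allocation, need)
print(safe, seq)
```

With this data the system is safe; the greedy lowest-index search finds the safe sequence P1, P3, P0, P2, P4.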

Paging
- The physical address space of a process can be noncontiguous
- Divide physical memory into fixed-sized blocks called frames; divide logical memory into blocks of the same size called pages
- Keep track of all free frames
- Set up a page table to translate logical to physical addresses

Address Translation
- An address generated by the CPU is divided into:
  - Page number (p): used as an index into the page table, which contains the base address of each page in physical memory
  - Page offset (d): combined with the base address to form the physical memory address that is sent to the memory unit

Implementation of Page Table
- The page table is kept in main memory; it can be very large (e.g., 1M entries)
- The page-table base register (PTBR) points to the page table; the page-table length register (PTLR) indicates its size
- In this scheme every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction
- The two-memory-access problem can be solved with a TLB (translation look-aside buffer): a special, small, fast-lookup hardware cache
  - Each TLB entry consists of a key (or tag) and a value
  - The page number is presented to the TLB; if found, its frame number is immediately available to access memory
  - Fast but expensive

Paging Hardware with TLB (figure slide)

TLB Miss and Hit Ratio
- TLB miss: if the page number is not in the TLB, a memory reference to the page table must be made
- Hit ratio: the percentage of times that a page number is found in the TLB
- Example: assume a TLB search takes 20 ns and a memory access takes 100 ns; a TLB hit costs 1 memory access, a TLB miss costs 2
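The page-number/offset split can be sketched concretely. This assumes 32-bit logical addresses with 4 KB pages (so the offset is the low 12 bits), and the page-to-frame mapping is hypothetical.

```python
PAGE_SIZE = 4096          # assumed: 4 KB pages
OFFSET_BITS = 12          # log2(PAGE_SIZE)

def translate(logical, page_table):
    p = logical >> OFFSET_BITS        # page number: index into the page table
    d = logical & (PAGE_SIZE - 1)     # page offset within the page
    frame = page_table[p]             # frame (base) address from the table
    return frame * PAGE_SIZE + d      # physical address sent to the memory unit

page_table = {0: 5, 1: 9}             # hypothetical page -> frame mapping
print(hex(translate(0x1234, page_table)))   # -> 0x9234 (page 1 -> frame 9)
```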

Effective Access Time (EAT)
- If hit ratio = 80%: EAT = 0.8 x (20 + 100) + 0.2 x (20 + 200) = 140 ns
- If hit ratio = 98%: EAT = 0.98 x (20 + 100) + 0.02 x (20 + 200) = 122 ns

Segmentation
- A memory-management scheme that supports the user view of memory
- A program is a collection of segments of different sizes; a segment is a logical unit
- Logical view: multiple separate segments in user space, each allocated a contiguous block of physical memory (figure slide)
- Leads to external fragmentation

Motivation for Virtual Memory
- Should an entire process be in memory before it can execute? In fact, real programs show that, in many cases, the entire program is not needed
- Even when the entire program is needed, it may not all be needed at the same time
- More programs could run concurrently, increasing CPU utilization and throughput
- Less I/O would be needed to load or swap each user program into memory, so each user program would run faster
- Allows processes to share files easily and to implement shared memory

Page Replacement
- If there is no free frame: find some page in memory that is not really in use and swap it out
- Replacement algorithm: we want an algorithm that results in the minimum number of page faults
- The same page may be brought into and out of memory several times
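The effective-access-time arithmetic above can be checked directly:

```python
TLB_SEARCH = 20      # ns, from the slides' example
MEM_ACCESS = 100     # ns

def eat(hit_ratio):
    hit_time  = TLB_SEARCH + MEM_ACCESS          # TLB hit: 1 memory access
    miss_time = TLB_SEARCH + 2 * MEM_ACCESS      # TLB miss: 2 memory accesses
    return hit_ratio * hit_time + (1 - hit_ratio) * miss_time

print(eat(0.80))     # 140 ns
print(eat(0.98))     # 122 ns
```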

Page Replacement / FIFO Page Replacement (figure slides)

Page Replacement Algorithms
- Optimal page replacement: 9 page faults on the example reference string
- LRU (Least Recently Used): use the recent past as an approximation of the near future; replace the page that has not been used for the longest period of time; 12 page faults on the same example
- LRU is considered good, but how can it be implemented? Few computer systems provide sufficient hardware support for true LRU
- LRU approximations: reference bits, second chance

Working-Set Model
- The working-set model is based on locality
- Δ = working-set window: a fixed number of page references (example: 10,000 instructions)
- WSS_i (working set size of process Pi) = total number of pages referenced in the most recent Δ (varies in time)
  - If Δ is too small, it will not encompass the entire locality
  - If Δ is too large, it will encompass several localities
  - If Δ = infinity, it will encompass the entire program
- D = sum of all WSS_i = total demand for frames (by all processes)
- If D > m (where m is the number of available frames), thrashing occurs
- Policy: if D > m, suspend one of the processes; its pages are swapped out and its frames are reallocated to other processes; the suspended process can be restarted later

A Typical File-system Organization (figure slide)
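FIFO and LRU fault counting can be sketched in a few lines. The reference string below is the classic one from the textbook, which I assume is the example behind the slide's fault counts; on it, FIFO incurs 15 faults with 3 frames, versus 12 for LRU (and 9 for optimal).

```python
def count_faults(refs, frames, policy):
    """Count page faults for FIFO or LRU with `frames` physical frames.
    The list `mem` is kept in eviction order: front = next victim."""
    mem, faults = [], 0
    for page in refs:
        if page in mem:
            if policy == "LRU":       # refresh recency on a hit
                mem.remove(page)
                mem.append(page)
            continue
        faults += 1                   # page fault
        if len(mem) == frames:
            mem.pop(0)                # evict oldest (FIFO) / least recent (LRU)
        mem.append(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(count_faults(refs, 3, "FIFO"), count_faults(refs, 3, "LRU"))
```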

Allocation Methods
- An allocation method refers to how disk blocks are allocated for files
- Objectives: maximize sequential performance; easy random access to a file; easy management of a file (growth, truncation, etc.)
- Three approaches: contiguous allocation, linked allocation, indexed allocation

Combined Scheme: UNIX (4K bytes per block)
- A multi-level index file; key idea: efficient for small files while still allowing large files
- File header format:
  - The first 10 pointers point to data blocks
  - Pointer 11 points to an indirect block containing 256 block pointers
  - Pointer 12 points to a doubly indirect block containing 256 indirect block pointers, for a total of 64K blocks
  - Pointer 13 points to a triply indirect block (16M blocks)
  - Pointers are filled in dynamically

Free-Space Management
- Bit vector (n blocks): bit[i] = 1 means block[i] is free; bit[i] = 0 means block[i] is occupied
- Linked list (free list): each free block contains a pointer to the next free block; cannot get contiguous space easily, but no space is wasted
- Grouping: store the addresses of n free blocks in the first free block
- Counting: record <first free block, number of free contiguous blocks>; several contiguous blocks may be allocated and freed simultaneously

Disk Scheduling
- The operating system is responsible for using the hardware efficiently; for disk drives, this means a fast access time and high disk bandwidth
- Access time has two major components:
  - Seek time: the time for the disk arm to move the heads to the cylinder (track) containing the desired sector
  - Rotational latency: the additional time spent waiting for the disk to rotate the desired sector under the disk head
- Minimize seek time; seek time is roughly proportional to seek distance
- Disk bandwidth: the total number of bytes transferred, divided by the total time between the first request for service and the completion of the last transfer
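The pointer arithmetic of the combined scheme can be checked. Note an internal assumption of the slide: 256 pointers per indirect block (consistent with its 64K = 256² and 16M = 256³ figures), even though a 4 KB block of 4-byte pointers would actually hold 1024.

```python
BLOCK = 4096       # bytes per block, from the slide
PTRS  = 256        # pointers per indirect block, as the slide assumes

direct = 10               # pointers 1-10: direct data blocks
single = PTRS             # pointer 11: one indirect block
double = PTRS ** 2        # pointer 12: 64K blocks via doubly indirect
triple = PTRS ** 3        # pointer 13: 16M blocks via triply indirect

max_blocks = direct + single + double + triple
print(max_blocks, "blocks addressable;",
      max_blocks * BLOCK // 2**30, "GiB maximum file size (approx.)")
```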
Disk Scheduling (cont.)
- When a process needs I/O to or from a disk, it issues a system call to the operating system specifying:
  - Whether the operation is input or output
  - The disk address for the transfer
  - The memory address for the transfer
  - The number of sectors to be transferred
- In a multiprogramming system with many processes, the request is placed in a disk queue if the desired disk drive and controller are not available
- When one request is completed, the operating system must choose which pending request to service next; this choice is made by a disk-scheduling algorithm: FCFS, SSTF, SCAN, LOOK, C-SCAN, C-LOOK
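As a concrete contrast between two of these algorithms, here is a sketch of FCFS and SSTF head movement. The request queue and starting head position are the textbook's classic example, not taken from the slides.

```python
def fcfs(head, queue):
    """Service requests in arrival order; return total seek distance."""
    total, pos = 0, head
    for cyl in queue:
        total += abs(cyl - pos)
        pos = cyl
    return total

def sstf(head, queue):
    """Always service the pending request closest to the current head."""
    total, pos, pending = 0, head, list(queue)
    while pending:
        nxt = min(pending, key=lambda c: abs(c - pos))
        total += abs(nxt - pos)
        pos = nxt
        pending.remove(nxt)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]   # head starts at cylinder 53
print(fcfs(53, queue), sstf(53, queue))        # FCFS: 640, SSTF: 236 cylinders
```

SSTF's large improvement over FCFS illustrates why seek distance, not request order, drives disk-scheduling design; SSTF can, however, starve distant requests, which motivates SCAN/LOOK.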