
Midterm #2
CMSC 412 Operating Systems, Fall 2005
November 22, 2004

Guidelines

- This exam has 7 pages (including this one); make sure you have them all.
- Put your name on each page before starting the exam.
- Write your answers directly on the exam sheets, using the back of the page as necessary.
- Bring your exam to the front when you are finished. Please be as quiet as possible. If you have a question, raise your hand and I will come to you.
- Errors on the exam will be posted on the board if they are discovered. If you feel an exam question assumes something that is not written, write it down on your exam sheet. Barring some unforeseen error on the exam, however, you shouldn't need to do this at all, so be careful when making assumptions.
- Use good test-taking strategy: read through the whole exam first, and answer the questions easiest for you first.

Question   Points   Score
1          15
2          20
3          25
4          15
5          25
Total      100

1. (Fragmentation, 15 points)

(a) (6 points) Which of the following schemes for memory allocation exhibit internal fragmentation? Which exhibit external fragmentation? Check all the boxes that apply.

Allocation Scheme   External   Internal
Contiguous          X          -
Segmented           X          (X)
Paged               -          X

Segmentation may or may not have internal fragmentation, depending on whether there is a minimum segment size. On the x86, for example, segment sizes are specified either in increments of single bytes or in increments of 4K pages, with the latter allowing a much larger maximum segment size.

(b) (6 points) Which of the following schemes for file allocation exhibit internal fragmentation? Which exhibit external fragmentation? Check all the boxes that apply.

Allocation Scheme   External   Internal
Contiguous          X          X
Linked              -          X
Indexed             -          X

All schemes exhibit internal fragmentation because files are allocated in blocks, rather than bytes.

(c) (3 points) Of the schemes in part 1b that exhibit internal fragmentation, what is the maximum internal fragmentation possible per file if using a block size of B bytes?

The simplest reasonable answer is B - 1 bytes: a file with no data needs no blocks at all, while a file with 1 byte needs one block, thus wasting B - 1 bytes. However, the answer can be more precise. Under the indexed scheme, there is also the overhead of the index block; if we assume the index block is B bytes and block pointers are 4 bytes, that overhead is B - 4 bytes, so the total becomes 2B - 5 bytes. Having multiply-indirected blocks would make it worse!
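To make the arithmetic in part (c) concrete, here is a minimal Python sketch (not part of the exam; the function names are illustrative) assuming, as the solution does, 4-byte block pointers and a one-byte file as the worst case.

# Worst-case internal fragmentation per file, part (c).
def worst_case_waste_linked(block_size):
    # A 1-byte file occupies one whole block, wasting block_size - 1 bytes.
    return block_size - 1

def worst_case_waste_indexed(block_size, ptr_size=4):
    # Same 1-byte file, plus an index block holding a single ptr_size-byte
    # pointer: (block_size - 1) data waste + (block_size - ptr_size) index waste.
    return (block_size - 1) + (block_size - ptr_size)

B = 1000
print(worst_case_waste_linked(B))    # 999  == B - 1
print(worst_case_waste_indexed(B))   # 1995 == 2B - 5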

2. (Page Replacement, 20 points)

(a) (4 points) Suppose a program issues a request for a page not currently in main memory. Given that main memory is composed of 4 frames, use the following table to determine which frame's contents should be swapped out:

Frame #   Time When Loaded   Time Last Referenced   Referenced Bit   Modified Bit
0         126                279                    0                0
1         230                280                    1                0
2         120                282                    1                1
3         160                290                    1                1

i. When using FIFO page replacement: frame 2.
ii. When using LRU page replacement: frame 0.
iii. When using Most-Recently Used (MRU) page replacement: frame 3.
iv. When using Least-Frequently Used (LFU) page replacement: frame 0. Note that since we do not know when the reference bits were last reset, this may not be accurate, so another reasonable answer is simply that we can't tell.
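A minimal Python sketch (not part of the exam) of how each policy in part (a) picks its victim from the table above. Treating the referenced bit as a coarse use count for LFU is an assumption, which is why "can't tell" is also acceptable.

# Rows copied from the table: (frame, time_loaded, time_last_referenced,
# referenced_bit, modified_bit).
frames = [
    (0, 126, 279, 0, 0),
    (1, 230, 280, 1, 0),
    (2, 120, 282, 1, 1),
    (3, 160, 290, 1, 1),
]

fifo = min(frames, key=lambda f: f[1])[0]   # oldest load time        -> frame 2
lru  = min(frames, key=lambda f: f[2])[0]   # oldest last reference   -> frame 0
mru  = max(frames, key=lambda f: f[2])[0]   # newest last reference   -> frame 3
lfu  = min(frames, key=lambda f: f[3])[0]   # lowest "use count"      -> frame 0

print(fifo, lru, mru, lfu)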

(b) For each of the next two questions, assuming no pages are in memory at the start, indicate which pages are in memory in the indicated frames following each reference in the sequence, using the given page replacement algorithm. If a fault occurs, put a check in the "fault?" box. If an algorithm needs to evict a frame, but could choose equally from among multiple frames (i.e., there is a tie according to the algorithm), choose the frame for the page that was brought into memory least recently.

i. (8 points) FIFO page replacement. ("-" marks an empty frame or no fault.)

page reference   0  1  7  2  3  2  7  1  0  3
frame 1          0  0  0  0  3  3  3  3  3  3
frame 2          -  1  1  1  1  1  1  1  0  0
frame 3          -  -  7  7  7  7  7  7  7  7
frame 4          -  -  -  2  2  2  2  2  2  2
fault?           X  X  X  X  X  -  -  -  X  -

ii. (8 points) LRU page replacement.

page reference   0  1  7  2  3  2  7  1  0  3
frame 1          0  0  0  0  3  3  3  3  0  0
frame 2          -  1  1  1  1  1  1  1  1  1
frame 3          -  -  7  7  7  7  7  7  7  7
frame 4          -  -  -  2  2  2  2  2  2  3
fault?           X  X  X  X  X  -  -  -  X  X
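The traces above can be reproduced with a short simulation. This Python sketch (not part of the exam) tracks the resident set and faults for FIFO and LRU; the question's tie-break rule never actually comes into play for these two policies on this reference string.

from collections import OrderedDict

REFS = [0, 1, 7, 2, 3, 2, 7, 1, 0, 3]
NFRAMES = 4

def simulate(refs, nframes, policy):
    resident = OrderedDict()   # page -> None; order = load order (FIFO) or recency (LRU)
    faults = 0
    for page in refs:
        if page in resident:
            hit = True
            if policy == "LRU":
                resident.move_to_end(page)        # refresh recency on a hit
        else:
            hit = False
            faults += 1
            if len(resident) == nframes:
                resident.popitem(last=False)      # evict the oldest entry
            resident[page] = None
        print(page, list(resident), "hit" if hit else "fault")
    return faults

print(simulate(REFS, NFRAMES, "FIFO"))   # 6 faults
print(simulate(REFS, NFRAMES, "LRU"))    # 7 faults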

3. (File System Implementation, 25 points)

Background: Say we have a file system that uses 1000-byte blocks. In the file system's metadata, block numbers are expressed as 2-byte integers. Assume that all accesses to the disk are through a buffer cache that contains at most 4 blocks, using an LRU replacement strategy. Writes to disk should only occur when the user indicates this explicitly with a sync(), or when a dirty buffer needs to be evicted. Do not assume that you have any other caches in the system (e.g., a name cache).

The root directory is stored at block 1, with all directory entries fitting in this single block. This directory contains a file "file", consisting of 10 data blocks. file's FCB and data blocks are stored on disk at the following blocks:

File block #   FCB   0   1    2    3    4    5   6   7    8    9
Disk block #   10    4   12   15   16   17   2   3   20   21   22

Problem: Say a user program issues the following system calls for accessing the filesystem. Assuming we start with an empty buffer cache, fill out the table below indicating how many disk blocks are read or written on each system call. For the first column, assume you are using a file allocation table (FAT); assume the FAT is stored at block 0 on the disk. For the second column, assume you are using an indexed allocation strategy, where the index block for file is stored at block 23.

If you wish, you may state assumptions about your answers, draw pictures, etc. It may be useful for you to keep a picture of the contents of the buffer cache next to each line in the table below. This may get you some partial credit. However, if these things are incorrect, you could lose points.

                                   FAT                   Indexed
Operation                          # reads   # writes    # reads   # writes
open /file                         2         0           2         0
read 50 bytes at offset 100        1         0           2         0
write 200 bytes at offset 400      0         0           0         0
write 50 bytes at offset 1100      2         0           1         0
write 50 bytes at offset 8100      1         1           1         1
read 50 bytes at offset 250        1         1           1         1
sync                               0         1           0         1
read 50 bytes at offset 3400       1         0           1         0

The open requires reading in the root directory block; the answer also includes reading in the file control block, though that could be delayed. The first read requires reading the first block of the file; in both cases, for this you need the file control block, and for the indexed case you also need the index block. To write to a block, we have to first read it in. However, in the case of the first write, the block is already in the cache; the write simply marks this block as dirty. For the next write, the block has to be read in again; in the case of FAT, this requires reading in the FAT, which is stored in a single block, to get the link to the file's second block. The next two actions cause the dirty blocks to get evicted from the cache, and so these must be written. Therefore, at the time of sync, the only dirty block is the ninth file block, which gets written. Finally, the last read accesses a block not in the cache, so it must be read from disk.
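The per-operation counts above come from tracking the 4-block LRU buffer cache by hand. Below is a minimal Python sketch (not part of the exam) of that bookkeeping; the class name and the exact sequence of block numbers you feed it depend on your stated assumptions about the FAT, FCB, and index block, so it is an illustration rather than the official accounting.

from collections import OrderedDict

class BufferCache:
    def __init__(self, capacity=4):
        self.cap = capacity
        self.blocks = OrderedDict()        # block number -> dirty flag, LRU order
        self.disk_reads = self.disk_writes = 0

    def access(self, blockno, write=False):
        if blockno in self.blocks:
            dirty = self.blocks.pop(blockno)
        else:
            self.disk_reads += 1           # miss: block must come from disk
                                           # (the solution assumes even writes read first)
            dirty = False
            if len(self.blocks) == self.cap:
                _, victim_dirty = self.blocks.popitem(last=False)
                if victim_dirty:
                    self.disk_writes += 1  # evicting a dirty block writes it back
        self.blocks[blockno] = dirty or write

    def sync(self):
        for blockno, dirty in self.blocks.items():
            if dirty:
                self.disk_writes += 1
            self.blocks[blockno] = False

# Example: cache = BufferCache(); cache.access(1); cache.access(10); ...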

4. (Disk Scheduling, 15 points)

(a) (5 points) In the last problem, the file system implementation only interacts with the buffer cache, and it is the buffer cache that actually reads from or writes to the disk. Using this scheme, when can the disk scheduling algorithm make a difference in system performance? Does it make any difference for this particular trace?

The only place a disk scheduling algorithm can make a difference is when there are multiple read and write requests available at once. This could happen under high load to the file system, where potentially many processes are accessing the file system through the shared buffer cache. The only way it could arise in the prior problem is due to sync: multiple dirty blocks could be in the cache, and they could be reordered to improve throughput. However, since we only ever write one block to disk at a time in the prior problem, this never comes up.

(b) (10 points) Assume you have a disk with 100 blocks (numbered 0-99), and the operating system's queue to read or write from the device contains the blocks 5, 50, 25, 95, 10. If the disk head is currently at block 45, what is the total seek distance when using

i. Shortest Seek Time First (SSTF)?
The blocks are serviced in the order 50, 25, 10, 5, 95, and since the head begins at 45 the total seek distance is 5 + 25 + 15 + 5 + 90 = 140.

ii. First-Come First-Served (FCFS)?
The blocks are serviced in the order 5, 50, 25, 95, 10, and since the head begins at 45 the total seek distance is 40 + 45 + 25 + 70 + 85 = 265.
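A small Python sketch (not part of the exam; function names are illustrative) that reproduces the two seek-distance totals from part (b).

QUEUE = [5, 50, 25, 95, 10]
START = 45

def fcfs(queue, head):
    total = 0
    for block in queue:                       # service requests in arrival order
        total += abs(block - head)
        head = block
    return total

def sstf(queue, head):
    pending, total = list(queue), 0
    while pending:                            # always service the closest pending block
        nearest = min(pending, key=lambda b: abs(b - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

print(fcfs(QUEUE, START))   # 265
print(sstf(QUEUE, START))   # 140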

5. (Short Answer, 25 points)

(a) (5 points) What is a virtual filesystem (VFS)? Why is it useful?

A virtual file system is an abstraction of a file system's implementation from its interface, as viewed by the rest of the operating system. System calls interact with the VFS interface, rather than a particular implementation. This makes the implementation of file systems more modular, so that it's easier to add support for new file systems, and easier to port and maintain existing ones.

(b) (10 points) Say I have a paged memory system that uses a hash-table-based page table and a TLB, where the time ε to access the TLB is 0.001 memory references, and page table lookups take, on average, 2 memory references (not including the actual read to the requested address). Say memory references hit the TLB 70% of the time, on average. If as a hardware designer I had the choice to increase the size of the TLB to improve the hit rate to 80%, or to use a smarter hashing algorithm to reduce the average page table lookup time to 1.8 references, which choice should I make if I want to improve the effective access time? Justify your answer.

We can justify our answer by calculating the effective access time (EAT) in each case. For the modified page table:

EAT_pt = 0.7 (0.001 + 1) + 0.3 (0.001 + 1.8 + 1) = 1.541

Here, 0.7 is the frequency with which requests hit the TLB; the cost of accessing the TLB (0.001) is borne whether we hit the TLB or not. The 1 appearing in both terms is the time to read the actual memory, while the 1.8 is the time to access the page table. For the modified TLB:

EAT_tlb = 0.8 (0.001 + 1) + 0.2 (0.001 + 2 + 1) = 1.401

Since the latter is smaller, enlarging the TLB is the preferred choice. (A short sketch of this calculation appears at the end of this question.)

(c) (5 points) Can virtual memory be implemented using segmentation? Why or why not?

Yes. Rather than swapping pages to and from disk, we would swap segments. The segment table would need valid/present bits, so that when a segment is swapped out, its valid bit would be set to false. The ensuing segmentation fault would result in the OS swapping the segment back in. This could be difficult or inefficient if segments are very large in practice.

(d) (5 points) Would it make sense to implement a file system for a tape drive using techniques we have discussed in class (e.g., that one uses to implement FAT, UFS, extfs, etc.)? Why or why not?

No. The performance would be abysmal. The main problems are that tapes are bad at random access, since they must wind or unwind to get to a specific spot, and that a write effectively truncates everything after it on the tape. This would result in bad seek times for reads, and massive amounts of recopying if files were ever written after they were created.
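Referring back to part (b), here is a minimal Python sketch (not part of the exam) of the effective-access-time comparison, in units of memory references.

def eat(hit_rate, tlb_cost, pt_cost, mem_cost=1.0):
    # The TLB is consulted on every reference; the page table only on a miss.
    hit_time  = tlb_cost + mem_cost
    miss_time = tlb_cost + pt_cost + mem_cost
    return hit_rate * hit_time + (1 - hit_rate) * miss_time

eat_smarter_hash = eat(hit_rate=0.7, tlb_cost=0.001, pt_cost=1.8)  # 1.541
eat_bigger_tlb   = eat(hit_rate=0.8, tlb_cost=0.001, pt_cost=2.0)  # 1.401

print(eat_smarter_hash, eat_bigger_tlb)   # the larger TLB gives the smaller EAT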