Operating Systems Comprehensive Exam. There are five questions on this exam. Please answer any four questions.


ID: ________    1 ____  2 ____  3 ____  4 ____  5 ____  Total ____

1) The following questions pertain to McKusick et al.'s "A Fast File System for UNIX" paper. To get credit, you must answer all of them correctly.

a. In 10 words or less, state the primary design principle promoted by this paper.
b. Identify the two primary techniques used to promote reliability, and explain how they are used.
c. What is the purpose of the bit map, and why was it chosen over a free list?
d. Identify the two primary techniques used to promote performance, and explain how they are used.
e. Identify the two primary sources of overhead, and the difficulties in trying to remove them.

2) A local startup is designing a new operating system and has come to you for help with their scheduler. They have implemented a sophisticated preemptive priority scheduler, but it doesn't seem to be performing optimally for the set of jobs they are running. The table below shows the five jobs in their test suite (you can think of them as processes if you like). Each job arrives in the system at a different time and takes a specified amount of CPU time to complete. The jobs are completely independent, i.e., they are not synchronized, they do not block, and no job requires the completion of any other to make forward progress.

    Name  Arrival Time  Duration  Priority
    A     0             2         0
    B     1             5         2
    C     2             4         1
    D     3             1         3
    E     4             2         4

The scheduler uses a time quantum of one (meaning each job will run for at least one unit of time before being context switched), starts at time 0, and runs until all jobs are completed. The scheduler runs equal-priority jobs in round-robin order. There are no other jobs in the system, and higher priorities are more important.

a. List the starting and ending time of each job under the current scheduler. For example, job A will start at time 0, and the last job to complete will finish at time 14.

In an effort to improve performance, the CEO has asked the programming team to try out a number of different scheduling algorithms. Rather than replace the current scheduler, however, they've heard a rumor that it is possible to reuse the current one and are hoping you can help. Your task is to replace the current job priorities with new positive integer priorities so that the current preemptive priority scheduler schedules the jobs exactly as the prescribed algorithm would. Assume that you assign the priorities before the system starts running. For each specified algorithm, list the priorities you would use, as well as the new starting and ending time for each job.

b. Round-robin.
c. First in, first out (FIFO).
d. Shortest job first (SJF) (not SRTF).
e. If the company is concerned with minimizing the average completion time (T_finish - T_arrival) and is willing to ignore the initial priority assignments, which scheduling algorithm should they use?
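The scheduler's behavior on this workload can be checked mechanically. The following C sketch is an assumption about how the described scheduler behaves, not the startup's actual code: it simulates a preemptive priority scheduler with a quantum of one, approximating round-robin among equal priorities by running the least recently run job (the test suite's priorities are all distinct, so the tie-break never fires).

```c
#include <assert.h>

/* Hypothetical sketch of the scheduler described above: preemptive
 * priority, quantum of one, higher number = more important. */
struct job {
    char name;
    int arrival, duration, priority;
    int remaining, start, finish, last_run;
};

static void simulate(struct job *jobs, int n)
{
    int done = 0, t = 0;

    for (int i = 0; i < n; i++) {
        jobs[i].remaining = jobs[i].duration;
        jobs[i].start = jobs[i].finish = jobs[i].last_run = -1;
    }
    while (done < n) {
        int pick = -1;

        /* Choose the ready job with the highest priority; among equals,
         * the one that ran longest ago (round-robin with quantum 1). */
        for (int i = 0; i < n; i++) {
            if (jobs[i].remaining <= 0 || jobs[i].arrival > t)
                continue;
            if (pick < 0 || jobs[i].priority > jobs[pick].priority ||
                (jobs[i].priority == jobs[pick].priority &&
                 jobs[i].last_run < jobs[pick].last_run))
                pick = i;
        }
        if (pick < 0) {        /* CPU idle until the next arrival */
            t++;
            continue;
        }
        if (jobs[pick].start < 0)
            jobs[pick].start = t;
        t++;                   /* run exactly one quantum */
        jobs[pick].last_run = t;
        if (--jobs[pick].remaining == 0) {
            jobs[pick].finish = t;
            done++;
        }
    }
}
```

Running this on the table above reproduces the facts stated in part (a): job A starts at time 0, and the last job to complete finishes at time 14.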

3) A problem on concurrency.

a. An operating system routine uses a simple binary search table that supports two concurrent asynchronous processes: Search and Update. Search repeatedly looks to see whether various keys are in the table, and Update repeatedly inserts or removes various keys in the table. Assume the table, which has many slots, initially contains three keys B, C, and D in slots 1, 2, and 3 respectively. The Search process looking for B starts by doing a READ of the middle occupied slot, slot 2; finding that C is greater than B, the Search process then does a READ of slot 1, where the key B is found. Suppose the Update process wishes to add a new key A concurrently. This process puts A in slot 1, B in slot 2, C in slot 3, and D in slot 4. We need to ensure that, despite the concurrent operation of the Update process, the Search process returns consistent (though possibly stale) results. We assume that each READ or WRITE of a memory slot is atomic, and that the system provides explicit Read and Write locks.

The standard solution to this concurrency problem is to use two copies of the table and atomically move a pointer to the updated copy. Describe briefly the sequence of memory READs and WRITEs required in the above example to do the Update, and describe any locking needed to swing the pointer to the updated table.

b. Observing that memory is expensive, Hugh Hopeful wishes to avoid the cost of a second copy. To allow the Update process to access the same copy as the Search process, Hugh introduces the clever idea of duplicates. Hugh reasons that it cannot hurt binary search to have duplicate copies of any key. Thus, to update the binary search table above, Hugh proposes that the Update process add an extra copy of D in slot 4 to create B, C, D, D. Then slot 3 is overwritten with C to create B, C, C, D. Next, slot 2 is overwritten with B to create B, B, C, D. Finally, slot 1 is overwritten with A to create A, B, C, D. Thus Hugh is creating a "hole" in slots 4, 3, 2, and finally 1 by using duplicates. Hugh's proposal uses no locks.

While Hugh's reasoning is sound in a centralized model, it does not work in a concurrent environment. Show an example where the Search process returns an incorrect value (e.g., does not find a key that actually is in the table) because of a race condition between the Search and Update processes.
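The two-copy scheme referred to in part (a) can be sketched in C. This is an illustrative single-writer sketch under assumed names, not the required answer: Update builds the new version entirely in a shadow copy (the WRITEs), then publishes it with a single pointer WRITE; each Search takes one READ of the pointer and binary-searches whichever copy it obtained. On real hardware the pointer store and load would need release/acquire ordering, and recycling the old copy for a later update requires knowing no Search still holds it (e.g., via the Read and Write locks the question mentions).

```c
#include <assert.h>

/* Illustrative sketch, assuming a single Updater: publish a rebuilt table
 * with one pointer WRITE so Searches see either the old version or the
 * new one, never a mix. */
#define MAX_KEYS 8

struct table {
    int n;
    char keys[MAX_KEYS];
};

static struct table copies[2] = { { 3, { 'B', 'C', 'D' } } };
static struct table *cur = &copies[0];   /* the pointer Searches READ */

static int search(char key)
{
    struct table *t = cur;               /* one READ of the pointer */
    int lo = 0, hi = t->n - 1;

    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        if (t->keys[mid] == key)
            return 1;
        if (t->keys[mid] < key)
            lo = mid + 1;
        else
            hi = mid - 1;
    }
    return 0;
}

static void update_insert(char key)
{
    struct table *old = cur;
    struct table *shadow = (old == &copies[0]) ? &copies[1] : &copies[0];
    int j = 0, placed = 0;

    /* WRITEs: build the new sorted version entirely in the shadow copy. */
    for (int i = 0; i < old->n; i++) {
        if (!placed && key < old->keys[i]) {
            shadow->keys[j++] = key;
            placed = 1;
        }
        shadow->keys[j++] = old->keys[i];
    }
    if (!placed)
        shadow->keys[j++] = key;
    shadow->n = j;

    cur = shadow;   /* single pointer WRITE publishes the new table */
}
```

For the example in the question, inserting A performs four key WRITEs into the shadow copy followed by the one pointer WRITE; a concurrent Search sees either B, C, D or A, B, C, D, never an intermediate state.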

4) Many processors provide intrinsic hardware support for operations that read, modify, and write one or more memory locations atomically (e.g., test-and-set, compare-and-exchange). These operations are, in turn, used for constructing higher-level synchronization primitives such as mutexes or semaphores. However, some architectures do not provide support for complex atomic operations. In other cases, the runtime overhead is too high to be efficient. In both cases, the operating system emulates them instead.

A classic way to ensure such atomicity on a uniprocessor is to disable interrupts for the duration of the instruction sequence that must be atomic. For example, consider the compare-and-exchange operation, which atomically compares the value stored at a given address with a value (v1) and, if they are equal, stores another value (v2) at the same address. The operating system can emulate this by disabling interrupts as follows:

    int cmpxchg(int *addr, int v1, int v2)
    {
        int ret = 0;
        DISABLE_INTERRUPTS();
        if (*addr == v1) {
            *addr = v2;
            ret = 1;
        }
        ENABLE_INTERRUPTS();
        return ret;
    }

In this question we will consider an alternative implementation technique, called a Restartable Atomic Sequence (RAS). It works as follows: the instructions that are part of the RAS are registered with the kernel. When an interrupt happens, the PC of the interrupted thread is saved as usual. However, if the kernel does a context switch to another thread AND the saved PC was within a registered RAS region, then the saved PC is "rolled back" to the beginning of the RAS. This way, when the interrupted thread is rescheduled, the sequence of instructions will be re-executed from the beginning... hence the term restartable. Here is an implementation of compare-and-exchange using RAS:

    int cmpxchg(int *addr, int v1, int v2)
    {
        int ret = 0;
        BEGIN_RAS
        if (*addr == v1) {
            *addr = v2;
            END_RAS
            ret = 1;
        }
        return ret;
    }

a) Is this implementation of cmpxchg guaranteed to complete atomically on a uniprocessor? Why or why not? Be specific.
b) Is this implementation of cmpxchg guaranteed to complete atomically on a shared-memory multiprocessor? Why or why not? Be specific.
c) RAS is an optimistic technique and can be considerably faster than other approaches (including intrinsic hardware support). Explain what makes it so fast and why it is an optimistic technique.
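To make concrete the claim at the start of this question that higher-level primitives are built from compare-and-exchange, here is a minimal spinlock sketched on top of it. The cmpxchg body below is a plain-C stand-in so the sketch is self-contained and runnable; a real implementation must be atomic, via a hardware instruction, interrupt disabling, or a RAS as in this question.

```c
#include <assert.h>

/* Stand-in for the atomic compare-and-exchange from the question. NOT
 * actually atomic as written; a real version uses hardware support,
 * interrupt disabling, or a RAS. */
static int cmpxchg(int *addr, int v1, int v2)
{
    int ret = 0;

    if (*addr == v1) {
        *addr = v2;
        ret = 1;
    }
    return ret;
}

/* A minimal spinlock layered on cmpxchg: acquire atomically flips the
 * lock word from 0 to 1; anyone who finds it already 1 busy-waits. */
struct spinlock {
    int held;
};

static void spin_lock(struct spinlock *l)
{
    while (!cmpxchg(&l->held, 0, 1))
        ;   /* spin until we are the thread that flips 0 -> 1 */
}

static void spin_unlock(struct spinlock *l)
{
    l->held = 0;   /* a plain store releases the lock */
}
```

The entire mutual-exclusion guarantee rests on the 0-to-1 transition being atomic, which is exactly the property parts (a) and (b) ask you to evaluate for the RAS-based cmpxchg.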

5) The following questions pertain to David D. Clark's "The Structure of Systems Using Upcalls" paper.

a. What is an upcall? How does its use differ from a traditional layered system structure?
b. Consider a subsystem of an operating system that sends messages. Give at least one advantage of structuring this subsystem using upcalls as compared to using a layered system structure.
c. Using upcalls can lead to corruption of data if, during the execution of an upcall, a downcall is made that changes the data of the module. The paper gives five methods of avoiding such corruption. Give two of them.
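As a concrete, hypothetical illustration of the structure part (a) asks about (this sketch is not from Clark's paper): in a strictly layered system, the upper layer pulls data up by making a downcall into the layer below; with upcalls, the upper layer registers a handler, and the lower layer invokes it directly, calling "up" across the layer boundary when data arrives.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sketch of an upcall: the layer above registers a handler,
 * and the layer below invokes it when an event occurs, rather than
 * queueing data for the upper layer to fetch with a later downcall. */
typedef void (*upcall_fn)(const char *msg);

static upcall_fn deliver;   /* handler registered by the layer above */

static void register_upcall(upcall_fn fn)
{
    deliver = fn;
}

/* Invoked from below (e.g., the network receive path): the lower layer
 * calls "up" across the layer boundary synchronously. */
static void packet_arrived(const char *msg)
{
    if (deliver)
        deliver(msg);
}
```

The hazard part (c) describes arises exactly here: if the registered handler turns around and makes a downcall that mutates the lower layer's state mid-upcall, that state can be corrupted.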