CS-537: Midterm Exam (Spring 2001)

Please Read All Questions Carefully! There are seven (7) total numbered pages.

Name:

Grading Page

                                        Points    Total Possible
    Part I: Short Answers (12 x 5)                      60
    Part II: Long Answers (2 x 20)                      40
    Total                                              100

Part I: Short Questions

The following questions require short answers. Each of the 12 is worth 5 points (60 total).

1. Which of the following are more like policies, and which are more like mechanisms? For each answer, circle policy or mechanism.

   (a) The timer interrupt
   (b) How long a time quantum should be
   (c) Saving the register state of a process
   (d) Continuing to run the current process when a disk I/O interrupt occurs

   The interrupt and register-state saving are mechanisms (the "how"), whereas the quantum length and the decision to run a particular process are clearly policies.

2. For a workload consisting of ten CPU-bound jobs of all the same length (each would run for 10 seconds in a dedicated environment), which policy would result in the lowest average response time? Please circle ONE answer.

   (a) Round-robin with a 100 millisecond quantum
   (b) Shortest Job First
   (c) Shortest Time to Completion First
   (d) Round-robin with a 1000 nanosecond quantum

   Response time is the time from the demand for service until the first service. Shortest Job First and Shortest Time to Completion First will make some processes (long ones) wait arbitrarily long, so those can be ruled out. Thus, it is clearly one of the round-robin options, and the one with the shorter time quantum wins (1000 ns is much less than 100 ms).

3. Assume we divide the heap into fixed-sized units of size S. Also assume that we never get a request for a unit larger than S. Thus, for any request you receive, you will return a single block of size S, if you have any free blocks. What kind of fragmentation can occur in this situation, and why is this bad?

   Internal fragmentation may occur. This is bad simply because it potentially wastes space, e.g., if all requests are for 1 byte, S-1 bytes will be wasted per request. It is called internal fragmentation because the waste is internal to the block that has been allocated.

4. Name and describe two examples of where we saw the OS needed support from the hardware in order to implement some needed functionality (hint: think about process management and memory management).

   Many possible answers: base and bounds registers for memory management support, the test-and-set instruction for synchronization, and the timer interrupt to regain control of the CPU. (A test-and-set spinlock sketch appears after question 5.)

5. What is a cooperative approach to scheduling, and why is it potentially a bad idea?

   Cooperative scheduling assumes that processes will voluntarily give up the CPU. It is potentially bad because a buggy or malicious process can commandeer the CPU and never relinquish it.
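As an aside on question 4: below is a minimal sketch, not part of the original exam, of how a test-and-set style instruction can be used to build a spinlock. It uses GCC's __sync_lock_test_and_set and __sync_lock_release builtins as stand-ins for the hardware instruction, and the function names (spin_lock, spin_unlock) are illustrative.

    #include <stdio.h>

    static volatile int lock_flag = 0;   /* 0 = free, 1 = held */
    static int counter = 0;              /* shared data protected by the lock */

    /* Spin until test-and-set returns the old value 0, i.e. until we are the
     * caller that atomically changed the flag from 0 (free) to 1 (held). */
    static void spin_lock(volatile int *flag) {
        while (__sync_lock_test_and_set(flag, 1) == 1)
            ;   /* someone else holds the lock: keep spinning */
    }

    static void spin_unlock(volatile int *flag) {
        __sync_lock_release(flag);       /* atomically store 0 (release) */
    }

    int main(void) {
        spin_lock(&lock_flag);
        counter++;                       /* critical section */
        spin_unlock(&lock_flag);
        printf("counter = %d\n", counter);
        return 0;
    }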

6. Assume we run the following code snippet. After waiting for a long time, how many processes will be running on the machine, ignoring all other processes except those involved with this code snippet? You can assume that fork() never fails. Feel free to add a short explanation to your answer.

   void runme() {
       for (int i = 0; i < 23; i++) {
           int rc = fork();
           if (rc == 0) {
               while (1)
                   ;
           } else {
               while (1)
                   ;
           }
       }
   }

   Number of processes running: Two processes. The parent enters the code, forks a child, and then spins eternally. The child gets created and starts spinning too. Thus, we have parent and child, each spinning forever (well, until you hit control-C).

7. Assume the following code snippet, where we have two semaphores, mutex and signal:

   Thread 1:                      Thread 2:
       mutex.P();                     mutex.P();
       if (x > 0)                     x++;
           signal.V();                signal.V();
       mutex.V();                     mutex.V();
       signal.P();

   We want mutex to provide mutual exclusion among the two threads, and for signal to provide a way for thread 2 to activate thread 1 when x is greater than 0. What should the initial values of each of the two semaphores be? (Assume that x is always positive or zero, and that there are only these two threads in the system.)

   Value of mutex: 1
   Value of signal: 0

   Mutex must be 1 to provide mutual exclusion. Signal must be 0 to ensure the proper signalling occurs (that is, thread 1 will only pass through the signal.P() if signal has been incremented first via a V(), either by itself or by thread 2). (A POSIX-semaphore version of this setup appears after question 8.)

8. Assume we manage a heap in a best-fit manner. What does best-fit management try to do? What kind of fragmentation can occur in this situation, and why is this bad?

   Best-fit management finds the smallest block that will fulfill the current request (equal to or greater than it). Organizing memory in such a manner can lead to external fragmentation. This is bad because a request that asks for S bytes of memory may be denied even when more than S bytes are free, because the free bytes are scattered throughout memory in pieces smaller than S.
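As a companion to question 7 (not part of the original exam), here is a minimal sketch of the same initial values expressed with POSIX semaphores; sem_init's third argument is the initial value, and the thread bodies and the name sig are illustrative stand-ins for the exam's pseudocode.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t mutex;   /* initialized to 1: provides mutual exclusion */
    static sem_t sig;     /* initialized to 0: thread 1 blocks until a V() */
    static int x = 0;

    static void *thread1(void *arg) {
        sem_wait(&mutex);       /* mutex.P() */
        if (x > 0)
            sem_post(&sig);     /* signal.V(): x is already positive */
        sem_post(&mutex);       /* mutex.V() */
        sem_wait(&sig);         /* signal.P(): wait to be activated */
        printf("thread 1 activated, x = %d\n", x);
        return NULL;
    }

    static void *thread2(void *arg) {
        sem_wait(&mutex);       /* mutex.P() */
        x++;
        sem_post(&sig);         /* signal.V() */
        sem_post(&mutex);       /* mutex.V() */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        sem_init(&mutex, 0, 1); /* value of mutex: 1 */
        sem_init(&sig, 0, 0);   /* value of signal: 0 */
        pthread_create(&t1, NULL, thread1, NULL);
        pthread_create(&t2, NULL, thread2, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }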

9. Which of the following will NOT guarantee that deadlock is avoided? Please circle all that apply.

   (a) Acquire all resources (locks) all at once, atomically
   (b) Use locks sparingly
   (c) Acquire resources (locks) in a fixed order
   (d) Be willing to release a held lock if another lock you want is held, and then try the whole thing over again

   Using locks sparingly does not do anything for you, and can still lead to deadlock.

10. Why is stack-based memory management usually much faster than heap-based memory management? If it's so much faster, why do we ever use the heap?

    The stack's advantage is simplicity: allocation is just a pointer increment, and deallocation is a decrement. Simplicity implies speed in this case. We use the heap for flexibility, e.g., for data structures that are not allocated/deallocated in a stack discipline.

11. For a workload consisting of ten CPU-bound jobs of varying lengths (half run for 1 second, and the other half for ten seconds), which policy would result in the lowest total run time for the entire workload? Please circle all that apply.

    (a) Shortest Job First
    (b) Shortest Time to Completion First
    (c) Round-robin with a 100 millisecond quantum
    (d) Multi-level Feedback Queue

    Total time is not altered by scheduling policy, and thus all policies are equivalent. You could argue that SJF is the fastest because of a lack of context switches, but then you would have to know the parameters of the MLFQ, which were not specified...

12. Assume we have a system that performs dynamic relocation of processes in physical memory. A process that has just started needs 2000 total bytes of memory in its address space. The OS currently has two free regions: the first between physical addresses 1000 and 2000, and the second between physical addresses 5000 and 7000. What value would end up in the base register for this process, and what value would be in the bounds register? (Write down any assumptions that you make about the bounds register.)

    Value of base: 5000
    Value of bounds: 2000 if the bounds register holds the size of the address space, or 7000 if it holds the last valid physical address. (A small translation sketch appears after this question.)
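To make question 12 concrete (this sketch is not part of the original exam), here is the check a base-and-bounds translation performs, assuming the bounds register holds the size of the address space; the constants and the translate() name are illustrative.

    #include <stdio.h>
    #include <stdlib.h>

    #define BASE   5000u   /* start of the 2000-byte free region at 5000 */
    #define BOUNDS 2000u   /* assumed to hold the size of the address space */

    /* Translate a virtual address to a physical one, faulting if out of bounds. */
    static unsigned translate(unsigned vaddr) {
        if (vaddr >= BOUNDS) {
            fprintf(stderr, "protection fault: vaddr %u >= bounds %u\n", vaddr, BOUNDS);
            exit(1);
        }
        return BASE + vaddr;
    }

    int main(void) {
        printf("vaddr 0    -> paddr %u\n", translate(0));      /* 5000 */
        printf("vaddr 1999 -> paddr %u\n", translate(1999));   /* 6999 */
        translate(2000);                                        /* faults */
        return 0;
    }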

Part II: Longer Questions

The second half of the exam consists of two longer questions, each worth 20 points (total 40).

1. Monit-or Not, Here I Come

   You are stuck using some language that only provides monitors for mutual exclusion. However, you don't really like monitors all that much, and decide to implement a new class that looks a lot more like the binary semaphore we all know and love. The monitor class looks something like this:

   monitor class BinarySemaphore {
       // your class variables go here

       void P() {
           // code for P() goes here
       }

       void V() {
           // code for V() goes here
       }
   }

   You have at your disposal a condition class, which has three methods: signal(), broadcast(), and wait(), which all function as you would expect given our class discussion. You are also expected to use Hoare semantics.

   a) What variables would you declare as part of this monitor class, and what would you initialize them to?

      int count = 1;
      condition c = new condition();

   b) Write the pseudo-code for your monitor implementation of P() here.

      if (count == 0)
          c.wait();
      count = 0;

   c) Write the pseudo-code for your monitor implementation of V() here.

      count = 1;
      c.signal();

   d) Assume you had to use Mesa semantics instead. What would have to change in your code? If you didn't change the code, what bad thing could happen?

      The "if" in the P() code should be a "while". If this change isn't made, the code will not guarantee mutual exclusion (think about another process entering P() just as a waiting process is woken). (A pthreads rendering of the Mesa version appears after this question.)
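As a companion to part d) (not part of the original exam), here is a minimal sketch of the same binary semaphore built with POSIX threads, where a pthread mutex plus condition variable stands in for the monitor and the while loop reflects Mesa semantics; the type and function names are illustrative.

    #include <pthread.h>

    typedef struct {
        pthread_mutex_t lock;   /* plays the role of the monitor's implicit lock */
        pthread_cond_t  cond;   /* the condition variable c */
        int count;              /* 1 = semaphore available, 0 = taken */
    } binary_semaphore_t;

    void bsem_init(binary_semaphore_t *s) {
        pthread_mutex_init(&s->lock, NULL);
        pthread_cond_init(&s->cond, NULL);
        s->count = 1;
    }

    void bsem_P(binary_semaphore_t *s) {
        pthread_mutex_lock(&s->lock);
        while (s->count == 0)                      /* while, not if: Mesa semantics */
            pthread_cond_wait(&s->cond, &s->lock);
        s->count = 0;
        pthread_mutex_unlock(&s->lock);
    }

    void bsem_V(binary_semaphore_t *s) {
        pthread_mutex_lock(&s->lock);
        s->count = 1;
        pthread_cond_signal(&s->cond);
        pthread_mutex_unlock(&s->lock);
    }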

2. Race to the Finish

   Assume we are in an environment with many threads running. Take the following C code snippet:

   int z = 0; // global variable, shared among threads

   void update(int x, int y) {
       z += x + y;
   }

   Assume that threads may all be calling update() with different values for x and y.

   a) Write assembly code that implements the function update(). Assume you have three instructions at your disposal: (1) load [address], Rdest, (2) add Rdest, Rsrc1, Rsrc2, and (3) store Rsrc, [address]. Also, feel free to assume that when update() is called, the value of x is already in R1, and the value of y is in R2.

      add r2, r1, r2    // combine x and y
      load z, r3        // load z
      add r2, r2, r3    // combine x, y, and z
      store r2, z       // store new value of z

   b) Because this code is not guarded with a lock or other synchronization primitive, a race condition could occur. Describe what this means.

      A race condition occurs when two processes race to update a shared variable (in this case, z). It is a race because the outcome is indeterminate; it depends on how the scheduler interleaves the two threads. (A locked version of update() is sketched after this question.)

   c) Now, label places in the assembly code where a timer interrupt and switch to another thread could result in such a race condition occurring.

      add r2, r1, r2    // combine x and y
      load z, r3        // load z               (SWITCH HERE IS A PROBLEM)
      add r2, r2, r3    // combine x, y, and z  (SWITCH HERE IS A PROBLEM)
      store r2, z       // store new value of z

      Basically, once z is loaded, and before it is stored, we are in danger.

   d) Now, assume we change the C code as follows:

      void update(int x, int y) {
          z = x + y; // note we just set z equal to x+y (not additive)
      }

      If two threads call update() at nearly the same time, the first like this: update(3,4), and the second like this: update(10,20), what are the possible outcomes? If we place a lock around the routine (e.g., before setting z = x + y, we acquire a lock, and after, we release it), does this change the behavior of this snippet?

      No problem here, because z is not incremented, simply over-written. Thus, whether there is mutual exclusion or not, the value of z will either be 7 if thread 1 went last, or 30 if thread 2 went last.
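For completeness (not part of the original exam), here is a minimal sketch of how the racy z += x + y from parts a)-c) could be guarded with a pthread mutex so the load/add/store sequence cannot be interleaved with another caller's; the lock and worker names are illustrative.

    #include <pthread.h>
    #include <stdio.h>

    static int z = 0;                                   /* shared among threads */
    static pthread_mutex_t z_lock = PTHREAD_MUTEX_INITIALIZER;

    void update(int x, int y) {
        pthread_mutex_lock(&z_lock);    /* another thread calling update() blocks here */
        z += x + y;                     /* load z, add, store: now done under the lock */
        pthread_mutex_unlock(&z_lock);
    }

    static void *worker(void *arg) {
        update(3, 4);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("z = %d\n", z);          /* always 14 with the lock in place */
        return 0;
    }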