CPS 310 first midterm exam, 10/6/2014


Part 1. More fun with fork and exec*

[Grading: Q1 /30, Q2 /30, Q3 /40, Q4 /60, Q5 /40; total /200.]

What is the output generated by this program? Please assume that each executed print statement completes, e.g., assume that each print is followed by an fflush(stdout). Be sure to consider whether the output is uniquely defined. You should presume that no errors occur. [30 points]

    main(int argc, char *argv[])
    {
        printf("0");
        fork();
        printf("1");
        execvp(argv[0], argv);
        printf("2");
    }

011. An unbounded sequence of 0s and 1s with twice as many 1s as 0s (in the limit). After the first "01" the exact order is undefined.

Optional: explain. I will consider your explanation for partial credit if you need it.

The parent prints 0. Fork creates a child. Both parent and child print 1. Both parent and child exec the same program again (argv[0]). [5 points] Exec* does not return if there is no error, so 2 is never printed. [5 points]

Note: for the remaining answers that have key words or phrases in bold, those words were generally sufficient for full credit. (I also used bold for identifiers, but those answers required more explanation.)

CPS 310 first midterm exam, 10/6/2014, page 2 of 5

Part 2. Ping Pong

These questions deal with the ping pong example discussed in class. PingPong is implemented in Java roughly like this:

    synchronized void PingPong() {
        while (true) {
            computesomething();
            notify();
            wait();
        }
    }

(a) Suppose that multiple threads call this PingPong method on the same object concurrently. Describe the resulting behavior. I am looking for 1-3 sentences summarizing any constraints on the execution ordering. [15 points]

The threads execute computesomething serially (one after the other) in some undefined order, forever.

Optional explanation: Since the method is synchronized, a monitor is held, and so at most one thread can execute in the method at a time. Each thread enters the method and holds the monitor until it calls wait. Each wait blocks the caller and releases the monitor, allowing another thread to execute in the method. Before calling wait, a thread calls notify to wake up one waiting thread, if there is one. Upon resuming in wait after a wakeup (resulting from a notify by some other thread), a thread reacquires the monitor, returns from wait, and executes another iteration of the loop before waiting again.

(b) Does it matter if we change the order of the statements in the loop? What if computesomething() is between the notify and the wait, or after the wait? What if we change the order of the notify and wait? [15 points]

It doesn't matter where the computesomething call is: it might affect the order in which the threads compute, but the order is undefined anyway. [10] If the wait is before the notify, then all threads wait before any thread calls notify, and so all threads wait forever. [5]

A surprisingly large number of answers seemed to suggest that calling notify at the wrong time implicitly yields the monitor, or somehow relaxes the mutual exclusion of the monitor. It doesn't! Only wait releases the monitor, but it reacquires it before returning, and so the monitor/lock is always held during computesomething().

CPS 310 first midterm exam, 10/6/2014, page 3 of 5

Part 3. Servers and threads and events

Answer each of these short questions using a word, or a phrase, or a sentence, or maybe two. [10 points each]

(a) It has been said that an event-driven design pattern is better than using multiple threads to structure a server. I have argued in class that it is best to use both models together. Why is it important for a server to have multiple threads, even in a system with full support for event-driven programming (e.g., with non-blocking system calls)?

Multi-core. Optional explanation: a fully event-driven server can execute multiple requests at the same time even without multiple threads, but each thread can use only a single core at a time. [See the blue slide on the event-driven server to see how multiple requests can be in different stages of processing at the same time.] Modern machines have multiple cores, and so they need multiple threads to use all of those cores. I gave a few points for "concurrently", and I gave the benefit of the doubt to "parallelism".

(b) In a ThreadPool implementation, how does a sleeping worker thread "know" that it has been selected to handle the next request or event? What wakes it up?

When an event/request/task arrives, it is placed on a queue and a worker thread is awakened, e.g., with a notify. When a thread holds the mutex/lock/monitor on the queue and extracts an event/request/task from the queue, it knows it was selected to handle it. I gave full credit for any answer that suggested an event queue with a monitor (mutex and condition variable).

(c) The main thread of an Android application runs an event-driven pattern. If the app has a task to perform that is long-running or that must block, the main thread can start an AsyncTask to do the work. In this case, how does the AsyncTask report its progress back to the main thread?

By posting a message/event on the main thread's event queue. The AsyncTask is an object with a thread. The thread may report progress by calling an AsyncTask method, which posts a progress update event on a designated queue. The main thread handles events one at a time from its queue: it receives the progress update at some time in the future, whenever it is ready.

(d) An atomic instruction performs an indivisible read-modify-write on memory: in particular, a thread can use an atomic instruction to set a lock variable safely, indicating that the lock is busy. In this case, how does a thread know if it lost the race and the lock is already busy? What should the thread do if the lock is busy?

The atomic instruction returns the old value of the lock variable, e.g., in a register. Spin or block. Atomic instructions such as test-and-set or compare-and-swap return the value of the read before writing to the target location, e.g., by leaving the target's old value in a register. If the old value shows the lock was already held, the thread has lost the race. It should spin or block and retry until it succeeds in setting the lock when the lock is free.

CPS 310 first midterm exam, 10/6/2014, page 4 of 5

Part 4. dsh

These questions pertain to your shell project code (dsh). If you don't remember the details of what your group did, then make something up that sounds good. Please confine your answers to the space provided. [12 points each]

(a) What would happen if you type the command dsh to your dsh, i.e., to run dsh under itself? Will it work? What could go wrong?

Sure, that'll work. What could go wrong? The shell runs commands, which are just programs. The shell itself is just a program like any other. Many students seemed convinced that this could not work, and came up with some creative and fun ways that it might fail (but they were generally nonsense). My favorite answer was "it works --- I did it by accident hundreds of times."

(b) Briefly summarize how your dsh implemented input/output redirection (<, >). What system calls did you call from the parent for this purpose? What system calls did you call from the child for this purpose?

open and dup* (e.g., dup2). In a proper shell these system calls are done in the child. The parent does no system calls for this purpose: the parent just forks the child, which sets up its own environment before exec*.

(c) For pipeline jobs like cat | cat | cat, what would happen if a child writes into a pipe before the next downstream child (the process that is supposed to read from that pipe) has started? Can this scenario ever occur?

Yes, it can occur because the order in which the children run is undefined. The bytes are stored in the pipe buffer until the reader is ready to retrieve them. If the writer fills the buffer, then it blocks until the reader starts reading. If no process holds the read side of the pipe, then the writes fail. But there is always such a process: the parent holds the pipe open until it has forked both children.

(d) For pipeline jobs like cat | cat | cat, what would happen if a child reads from a pipe before the next upstream child (the process that is supposed to write to that pipe) has started? Can this scenario ever occur?

Yes, it can occur because the order in which the children run is undefined. The reader blocks until the writer deposits some bytes in the pipe buffer. If no process holds the write side of the pipe, then the read returns an EOF. But there is always such a process: the parent forks the left child (the writer) before the reader, and in any case it holds the pipe open until it has forked both children.

(e) For pipeline jobs like cat | cat | cat, how does a middle child's setup differ from the processes at the ends of the pipeline? How does a process know that it is the middle child?

It has to dup the left-side pipe onto its stdin and the right-side pipe onto its stdout. The child does it before exec*: the parent cannot do it because the parent must preserve its own stdin and stdout (e.g., the tty). The child knows to do it because the child gets a snapshot of the parent's memory at the moment of the fork, including data structures that tell the child what to do (e.g., a list of process objects for the current job, and a pointer to a specific process in the list).

CPS 310 first midterm exam, 10/6/2014, page 5 of 5

Part 5. Maps

These questions pertain to a classic 32-bit virtual memory system, with linear page tables and 4KB pages. Answer each question with an ordered sequence of events. Feel free to draw on the back page: otherwise, I am looking for as much significant detail as you can fit in the space provided. [10 points each]

(a) Suppose that a running thread issues a load instruction on the following 32-bit virtual address: 0x00002014. Suppose that the address misses in the TLB. How does the hardware locate the page table entry for the page? Please show your math.

Break the VA into a Virtual Page Number (VPN) and a byte offset in the page. Index the page table with the VPN and examine the PTE. Conceptually this is just like indexing into any array. Extra smiles if you told me about the hierarchical page table structure, or that the page table to index is the one whose base is loaded into the core's Page Table Base Register. Few did.

This feels like it should be nasty math, but it is very easy. There are 4K bytes on each page: 4K = 4*1024 = 2^2 * 2^10 = 2^12. So the offset is the low-order 12 bits, or three hex digits (16 = 2^4, so 4 bits per hex digit): offset = 0x014. The high-order 20 bits (32-12 = 20) are the VPN, so VPN = 0x00002 = 2. If you are weak on power-of-two math (we both know who you are!), take the time to bone up and see how easy it is.

(b) Suppose further that the page table entry is "empty": it does not contain a valid translation. The hardware cannot complete the requested virtual memory reference. What does it do?

Fault and let the kernel handle it (parts c and d). The machine raises a fault to redirect control to the OS kernel. Many answers suggested this was an error, as in "segmentation fault". But it is just a miss in the page table, which is itself a cache.

(c) How does the operating system determine if the reference is legal, i.e., how does it determine whether or not the reference results from an error in the program?

Short full-credit answer: vm_map. The OS determines if the faulting address lies within some valid segment of the current virtual address space, and if the access is allowed for that segment (read, write, execute permission). For example, it might run down a linked list of segments (vm_map). If the access is not valid for some attached segment, THEN it is an error: e.g., a segmentation fault or protection fault.

(d) If the reference is legal, how does the operating system find the data for the page? Is it possible that the page already resides in memory? (Why or why not?) You may presume that the requested virtual page has never been modified.

If the segment is backed by a file, then find the block location on disk by indexing the inode block map, and read that block into a freshly allocated frame of machine memory. If the segment is anonymous (stack, heap), then zero-fill the frame. Install a page table entry to map the virtual page to the frame, and return from the fault. It is possible that the page already resides in memory. One example we discussed is a write (store instruction) to a copy-on-write page: the machine raises a fault because the page is write-protected. The OS copies it to a new frame, maps the virtual page to the new frame, and enables writes. There are other cases.

[Figure: Exam scores (ranked). Y-axis: score, 0-200; x-axis: student rank, 0-80.]

[Figure: Ranked scores on individual questions (normalized); one series per question, Q1N-Q5N.]

Here we see that surprisingly few students really understand the basics of VM (question 5), and many did not take the time to study the material on threads that was presented in the last two weeks before the exam (questions 2 and 3). People did much better on the questions pertaining to the Unix system call interface (shell lab).