CS 167 Final Exam Solutions


Spring 2016

Do all questions.

1. The implementation of thread_switch given in class is as follows:

    void thread_switch() {
        thread_t NextThread, OldCurrent;

        NextThread = dequeue(runqueue);
        OldCurrent = CurrentThread;
        CurrentThread = NextThread;
        swapcontext(&OldCurrent->context, &NextThread->context);
        // We're now in the new thread's context
    }

a. Suppose we don't have a swapcontext routine to call. Instead, we have to use getcontext and setcontext. Recall that getcontext saves all of the calling thread's registers (including the stack pointer, but not the instruction pointer) into its argument, a pointer to a data structure of type ucontext_t, and returns nothing. The routine setcontext restores the registers from its first (and only) argument (of type ucontext_t). Show how to implement thread_switch using getcontext and setcontext. If changes to the thread control block (of type thread_t) are required, indicate what they are.

    // the context component of thread_t has been replaced
    // with ucontext_t ctx;
    void thread_switch() {
        volatile int first = 1;

        getcontext(&CurrentThread->ctx);
        if (!first) {
            // the thread has just been switched to
            return;
        }
        first = 0;
        // we've saved the context of CurrentThread; now get into
        // the context of the next thread
        CurrentThread = dequeue(runqueue);
        setcontext(&CurrentThread->ctx);
        // unreached!
    }

b. Suppose thread_switch takes an argument and this argument is referred to after setcontext returns. Explain what, if anything, must be done to ensure that what is referred to is what was passed in the most recent call to thread_switch.

Since the argument to thread_switch is stored on the stack, and the stack is switched to that of NextThread by setcontext, it is necessary to copy the argument someplace else so that it can be accessed after setcontext. One place to copy it is a field within the thread_t structure of NextThread.

2. Recall that in Unix, when a file whose set-user-ID bit is one is exec'd, the process's effective user ID and saved user ID become that of the owner of the file, while the real user ID is not modified. A program may set its effective user ID to be anything if it's currently superuser; otherwise it may set its effective user ID only to its real, effective, or saved user ID. Also, file-access checks are done using a process's effective UID. A set-uid program uses these features to check whether its caller has permission to access a given file, perhaps one passed as an argument. (It first sets its effective UID to that of the caller, opens the file, then sets its effective UID back to the saved UID.) A primary use of these features is to allow the owner of a file to keep others from directly accessing the file, but to provide a program (whose set-uid bit is set) that performs operations on the file on behalf of others, under the control of the file's owner.

a. This approach works well if the set-uid program is owned by superuser: such programs are effectively extensions to the kernel. However, it's not that useful if the file containing the program is owned by a normal (untrusted) user. Explain why not.

Suppose we are running a set-uid-to-Mary program, where Mary is the untrusted (for good reason) user. Even though Mary's program is supposed to access just the file whose name was passed via an argument, since the program may change its effective user ID to that of the caller, it may access everything the caller can. Mary's program might take full advantage of this for nefarious purposes.

b. We'd like to fix this: we want to make it possible for some user, say Mary, to create a program that others may run as set-uid-to-Mary, with none of the problems mentioned in the answer to part a. Explain how this might be done. Hint: certain functionality may have to be turned off, and we may need to adopt a convention for passing files as parameters to Mary's program.
We first disable the ability to change the effective UID while running the set-uid program. Rather than have the invoker of a set-uid program provide the names of files it would like accessed, it instead opens the files it wants accessed and passes the file descriptors as, for example, file descriptors 3, 4, and 5. Thus the file descriptors act as capabilities, providing to Mary's program just the capabilities the invoker wishes.

3. An operating system is said to be kernel-preemptible if threads may be preempted, even while running in kernel mode, and their processor switched to another thread. There may, of course, be certain code sequences during which such preemption is disabled. An operating system is said to be SMP-safe (symmetric-multiprocessor-safe) if it can run on a multiprocessor system in which any thread can run on any processor, whether in user mode or privileged mode, and any interrupt handler can run on any processor.

a. If an operating system is kernel-preemptible, is it necessarily SMP-safe? Explain why not.

On a non-MP kernel-preemptible system, one can make certain that two interrupt handlers don't simultaneously access the same data structure by relying on the interrupt mask (or interrupt priority level) to ensure that no more than one is accessing the data structure at once. But on an SMP system, even though interrupts may be masked on one processor, they are not necessarily masked on the others. Thus additional machinery is required to protect the data structure.

b. If an operating system is SMP-safe, is it necessarily kernel-preemptible? Explain why not.

Consider a processor-specific resource, such as a page, that is manipulated only in the context of a thread running in kernel mode and never in the interrupt context. Thus no

special considerations are required for protecting it on an SMP-safe system. However, the system could assume that threads are not kernel-preemptible, and thus not need any form of synchronization to protect the resource.

4. Striping is a technique in which files are spread across multiple disks. The striping unit is the block size on each disk; the first striping-unit-sized portion of a file is written to the first disk, the second to the second disk, and so forth. An obvious concern is the size of the striping unit. Assume that data is read from and written to a striped file system in some standard length L. Thus the smallest possible striping unit is the size of a disk sector; the largest useful striping unit is L. You may assume that L is sufficiently large that the size of the striping unit is worth discussing.

a. Explain why it makes sense for the striping unit to be small (such as one sector in length) on systems in which just a single thread is generating file-system requests.

If the striping unit is small relative to L, transfers to and from the disk array will utilize a number of disks and thus achieve the advantages of parallelism. Successive requests come from the same thread and are likely to be to consecutive locations within a file, so seek delays are minimal.

b. Explain why it makes sense for the striping unit to be large on busy servers (i.e., on systems in which many threads are generating file-system requests). (Hint: you might answer this by showing why a small striping unit is not good.)

On a busy server, successive file-system requests come from independent threads and are targeted at essentially randomly scattered disk locations. Thus each request generally requires a substantial seek delay. The number of such seek delays can be minimized by issuing large transfer requests.

c. Suppose we replace the disks with flash memory, i.e., we are striping files over a number of flash devices.
Would your answers to parts a and b be different? Explain.

Yes. Since there are no seek delays with flash memory, the only concern is the amount of parallelism. Thus small striping units would be best.

5. The x64 architecture supports both 4KB and 2MB pages, as shown below.

[Figure: x64 virtual-address formats. For 4KB pages, the address is divided at bits 47, 38, 29, 20, and 11: four 9-bit table indices (page map, page-directory pointer, page directory, page table) above a 12-bit offset within the physical page. For 2MB pages, the division is at bits 47, 38, 29, and 20: the lowest index is absorbed into a 21-bit page offset. Bits 63-48 are unused.]

a. A little-known Linux feature allows one (if the kernel is properly configured) to map a file into the address space using 2MB pages rather than 4KB pages. What would be the advantage of doing this? Why isn't it always done? (Be sure to consider, among other things, the implementation of fork on Unix and the typical use of fork.)

The primary advantage is that many fewer page-table and TLB entries would be required. This would mean less primary storage is required, since there would be no need for the lowest-level page tables, and, with a limited number of TLB entries, more of the address-space translation could be handled directly by the TLB, thus reducing the number of TLB misses. The disadvantage is that there would be greater wastage of memory due to internal fragmentation: the average wastage would be 1MB per mapped file with 2MB pages, but only 2KB per mapped file with 4KB pages. Also, Unix systems implement fork using copy-on-write techniques, in which the pages of the parent and child are shared until either process attempts to modify a page. At that point, a copy of the page is made for the modifying process. In its typical use, a fork is followed soon after by an exec, and thus not much of a child process's original address space is modified before it does an exec. In such cases, the use of 4KB pages would involve much less copying than the use of 2MB pages.

b. Suppose the x64 architecture supported 4KB and 4MB pages. Could the two different page sizes coexist in the same address space, as the 4KB and 2MB page sizes do in the actual architecture? Explain.

No (at least not without radical changes to the architecture). 4KB and 2MB pages can coexist in the same address space because they share the same formats for the page-map, page-directory-pointer, and page-directory tables. With 2MB pages, the page-directory entries point directly to pages, while with 4KB pages, the page-directory entries point to page tables, whose entries point to pages.
This is made possible because all the tables have 2^9 8-byte entries (each table thus occupying 4KB); it takes 9 bits of address to index within a table and 12 bits to index within a 4KB page. If we eliminate the last level of table, we have 9 + 12 = 21 bits to index within a 2^21-byte page. For such a scheme to work with 4KB and 4MB pages, we would have to have tables containing 2^10 entries each. If we have four classes of tables, as in the current x64 scheme, each with 2^10 entries, and the smaller page size is still 4KB, the maximum address-space size would be 2^52 rather than 2^48 as it is now.

c. Consider two possible implementations of fork using copy on write. One, which we call shallow copy on write, involves sharing just the actual page frames; both processes have their own private copies of all the page tables (including page directories, page-directory-pointer tables, etc.). The other, which we call deep copy on write, involves sharing both page frames and all page tables. Thus with deep copy on write, after a fork, parent and child share the page map and nothing is immediately copied. When page frames are modified, all page tables (and page directories, etc.) that must be modified are also copied. But with shallow copy on write, when a process forks, all page tables, including the page map, are copied at the time of the fork. It should be clear that, with deep copy on write, forks are cheaper than with shallow copy on write. Why might it be advantageous, nevertheless, to use shallow copy on write? While we're looking for a qualitative and not a quantitative explanation, you should mention any important data structures that might be required in one but not the other.

The cost of handling a copy-on-write fault will clearly be cheaper with shallow copy on write than with deep copy on write. But an important additional concern is that deep copy on write requires the maintenance of reference counts on the various sorts of page tables, while no such reference counts are required with shallow copy on write.

d. How might one optimize shallow copy on write to make it much cheaper for the usual case of fork, in which it's followed quite soon by an exec? (Hint: this is yet another application of lazy evaluation.)

One might avoid creating any page tables at the time of the fork, except for a copy of the page map, creating page tables only as necessary to satisfy references to page frames. Thus the child will create, in response to a page fault, a chain of page tables leading to the desired page frame. Initially there is just a page map, all of whose entries are marked invalid. The process's first memory reference results in a page fault, and a chain of three page tables will be created to map the referenced page frame.

6. NFS v2 and v3 systems allow clients to hard-mount remote file systems, so that in the event of a server crash, clients repeatedly retry RPC calls until the server comes back up. If clients held locks on any of the server's files, the NLM protocol recovers these locks for them.

a. Assume that all RPC calls are to idempotent procedures and that the only possible failure is a server crash (i.e., clients don't crash and the network is well behaved). Other than timing issues, will server crashes have any adverse effects on clients (assuming the server comes back up)? Explain.

No. When the server comes back up, clients will reclaim all locks they had prior to the crash. Applications running on the clients will be effectively suspended if they perform operations that require access to the server, but will resume once the server comes back up. No system call will fail due to the server's being unresponsive.

b.
Suppose now that the network is not well behaved: it contains routers, and it is possible that the network might partition itself for short periods, so that some clients can communicate with the server and others cannot. Are clients immune from any ill effects of server crashes combined with network partitions? Explain.

In this case it is possible that a client might lock a file, but, due to a network partition and server crashes, while that client is waiting for the server to respond, another client is able to lock the file, modify it, and unlock it, without the original client being aware. This could happen in the following scenario:

1. Client A locks byte 0 of file X.
2. There is a network partition separating client A from the server.
3. The server crashes.
4. The server reboots.
5. Client B locks byte 0 of file X.
6. Client B modifies byte 0 of file X.
7. Client B unlocks the file.
8. The server crashes.
9. The network partition is repaired.
10. The server reboots.

11. Client A reclaims its lock.

c. In DFS, servers give clients tokens that allow the clients to use cached data from files. These tokens have time limits associated with them, so that if a token times out, the client must assume that any cached data it has from the server is invalid and must obtain a new token (and new data) from the server. Explain how tokens might be used to help with the situation of part b. (Hint: they won't necessarily solve the problem, but they might help client applications become aware of problems.)

If the time-out period of a token is sufficiently short, then client A of the example given in the solution to part b would discover that its tokens had timed out by the time client B could have locked the file. Thus the client-A application could be delivered an error (from the return of a file-related system call), indicating that its lock is no longer good and has been effectively revoked.

d. Explain why DFS does not support hard-mounted file systems.

If a server is down long enough, clients' tokens will time out. The time-out requirement cannot be relaxed, because doing so would make it impossible for clients to detect potential problems due to network partitions.