CSE 306 -- Operating Systems
Spring 2002
Solutions to Review Questions for the Final Exam

1. [20 points, 1 each] True or False; circle T or F.

T a. A binary semaphore takes on numerical values 0 and 1 only.
T b. An atomic operation is a machine instruction or a sequence of instructions that must be executed to completion without interruption. (The answer in a database course might be different. In that setting, if the operation fails before completion, it must be retracted to the point of restoring the system to a previous state.)
F c. Deadlock is a situation in which two or more processes (or threads) are waiting for an event that will occur in the future. (An event that cannot occur.)
T d. Starvation is a situation in which a process is denied access to a resource because of the competitive activity of other, possibly unrelated, processes. (Denied access indefinitely, possibly infinitely.)
F e. While a process is blocked on a semaphore's queue, it is engaged in busy waiting.
T f. Circular waiting is a necessary condition for deadlock, but not a sufficient condition. (A condition necessary for the deadlock to occur.)
F g. Mutual exclusion can be enforced with a general semaphore whose initial value is greater than 1. (Equal to 1.)
F h. External fragmentation can occur in a paged virtual memory system. (The question was too vague; either answer was accepted.)
T i. External fragmentation can be prevented (almost completely) by frequent use of compaction, but the cost would be too high for most systems.
T j. A page frame is a portion of main memory.
F k. Once a virtual memory page is locked into main memory, it cannot be written to the disk. (A locked page cannot be swapped out, but "swapped out" is more than "written to".)
F l. Pages that are shared between two or more processes can never be swapped out to the disk. (Sharing does not require locking.)
F m. The allocated portions of memory using a buddy system are all the same size.
F n. Demand paging requires the programmer to take specific action to force the operating system to load a particular virtual memory page.
T o. Prepaging is one possibility for the fetch policy in a virtual memory system.
T p. The resident set of a process can be changed in response to actions by other processes.
F q. The working set of a process can be changed in response to actions by other processes.
F r. The translation lookaside buffer is a software data structure that supports the virtual memory address translation operation. (It is hardware.)
F s. In a symmetric multiprocessor, threads can always be run on any processor. (From the hardware's point of view only this would be true, but the OS and user can put restrictions on processor assignment.)
F t. Thrashing will never be a problem if the system has 1 GB of real memory.

2. [20 points, 5 each] Short answers and simple diagrams.

(a) Define the resident set of a process.
The subset of a process's pages that is actually in main memory at any time.

(b) Define the working set of a process.
The subset of a process's pages that have been referenced recently by the process, during the time it has actually been executing. The parameter delta defines "recently".

(c) What problems could occur if virtual memory pages are always allocated in groups of four?
You might want to consider the page as being four times larger, but then it might not agree with the processor's expectations for virtual memory address translation. Are the pages allocated together (consecutively) in memory, or just at the same time? If consecutive, are there alignment problems? A page fault is for one page, not four, so how do we know how to pick the other three pages?

(d) What information is used by the Least Recently Used page replacement policy, and how does this compare to the information used by the various Clock algorithms?
LRU: the time of last reference to each page, set by the hardware.
Clock: the use bit, and perhaps also the modified bit; set by hardware, cleared by software.
Other kinds of information used by all page replacement policies could be listed here.

3. [20 points, 5 each] Short answers and simple diagrams.

(a) In terms of memory allocation, what is a reference counter? Why is it needed?
A reference counter is the number of pointers (references) to an object.
Allocate or share: increment the counter.
Deallocate or unshare: decrement the counter.
Do not actually deallocate the object until the counter is 0.
This is easier than keeping a list of references and checking whether the list is empty.
It is not the base address or the reference bit. It does not refer to the time of reference or the number of references over a period of time; these might be useful in a replacement algorithm, but that's another topic.
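The increment/decrement bookkeeping described above can be sketched in C. This is a minimal illustration, not OS code; the struct and function names (rc_obj, rc_new, rc_share, rc_release) are invented for the example.

```c
#include <stdlib.h>

/* Hypothetical reference-counted object: refcount tracks how many
   pointers currently refer to this block. */
typedef struct {
    int refcount;
    void *data;
} rc_obj;

rc_obj *rc_new(size_t size) {
    rc_obj *o = malloc(sizeof *o);
    o->refcount = 1;               /* one reference: the creator's */
    o->data = malloc(size);
    return o;
}

void rc_share(rc_obj *o) {
    o->refcount++;                 /* a new pointer to the object exists */
}

void rc_release(rc_obj *o) {
    if (--o->refcount == 0) {      /* last reference gone: now deallocate */
        free(o->data);
        free(o);
    }
}
```

Note that the object is freed only inside rc_release, and only when the count reaches 0, which is exactly the rule stated in the answer.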
(b) Explain why, or why not, internal fragmentation can be a problem when using the best fit algorithm for memory allocation.
The requested space could be less than the smallest suitable available partition. If using fixed partitions, internal fragmentation will result. If using dynamic partitions, external fragmentation will result. If using pages, best fit may not be appropriate. Best fit with fixed partitions tends to reduce the unused space, while for dynamic partitions it tends to produce very small unallocatable regions.

(c) One of the options in a mainframe OS is to limit the number of jobs (processes) currently in the system. What are some of the benefits of this capability?
More time available to existing processes, and lower turnaround time for them. More space available per existing process, more likely to have runnable (ready) processes, fewer page faults. Reduced scheduling overhead. The ultimate reduction to 1 process is going too far.

(d) In what circumstances of virtual memory is the placement policy an important issue?
On a shared-memory multiprocessor with non-uniform memory access times. If segments are used without pages, memory allocation will need to be efficient. Perhaps also if caching is used, the process should be restarted on its previous processor.
Various wrong answers: disk sector placement, shared pages, fragmentation and compaction (but see above), pages reserved for the OS, locked pages, page replacement decisions.

4. [20 points, 5 each] Short answers.

(a) What are four general characteristics of processor scheduling policies?
Clarification: "characteristics" could mean goals, requirements, features of algorithms, information requirements, etc. See Table 9.3.
selection function (valuation criterion)
decision mode (time of selection; preemptive or nonpreemptive)
behavior in practice according to various measurements, such as throughput, response time, possible starvation
implementation overhead
This has nothing to do with the four necessary conditions for deadlock.

(b) Define Turnaround Time and Normalized Turnaround Time. Why are these useful for measuring the performance of a scheduling algorithm?
turnaround time = finish time - arrival time (into the system)
normalized turnaround time = turnaround time / service time
On average, or in the worst case, we want these measurements to be minimized. The normalized turnaround time gives better information about short jobs. Each one measures how quickly jobs move through the system. "How fast a process will run" is not a good answer. Normalized turnaround time is not the average turnaround time over all the processes in the system.

(c) What would be the effect of a large number of page faults by a process on that process's page allocation on a nonpreemptive operating system?
The process's working set is changing rapidly. Its resident set will grow as other processes' pages are replaced (the other processes are not running, since the OS is nonpreemptive). Thrashing may or may not be the cause of the problem. The page faults will tend to slow down the process but will not affect it otherwise; in any case, the question is about page allocation, not about runtime.

(d) What are four actions or decisions that a preemptive virtual memory operating system would make at the end of a time quantum (in response to a timer interrupt)?
Interrupt handler, basic operations:
save state, switch to kernel mode
do operations appropriate to the interrupt (in this case, nothing)
select a new process to run
initiate paging operations for the new process
install its page tables
reload process state, switch to user mode
restart the process
It is not necessary to check for completion of I/O operations, since that would generate a different interrupt. Similarly for page faults.

5. [20 points] This function is proposed for use in an operating system, with the definitions of Process, Process_Set and other functions given elsewhere.

    Process next_process(Process_Set available_processes)
    {
        Process_Set A = highest_valuation(available_processes); /* priority ranking */
        Process_Set B = earliest(A);                            /* actual arrival time */
        Process c = random_selection(B);                        /* tie-breaker */
        return c;                                               /* run this process next */
    }

(a) [5] Explain why this function could lead to processor starvation among the available processes.
The ranking could lead to starvation. Jobs that arrive early get preference, but this is not a problem in itself, as it resembles first-come-first-served. Long-running early jobs could
take precedence over shorter, newer jobs. The randomization could be unfavorable. "Available" is intended to mean things like "Ready" or "Not Blocked".

(b) [5] Suppose one of the criteria used by the highest_valuation function is the process's fraction of virtual memory pages currently in main memory. Explain why this is not a good idea.
Two possible definitions of the fraction:
f1 = process pages in memory / process pages in total (just for this process)
f2 = process pages in memory / page frames in memory
Scheduling is usually based on processor activity, not on consumption of other resources. Suppose higher fractions get preference. Once a process gets to run, it will keep running until it gives up space or terminates; with page faults it will probably increase its fraction. What if the process is created with none or very few of its pages in main memory? It might never run. But the system could degenerate to First-Come-First-Served, which might not be too bad. A long-running job with large data structures but good locality may be OK with only a few pages in memory, but it would not be chosen. Suppose lower fractions get preference. Small jobs would be starved by large jobs with few pages and good locality.

(c) [10] Define a version of the highest_valuation function (in the same style, but
with some more descriptive comments) for the Shortest Process Next scheduling policy. Describe the data requirements and how this data is obtained.
Grading: programming, 6; data requirements, 2; data obtained, 2.
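One possible answer, sketched in the same pseudocode style as the question (the helper operations empty_set, singleton, add_to_set, and the field expected_service_time are assumed to be defined elsewhere):

    /* Shortest Process Next: rank each process by its expected next CPU
       burst, so the process with the shortest expected service time is
       valued highest.  Data required: an estimate of each process's next
       service time.  This is obtained by the OS measuring the length of
       the process's past CPU bursts and maintaining an exponential
       average, S(n+1) = alpha * T(n) + (1 - alpha) * S(n), updated at
       the end of each burst. */
    Process_Set highest_valuation(Process_Set available_processes)
    {
        Process_Set result = empty_set();
        for each Process p in available_processes {      /* scan all ready processes */
            if (result is empty
                or expected_service_time(p) < shortest so far)
                result = singleton(p);                   /* new best estimate */
            else if (expected_service_time(p) == shortest so far)
                add_to_set(result, p);                   /* tie: keep for the tie-breaker */
        }
        return result;  /* all processes with the minimum expected service time */
    }

Returning a set rather than a single process matches the question's style: ties are passed on to earliest() and random_selection() in next_process.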