Alexandria University
Faculty of Engineering
Electrical Engineering - Communications
Spring 2015 Final Exam
CS333: Operating Systems
Wednesday, June 17, 2015
Allowed Time: 3 Hours
Maximum: 75 points

Note: Keep your answers brief and precise. Answer all questions.

Group A: General Questions [26 Marks]

1. (20 points) True/False. Justify your answer. Note: No credit will be given for an unjustified answer.

(a) The OS scheduler is invoked when a process finishes execution.
True. The kernel gains control of the CPU so it can select another process to execute.

(b) External fragmentation can happen in a virtual memory paging system.
False. Any free frame can hold any page, so external fragmentation cannot occur; only internal fragmentation (in the last page of a process) is possible.

(c) The kernel gets back CPU control when a process makes a blocking read() call.
True. The blocking system call traps into the kernel, which then gains control of the CPU and can select another process to execute.

(d) Rate Monotonic Scheduling can be used for both periodic and aperiodic processes.
False. RMS assigns priorities by the rate (1/period) of each process, and hence has the underlying assumption that every process is periodic.

(e) If there is circularity in the wait-for graph of the system, this is a definitive sign of a deadlock.
False. In addition to circular wait, the other three conditions (mutual exclusion, hold and wait, no preemption) must also hold for a deadlock to occur. If any one of these conditions is not satisfied, the deadlock can be broken.

(f) Round Robin scheduling favors CPU-bound processes over I/O-bound processes.
True. An I/O-bound process blocks before using its full quantum and, when its I/O completes, is placed at the end of the ready queue. It must then wait behind CPU-bound processes that each consume an entire quantum, so I/O-bound processes end up waiting a long time in the queue.
(g) In a computer system with a single CPU where we do not know the runtime of each process, we can use Shortest Remaining Time Next scheduling.
False. The SRTN algorithm requires knowledge of (an estimate of) the remaining execution time of each job.

(h) If a multi-threaded program requires a lot of I/O, it is better (in terms of program execution time) to have kernel-level threads.
True. With kernel-level threads, if one thread blocks on an I/O request (made via a system call), the kernel can schedule the other threads of the same process to continue executing.

(i) If there are no global variables, then no locks are necessary.
False. Locks are needed for any resource shared by processes/threads (files, devices, shared data structures), not just global variables in memory.

(j) Contiguous allocation of files leads to internal disk fragmentation.
True. The disk is divided into blocks, so there will be wasted space in the last block of a file. This is minor, though, compared to the external fragmentation caused by contiguous allocation.

2. (3 points) Briefly explain the difference between the user mode and the kernel mode of a CPU. Why is that difference important to achieve the goals of an operating system?

User mode: user applications run in this mode. Certain instructions cannot be executed, and certain memory areas and certain registers are inaccessible.
Kernel mode: OS functions run in this mode. All instructions can be executed, and all memory areas and registers are accessible.
These restrictions are used by the OS to maintain control over the hardware and to protect itself from accidental or deliberate interference by user programs.
3. (3 points) List 2 events that might occur to cause a thread's state to change from running to ready.

Possible answers:
- The thread voluntarily yields.
- The thread's time slice expires.
- Priorities change (or a higher-priority thread unblocks) so that the thread no longer has the highest priority.

Group B: Concurrency: Synchronization and Deadlocks [10 Marks]

4. (4 points) Answer the following questions.

(a) Define what a race condition is.
Two or more threads (processes) read and write shared data, and the final result depends on the relative order in which they execute.

(b) In the below code with a shared variable x, state whether there is a race condition. Justify your answer.

    Lock();
    if (x == 0) {
        Unlock();
        Lock();
        x++;
    }
    Unlock();

There is a race condition. Between the Unlock() and the subsequent Lock(), another thread can pass the x == 0 test before any thread increments, so x++ may end up being executed by multiple threads.
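The fix for question 4(b) is to hold the lock across both the test and the increment so no other thread can interleave between them. A minimal sketch in Python (the names x, lock, and increment_once are illustrative, not part of the question):

```python
import threading

x = 0
lock = threading.Lock()

def increment_once():
    """Increment x only if it is still zero; the test and the
    increment form one atomic critical section under the lock."""
    global x
    with lock:
        if x == 0:
            x += 1

threads = [threading.Thread(target=increment_once) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With the lock held across test-and-increment, exactly one thread
# observes x == 0, so x is always 1 regardless of scheduling.
assert x == 1
```

Contrast this with the exam's snippet, where releasing and re-acquiring the lock opens a window in which several threads can all observe x == 0.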
5. (6 points) Assume a computer system with 2 DVD drives. Consider the sequence of requests shown below:

    P1: Request(S); Request(T)
    P2: Request(T); Request(S)

(a) Draw the resource allocation graph for the two processes P1 and P2 and the two resources S and T.
P1 holds S and is waiting for T; P2 holds T and is waiting for S.

(b) Does this sequence of requests cause a deadlock? Justify your answer.
Yes. There is a cycle in the wait-for graph.

(c) Assume that deadlock avoidance is used, and that both processes P1 and P2 have stated that their maximum resource need includes both S and T. Briefly explain how the execution proceeds in a deadlock-free fashion.
P1 requests S: the state is safe, so the request is granted. P2 requests T: granting it would leave the state unsafe, so the request is not granted and P2 must wait. P1 requests T: safe, so it is granted. P1 finishes and releases its resources. P2 can now acquire its resources and finish.

Group C: Scheduling [15 Marks]

6. (2 points) What is the difference between FIFO and round-robin scheduling?
FIFO runs each process until it finishes or blocks, in order of arrival; round-robin adds time-slicing, preempting the running process when its quantum expires.

7. (9 points) Consider the table below, which has information about three processes, their burst times, and their arrival times. The CPU burst time is the time required by the process to execute on the processor with no interruption to perform I/O or any other blocking system call. The arrival time is the time at which a given process becomes ready for execution. All times in the table are given in milliseconds.
    Process   Burst Time   Arrival Time
    P1        10           0
    P2        3            1
    P3        2            3

(a) Show the process scheduling using the round-robin policy with a quantum of 2 ms. At what time does each process finish?
P1: 0-2, P2: 2-4, P1: 4-6, P3: 6-8, P2: 8-9, P1: 9-15. P3 finishes at 8, P2 at 9, and P1 at 15.

(b) Show the process scheduling using the round-robin policy with a quantum of 100 ms.
With a 100 ms quantum no process is ever preempted, so round robin degenerates to FCFS: P1: 0-10, P2: 10-13, P3: 13-15.

(c) Show the process scheduling using the shortest remaining time next policy.
P1: 0-1, P2: 1-4 (P2 finishes at 4), P3: 4-6 (P3 finishes at 6), P1: 6-15 (P1 finishes at 15).

(d) What are the average waiting time and turnaround time for all the above policies?

    Policy         Avg. waiting time        Avg. turnaround time
    RR (q = 2)     (5+5+3)/3 ≈ 4.33 ms      (15+8+5)/3 ≈ 9.33 ms
    RR (q = 100)   (0+9+10)/3 ≈ 6.33 ms     (10+12+12)/3 ≈ 11.33 ms
    SRTN           (5+0+1)/3 = 2 ms         (15+3+3)/3 = 7 ms
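Question 7's schedules can be checked with a short simulation. The sketch below assumes the usual textbook convention that a process arriving during a quantum enters the ready queue ahead of the preempted process; it reports finish times P3 = 8, P2 = 9, P1 = 15 for RR with q = 2, plain FCFS order for q = 100, and P2 = 4, P3 = 6, P1 = 15 for SRTN.

```python
from collections import deque

# (name, burst, arrival) in milliseconds
PROCS = [("P1", 10, 0), ("P2", 3, 1), ("P3", 2, 3)]

def round_robin(procs, quantum):
    """Return {name: finish_time} under round robin."""
    procs = sorted(procs, key=lambda p: p[2])   # by arrival time
    remaining = {name: burst for name, burst, _ in procs}
    finish, queue, t, i = {}, deque(), 0, 0
    while len(finish) < len(procs):
        while i < len(procs) and procs[i][2] <= t:  # admit arrivals
            queue.append(procs[i][0]); i += 1
        if not queue:                                # CPU idle
            t = procs[i][2]; continue
        name = queue.popleft()
        run = min(quantum, remaining[name])
        t += run
        remaining[name] -= run
        # arrivals during this quantum queue ahead of the preempted process
        while i < len(procs) and procs[i][2] <= t:
            queue.append(procs[i][0]); i += 1
        if remaining[name] == 0:
            finish[name] = t
        else:
            queue.append(name)
    return finish

def srtn(procs):
    """Shortest Remaining Time Next, simulated in 1 ms steps."""
    remaining = {name: burst for name, burst, _ in procs}
    arrival = {name: arr for name, _, arr in procs}
    finish, t = {}, 0
    while len(finish) < len(procs):
        ready = [n for n in remaining
                 if arrival[n] <= t and remaining[n] > 0]
        if not ready:
            t += 1; continue
        n = min(ready, key=lambda n: remaining[n])
        remaining[n] -= 1; t += 1
        if remaining[n] == 0:
            finish[n] = t
    return finish
```

Running round_robin(PROCS, 2), round_robin(PROCS, 100), and srtn(PROCS) reproduces the finish times quoted above.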
8. (4 points) Consider three processes, P1, P2, and P3. The processing times and deadlines are shown in the table below. All three processes are ready for scheduling at time 0.

    Process   Processing time   Deadline
    P1        20                100
    P2        30                145
    P3        68                150

(a) Can these three processes be scheduled using rate-monotonic scheduling? Apply the utilization condition.
U = 20/100 + 30/145 + 68/150 ≈ 0.20 + 0.21 + 0.45 = 0.86. The RMS utilization bound for n = 3 processes is 3(2^(1/3) - 1) ≈ 0.78. Since 0.86 > 0.78, the utilization test does not guarantee schedulability; because U ≤ 1 the set may still be schedulable, but this must be checked by simulating the schedule.

(b) Using Rate Monotonic Scheduling, will all three deadlines be met? Illustrate your answer using a chart similar to the one studied in class.
Priorities are assigned by period (treating each deadline as the period): P1 > P2 > P3. The schedule is: P1 runs 0-20, P2 runs 20-50, P3 runs 50-100, P1 (released again at 100) preempts and runs 100-120, and P3 resumes 120-138. P1 finishes at 20 (deadline 100), P2 at 50 (deadline 145), and P3 at 138 (deadline 150), so all three deadlines are met.

Group D: Memory Management [16 Marks]

9. (5 points) A processor has a 32-bit address space and uses paging. 20 bits are used for the page number and 12 for the offset. A page table entry is 4 bytes. Describe in detail what happens in the memory management unit and the OS when:

(a) A user process does a read of address 0xC0DEDBAD and it is in memory.
- Split the address: page number 0xC0DED, offset 0xBAD.
- Look up the base of the page table (held in a base register) and access page table entry number 0xC0DED, at address: page table base + 0xC0DED * 4 bytes.
- Read the page frame number from the page table entry.
- Shift the page frame number 12 bits to the left and concatenate the offset 0xBAD to it.
- Access memory at the resulting physical address.

(b) What different/more/less happens if it is paged out to disk?
- The present bit in the page table entry is zero, so a page fault occurs.
- The OS finds the address of the page on disk (read from a separate table that records where the pages of each process are stored on disk).
- The OS requests that the disk block holding the page be loaded into memory.
- The OS blocks the process.
- When the page has been read from disk, a page frame is chosen to hold it. If there is no empty page frame in memory, a page replacement policy is used.
- The page table of the process is updated with the new page frame number, and the present bit is set.
- The OS moves the process to the list of ready processes.

(c) What different/more/less happens if it is not a valid address?
At any of the steps in part (a), the MMU could find an invalid PTE, an address out of bounds, etc. In that case a fault is raised and, since the address is not valid, the OS would most likely kill the process (e.g., with a segmentation fault).
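The address arithmetic in question 9(a) can be sketched in a few lines. The page-table base and frame number below are made-up values for illustration; only the split of 0xC0DEDBAD and the shift-and-concatenate step come from the question.

```python
OFFSET_BITS = 12    # 12-bit offset, 20-bit page number
PTE_SIZE = 4        # bytes per page table entry

def split(vaddr):
    """Split a 32-bit virtual address into (page number, offset)."""
    page = vaddr >> OFFSET_BITS
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    return page, offset

page, offset = split(0xC0DEDBAD)
assert page == 0xC0DED and offset == 0xBAD

# Address of the PTE = page table base + page number * entry size.
page_table_base = 0x100000              # hypothetical base register value
pte_address = page_table_base + page * PTE_SIZE

# Physical address = frame number shifted left 12 bits, OR'd with offset.
frame = 0x12345                         # hypothetical frame number from the PTE
physical = (frame << OFFSET_BITS) | offset
assert physical == 0x12345BAD
```

The two assertions mirror the worked answer: the page/offset split of 0xC0DEDBAD, and the concatenation of the frame number with the offset.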
10. (3 points) Briefly explain the usage and benefits of a translation lookaside buffer (TLB) for paging.

Usage: the TLB is a cache of virtual-to-physical translations (parts of the page table); it maps a page number to a frame number. The MMU first tries to translate an address using the TLB; only if the page number is not in the TLB does it read the translation from the page table in memory.
Benefit: the TLB speeds up address translation. Without a TLB, every instruction that references memory (read or write) requires two memory accesses: one to read the page table entry and one to access the requested address. On a TLB hit, the translation comes from the cache, which is much faster than reading the page table from memory.

11. (8 points) Consider the following first 12 pages of an execution string for a process:

    A C B E A C D A C B E D

There are 5 unique pages (A, B, C, D, and E), but it is to be run on a system that has only 4 frames of physical memory allocated to the process. None of the pages are resident at the beginning of the execution (i.e., all frames are uninitialized).

(a) For each of the following page replacement algorithms, state which page(s) would NOT be in resident memory at the end of the execution: FIFO, OPT, and LRU. If you cannot answer, state your reason (e.g., not enough information, or a tie between X and Y).

FIFO (First In, First Out): A. The final resident set is {B, C, D, E}.
OPT (Optimal): a tie between A, B, and C. At the fault on the final E, none of A, B, or C is referenced again, so any of them may be evicted; D and E are referenced last and are certainly resident.
LRU (Least Recently Used): A. The final resident set is {B, C, D, E}.

(b) For the following two page replacement algorithms, OPT and LRU, how many page faults would there be?
OPT: 6 (the 4 initial faults on A, C, B, and E, then faults on D and on the final E).
LRU: 8 (the 4 initial faults, then faults on D, B, E, and the final D).

Group E: Input/Output Management and File System [8 Marks]

12. (2 points) In class, we studied three file allocation methods: (1) contiguous allocation; (2) linked-list allocation; and (3) index node.
Each of the methods has its advantages and disadvantages depending on the objectives of the file system and the expected access pattern of the files. If the most important objective of the file system is the performance of random access to very large files, rank the three structures in order of preference and state clearly a justification for this ranking.
Rank: contiguous, indexed, linked.
Contiguous allocation is the best because seeking to a random location in a file only requires adding the offset within the file to the address of the file's first block. For large files, indexed allocation needs to read one or more additional index blocks to find the right data block. Linked-list allocation is the worst because reaching a random location requires traversing the list, i.e., reading every block before the one holding the requested data.

13. (2 points) Explain the reasoning behind the use of the LRU algorithm for virtual memory page replacement and file system buffer caches.
The LRU (least recently used) algorithm approximates future behavior using past behavior, relying on temporal locality. Ideally we would evict the element that will not be needed for the longest time in the future; LRU approximates this by evicting the element that has not been used for the longest time in the past.

14. (2 points) Briefly explain when using DMA is beneficial and when it is not.
With DMA (direct memory access), a DMA controller with access to the system bus transfers data between a device and main memory without CPU intervention; the CPU programs the controller's registers to start the transfer and is interrupted only when the transfer is finished, so it does not have to copy the data into memory itself. DMA is therefore beneficial for devices that transfer large blocks of data (e.g., disks). It is not beneficial for very small transfers or slow character-at-a-time devices, where the overhead of setting up the DMA controller exceeds the cost of having the CPU do the programmed I/O itself.

15. (2 points) What is the difference between internal and external fragmentation?
Both appear when data are organized in fixed-size units: blocks (disk storage) or pages/segments (memory).
Internal fragmentation: wasted space inside a partition (block or page) because the data loaded into it is smaller than the partition.
External fragmentation: many small holes of free space between partitions, so the memory/disk space external to all partitions becomes increasingly fragmented and hard to use.
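The page-replacement answers of question 11 can be verified by a short simulation of FIFO, LRU, and OPT on the string A C B E A C D A C B E D with 4 initially empty frames. This is a sketch; for OPT, ties (pages never referenced again) are broken by evicting the first such page in frame order, which is why the simulated final set is one of several valid ones.

```python
refs = list("ACBEACDACBED")

def fifo(refs, nframes):
    """FIFO: evict the page that entered memory earliest."""
    frames, faults = [], 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)          # oldest arrival
            frames.append(p)
    return faults, set(frames)

def lru(refs, nframes):
    """LRU: keep frames ordered least -> most recently used."""
    frames, faults = [], 0
    for p in refs:
        if p in frames:
            frames.remove(p)           # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)          # least recently used
        frames.append(p)
    return faults, set(frames)

def opt(refs, nframes):
    """OPT: evict the page whose next use is farthest (or never)."""
    frames, faults = [], 0
    for i, p in enumerate(refs):
        if p in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            future = refs[i + 1:]
            victim = max(frames, key=lambda q:
                         future.index(q) if q in future else len(refs))
            frames.remove(victim)
        frames.append(p)
    return faults, set(frames)
```

The simulation gives FIFO: 10 faults with final set {B, C, D, E} (A absent); LRU: 8 faults with the same final set; OPT: 6 faults, with D and E certainly resident and the evicted page a tie among A, B, and C.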