SOLUTIONS
ENGR 3950U / CSCI 3020U (Operating Systems) Midterm Exam
October 23, 2012, Duration: 80 Minutes (10 pages, 12 questions, 100 Marks)
Instructor: Dr. Kamran Sartipi

Question 1 (Computer System) [8 marks]
In the computer system below:

a. What are the main buses that connect the CPU to the different components of the computer system (i.e., memory and controllers)? Briefly explain how each bus is used.

Address Bus (uni-directional): The CPU drives the address bus to address memory locations and to activate different I/O devices. The width of the address bus determines the number of memory locations the CPU can access. The low part of the address lines is connected directly to memory or I/O devices, while the high part is used by address decoders to enable the memory or I/O devices.

Data Bus (bi-directional): The data bus connects the CPU to the rest of the components in the computer system (i.e., memory, DMA, and I/O controllers) to transport parallel data between the CPU and those components. At any instant only two components can be connected to the data bus; all others must be disconnected (high-impedance or tri-state).

Control Bus (uni-directional): The control lines include memory read/write, I/O read/write, hold, wait, interrupt, interrupt acknowledge, reset, etc. These lines are used to enable/disable the different devices (and the CPU) for different purposes.

Page 1 of 9
b. How many address lines are needed (for the address bus) to address a memory of size 128 Giga Bytes? Show your calculation below.

1 KB = 2^10 bytes → 10 lines
1 MB = 2^20 bytes → 20 lines
1 GB = 2^30 bytes → 30 lines
128 = 2^7 → 7 lines
128 GB = 2^30 × 2^7 = 2^37 bytes → 37 lines

c. In the above computer system, explain the advantage of using DMA (Direct Memory Access) compared to using the CPU for transferring data from the disk controller to memory.

DMA can be used to send data from the local buffer of the disk controller to main memory without CPU intervention. To use DMA, after setting up the buffer pointers and counters for the I/O device, the device controller transfers an entire block of data directly from its own buffer to memory with no intervention by the CPU. Only one interrupt is generated per block, rather than one interrupt per byte. While the device controller is performing these operations, the CPU is available to accomplish other work. Using DMA for data transfer therefore greatly reduces overhead compared to using the CPU, especially when transferring large amounts of data, and it makes better use of CPU time by letting the CPU do other tasks during the data movement. Note, however, that during the transfer the bus is being used by the DMA, so the CPU cannot use the bus.

d. Can the DMA and the CPU work in parallel? Why?

Yes. During the data transfer between the device controller's buffer and memory (in either direction) by the DMA, the CPU disconnects itself from the system bus, but it can still perform internal operations such as decoding instruction opcodes and performing arithmetic and logic operations in the ALU.

Question 2. Give 3 reasons for Process Creation and five reasons for Process Termination. [8 Marks]
Question 3. Name five items in the Process Control Block (PCB). [5 Marks]

Process state; Process ID; Program counter; CPU registers; CPU-scheduling information; Memory-management information; Accounting information; I/O status information.

Question 4 [7 marks]
The figure below is part of a lecture slide showing the application of the process-status command (ps) in the Unix OS. In simple steps, and with the logic we discussed in class, explain how you would identify the shell process -tcsh of a crashed terminal and kill that process to release the terminal. (UID: user ID; PID: process ID; PPID: parent PID; TTY: terminal; CMD: command.)

In the above example, the first line belongs to the currently created shell process, since its PID (9002) is close to the PID of the "grep ksartipi" command (9092), as opposed to the PID of the shell process in the second line (11670), which therefore belongs to the crashed terminal. You can now kill the crashed shell process with the command "kill -9 11670" at the Unix prompt and release the crashed terminal.
Question 5. Using the line-numbers in the C code shown below: [10 Marks]

1  int main() {
2    int pid;
3    /* fork another process */
4    pid = fork();
5    if (pid < 0) { /* error occurred */
6      fprintf(stderr, "Fork Failed");
7      exit(-1);
8    }
9    else if (pid == 0) { /* child process */
10     execlp("/bin/ls", "ls", NULL);
11   }
12   else { /* parent process */
13     /* parent will wait for the child to complete */
14     wait(NULL);
15     printf("Child Complete");
16     exit(0);
17   }
18 }

a) What is the value of pid in line 4, pid = fork();, after returning from the fork system call, and why?

fork() returns the PID of the child process to the parent process, and returns 0 to the child process.

b) Briefly explain the different parts of the code that perform the fork() system call, including the synchronization between the parent and child processes.

Line 2: defines an integer variable to store the child's PID.
Lines 4-8: The process performs a fork() system call to create a child process, which is a copy of this process. fork() returns the PID of the child process to the parent process, and returns 0 to the child process. If the fork() system call is not successful, the OS returns a negative number, in which case the program calls exit(-1) and terminates with an error indication.
Lines 9-11: The child process executes this part (since it receives 0). It performs the execlp() operation, which causes the OS to: i) access the executable file with pathname /bin/ls in the file system; ii) place it into the address space of the child process, replacing the child's current image; and iii) run the ls (list) command. The final NULL argument marks the end of execlp()'s argument list.
Lines 12-17: The parent process executes this part. It performs the wait(NULL) call, which blocks it until the OS signals that its child has terminated (the NULL argument means the parent does not collect the child's exit status). The parent process then prints "Child Complete" and exits normally with exit(0).

Question 6 (IPC) [6 marks]
In a remote procedure call (RPC) mechanism between client and server computers using sockets, what happens if the client process does not know the port number of the server process with which to make the connection? Answer briefly.

The server runs a matchmaker daemon on a well-known port. The client contacts the matchmaker daemon, providing the name of the service or procedure it wishes to access. The matchmaker daemon then returns the proper port number to the client, and the client contacts the service on the server using this port. In most cases the OS takes care of these steps on behalf of the client.

Question 7 (File System) [14 Marks]
Consider the figure below to answer the following questions, using the terms listed here: Allocation; File descriptor; Path-name component; I-node; Index block; I-number; Access permission; File permissions; Open-file counter; Pointer to file blocks; File owner name; File size; File dates; Super block; File allocation table; File control block.

a. What are the contents of each directory entry in A?
<Path-name component, I-number>

b. What is the name of the indices shown by B?
I-number

c. Name three items in C.
File permissions; Open-file counter; Pointer to file blocks; File owner name; File size; File dates.

d. What is the specific name of the indices shown by D?
File descriptor
e. What are the contents of each entry E?
Access permission

f. Which of the entries A, C, or E contains a pointer to file blocks?
C

g. Show how 4 blocks from the hard disk can be used as data blocks for a file X using the Unix index-allocation method.
Question 8. What is the Principle of Locality? How is this concept used in designing cache memory? [6 Marks]

Program and data references within a process tend to cluster. Therefore, over a short period of time the process will access memory locations that are close to each other (adjacent). After a short period the CPU moves on to another region and executes code or accesses data that are closely located in memory around that region. For this reason, when the process addresses a location in memory, it is reasonable for the hardware (the Memory Management Unit, MMU) to automatically move a block of memory from that location into the cache memory, which is much faster than main memory. Then, due to the Principle of Locality of Reference, with very high probability (above 90%) the next memory access can be served from the cache rather than from main memory. This is the reason that computer systems have cache memory.

Question 9. For each statement below, indicate whether it is False or True. In case of False, briefly explain why. [10 Marks]

a) Multiprogramming is intended to increase CPU utilization, and time-sharing is intended to increase the computer's responsiveness in interacting with the user.
True.

b) A heavy-weight process is a thread which can handle a large task.
False. A heavy-weight process is a Unix process, i.e., a traditional process with its own address space, not a thread.

c) A parent process and its child processes can easily communicate information since they share the same address space.
False. After fork() the child receives its own copy of the parent's address space, so parent and child do not share an address space; they must communicate through IPC mechanisms (e.g., pipes or shared memory).

d) Threads have difficulty in communicating information among each other since they do not share the address space of their process.
False. Threads communicate easily precisely because they do share the address space of their process.

e) The CPU has two modes of operation, user mode and kernel mode, where the kernel mode is used for accessing privileged library functions.
False. Kernel mode is used for executing privileged operations such as I/O instructions, not library functions.
Question 10. Why, in a busy system with several processes in memory, can the ready queue become empty and the CPU become idle? How does the operating system handle this situation to increase CPU utilization? [6 Marks]

Suppose all processes need to use I/O devices for their tasks: for example, printing, waiting for keyboard input, waiting for disk blocks, waiting for memory to be allocated, etc. Since I/O operations are much slower than CPU operations (by some orders of magnitude), after a short period of time all processes block themselves and wait for their requested resources to become available. In this case the ready queue becomes empty and the CPU becomes idle. In this situation, the OS admits more processes (i.e., brings more programs from disk into memory to be executed) in order to keep the ready queue full and the CPU active.

Question 11. (Threads) For each statement below, indicate whether it is False or True. In case of False, briefly explain why. [8 Marks]

a) User-level threads are created and managed by thread libraries, and they are easy to create and delete.
True.

b) If a user-level thread is blocked for an I/O operation, the kernel of the operating system will perform a context switch to run another user-level thread which is not blocked.
False. The kernel does not have any information about user-level threads; therefore, it cannot perform context switching for user-level threads.

c) For efficiency, the operating system maintains a pool of threads to allocate to different tasks.
True.

d) Many-to-One is the most efficient model of multi-threading since it allows several user-level threads to be assigned to different processors in a multi-processor computer system.
False. The Many-to-One model uses only one kernel thread; therefore it cannot exploit a multi-processor system.

Question 12 (False / True) [12 Marks]
Answer False or True. In case of False, briefly explain why.

a) The lower level of the file-system implementation deals with logical file names and logical properties of files.
False. It is the higher level of the file-system implementation that deals with logical file names and logical properties of files.

b) Various files can be allocated space on disk through "contiguous", "linked", or "hyper-linked" allocation mechanisms.
False. The third mechanism is "indexed" allocation; there is no "hyper-linked" allocation.

c) "File descriptor" is the name of a pointer which points to an entry in the per-process open-file table.
True.

d) The super block on the hard disk consists of all references to the open file blocks.
False. It is the system-wide open-file table in memory that holds the references to open files.

e) In the Linked Allocation mechanism, all the pointers to a file's data blocks are collected in one block called the linked block.
False. This describes Indexed Allocation, in which the pointers are collected in the index block.

f) Unix uses both direct blocks and index blocks for disk allocation to files, where direct-block addressing is used for fast access to small files.
True.

END