Vidyalankar T.E. Sem. V [INFT] Operating System for Computational Devices Prelim Question Paper Solution


1. (a) Operating System
An operating system is a program that controls the execution of application programs and acts as an interface between applications and the computer hardware.

Functions of an operating system:
(i) It controls the execution of application programs.
(ii) It acts as an interface between applications and the computer hardware.

The OS provides services for the convenience of the user and for the efficient operation of the system itself. The services may be listed as:
(a) Program execution : Users want to execute programs. The system must be able to load a program into memory and run it. The program must end its execution either normally or abnormally.
(b) I/O operations : A running program may require I/O device routines. For different devices, special functions may be desired. The OS must provide routines to perform I/O operations.
(c) File system manipulation
(d) User interface
(e) Resource allocation
(f) Protection
(g) Accounting : utilisation of the CPU may be accounted for, depending on the configuration of the system.

OS services are provided through system calls. System calls provide the interface between a running program and the OS. They can roughly be grouped into:
i) Process management
ii) Device and file management
iii) Memory management

Objectives of an operating system:
(i) An operating system makes a computer more convenient to use.
(ii) An operating system allows the computer system resources to be used in an efficient manner.
(iii) An operating system should permit the effective development, testing and introduction of new system functions without interfering with service.
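For illustration only (not part of the original answer, and assuming a POSIX system; the program "ls" is an arbitrary example), the following minimal C sketch shows a user program obtaining process-management services through system calls: fork() creates a process, execlp() loads a program into it, and waitpid() waits for it to end normally or abnormally.

/* Minimal sketch (assumption: POSIX) of OS services requested through
   system calls: process creation, program loading, and waiting. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                  /* system call: create a new process   */
    if (pid < 0) {
        perror("fork");                  /* OS reports the failure via errno    */
        exit(1);
    }
    if (pid == 0) {                      /* child: load and run another program */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");                /* reached only if the exec fails      */
        exit(1);
    }
    waitpid(pid, NULL, 0);               /* parent: wait for child to terminate */
    printf("child %d finished\n", (int)pid);
    return 0;
}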

1. (b) File Allocation Methods
Three methods of file allocation are:

(a) Contiguous Allocation
Here, a single contiguous set of blocks is allocated to a file at the time of file creation. Thus, this is a preallocation strategy, using variable-size portions. The file allocation table needs just a single entry for each file, showing the starting block and the length of the file. Contiguous allocation is the best from the point of view of the individual sequential file. Multiple blocks can be brought in at a time to improve I/O performance for sequential processing. It is also easy to retrieve a single block. The contiguous allocation method requires each file to occupy a set of contiguous addresses on the disk. Disk addresses define a linear ordering on the disk. Accessing block b+1 after block b normally requires no head movement. When head movement is needed (from the last sector of one cylinder to the first sector of the next cylinder), it is only one track.
Contiguous allocation of a file is defined by the disk address of the first block and its length. If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1. The directory entry for each file indicates the address of the starting block and the length of the area allocated for this file, as shown in the figure below.
Fig.: Contiguous allocation

(b) Chained Allocation
At the opposite extreme from contiguous allocation is chained allocation (figure below). Here, allocation is on an individual block basis. Each block contains a pointer to the next block in the chain. The file allocation table needs just a single entry for each file, showing the starting block and the length of the file. Any free block can be added to a chain. There is no external fragmentation to worry about because only one block at a time is needed.
Fig.: Chained allocation
Directory
File    Start   Length
Count   0       2
F       5       3
T       12      4
Although preallocation is possible, it is more common simply to allocate blocks as needed. The selection of blocks is now a simple matter: any free block can be added to a chain. There is no external fragmentation to worry about because only one block at a time is needed. This type of physical organization is best suited to sequential files that are to be processed sequentially. To select an individual block of a file requires tracing through the chain to the desired block.
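A small sketch (not from the original answer) of the block-by-block tracing just described. The "disk" is modelled as an in-memory array and the last word of each block is treated as the pointer to the next block; all names and sizes are illustrative.

/* Sketch: locating logical block i of a file under chained allocation. */
#include <stdio.h>

#define NBLOCKS   64
#define BLOCKSIZE 8                  /* words per block; last word = next pointer */

static int disk[NBLOCKS][BLOCKSIZE];

/* Return the physical block number of logical block i, or -1 if the chain is
   shorter than i+1 blocks. Cost is O(i): the chain must be traced block by
   block, which is why chained allocation suits sequential access. */
int chained_lookup(int start_block, int i)
{
    int b = start_block;
    while (i > 0 && b != -1) {
        b = disk[b][BLOCKSIZE - 1];  /* follow the pointer stored in the block */
        i--;
    }
    return b;
}

int main(void)
{
    /* Build a tiny chain 5 -> 9 -> 3 for demonstration. */
    disk[5][BLOCKSIZE - 1] = 9;
    disk[9][BLOCKSIZE - 1] = 3;
    disk[3][BLOCKSIZE - 1] = -1;

    printf("logical block 2 is physical block %d\n", chained_lookup(5, 2)); /* prints 3 */
    return 0;
}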

(c) Indexed Allocation
Indexed allocation addresses many of the problems of contiguous and chained allocation. The file allocation table contains a separate one-level index for each file; the index has one entry for each portion allocated to the file. The file indexes are not physically stored as part of the file allocation table. The file index for a file is kept in a separate block, and the entry for the file in the file allocation table points to that block. Allocation may be on the basis of either fixed-size blocks or variable-size portions. Allocation by blocks eliminates external fragmentation, whereas allocation by variable-size portions improves locality.
Linked allocation cannot support direct access, since the blocks are scattered all over the disk and the pointers to the blocks are scattered with them. Indexed allocation solves this problem by bringing all of the pointers together into one location, called the index block. Each file has its own index block, which is an array of disk block addresses. The i-th entry in the index block points to the i-th block of the file. The directory contains the address of the index block, as shown in the figure below. To read the i-th block, we use the pointer in the i-th index block entry to find and read the desired block. When the file is created, all pointers in the index block are set to nil. When the i-th block is first written, a block is removed from the free-space list and its address is put in the i-th index block entry. Indexed allocation supports direct access without suffering from external fragmentation, but it does suffer from wasted space.
Fig.: Indexed allocation of disk space

2. (a) PROCESS
A process is a program in execution. A process is the unit of work in a modern time-sharing system. Many processes can execute concurrently, with the CPU (or CPUs) multiplexed among them. By switching the CPU between processes, the operating system can make the computer more productive. On a single-user system, such as Microsoft Windows, a user may be able to run several programs at one time: a word processor, a web browser and an email package. A program is a passive entity, such as the contents of a file stored on disk, whereas a process is an active entity, with a program counter specifying the next instruction to execute and a set of associated resources.

A process is more than the program code, which is sometimes known as the text section. It also includes the current activity, as represented by the value of the program counter and the contents of the processor's registers. It also contains the process stack, which holds temporary data, and a data section, which contains global variables.

THREADS
A thread is a lightweight process comprising a thread ID, a program counter, a register set and a stack. It shares its code section, data section and other operating system resources, such as open files and signals, with the other threads belonging to the same process. A traditional (heavyweight) process has a single thread of control. If a process has multiple threads of control, it can do more than one task at a time. An application is typically implemented as a separate process with several threads of control, for example:
i) A web browser might have one thread to display images or text while another thread retrieves data from the network.
ii) A word processor may have one thread for displaying graphics, another thread for reading keystrokes from the user, and a third thread for performing spelling and grammar checking in the background.
e.g. A web server accepts client requests for web pages, images, sound and so forth. A busy web server may have several hundred clients concurrently accessing it. If the web server ran as a traditional single-threaded process, it would be able to service only one client at a time, and the amount of time a client might have to wait for its request to be serviced could be enormous. One solution is to have the server run as a single process that accepts requests; when the server receives a request, it creates a separate process to service it. But process creation is heavyweight. If the new process will perform the same tasks as the existing process, why incur all that overhead? It is more efficient for one process that contains multiple threads to serve the same purpose. The server creates a separate thread that listens for client requests; when a request is made, rather than creating another process, it creates another thread to service that request.
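The thread-per-request idea can be sketched with POSIX threads. This is an illustration only, not the paper's code; the request identifiers are dummy data and the handler just prints instead of doing real server work.

/* Sketch (POSIX threads): a "server" that hands each incoming request to a
   new thread instead of creating a whole new process for it. */
#include <stdio.h>
#include <pthread.h>

void *handle_request(void *arg)
{
    int request_id = *(int *)arg;
    printf("thread servicing request %d\n", request_id);  /* real work would go here */
    return NULL;
}

int main(void)
{
    pthread_t workers[3];
    int requests[3] = {101, 102, 103};       /* dummy request identifiers */

    for (int i = 0; i < 3; i++)              /* dispatcher: one thread per request */
        pthread_create(&workers[i], NULL, handle_request, &requests[i]);

    for (int i = 0; i < 3; i++)              /* wait for the workers to finish */
        pthread_join(workers[i], NULL);
    return 0;
}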

Benefits of multithreaded programming
i) Responsiveness : Multithreading an interactive application may allow a program to continue running even if part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user.
ii) Resource sharing : By default, threads share the memory and the resources of the process to which they belong. The benefit of code sharing is that it allows an application to have several different threads of activity all within the same address space.
iii) Economy : Allocating memory and resources for process creation is costly. Because threads share the resources of the process to which they belong, it is more economical to create and context-switch threads.
iv) Utilization of multiprocessor architectures : The benefits of multithreading can be greatly increased in a multiprocessor architecture, where each thread may be running in parallel on a different processor. On a single-processor architecture, the CPU generally moves between the threads so quickly as to create an illusion of parallelism, but in reality only one thread runs at a time.

2. (b) Reference string : 2, 3, 5, 4, 2, 5, 7, 3, 8
FIFO : Page faults = 8
OPT  : Page faults = 7
LRU  : Page faults = 8
(A short simulation sketch that reproduces the FIFO count is given after the Round Robin section below.)

3. (a) SCHEDULING ALGORITHMS
In an interactive environment such as a time-sharing system, the primary requirement is to provide reasonably good response time and to share system resources equitably among all users. For this, the scheduling algorithms are:

i) Round Robin Scheduling
One of the oldest, simplest, fairest and most widely used algorithms is round robin. Each process is assigned a time interval, called its quantum, for which it is allowed to run. If the process is still running at the end of the quantum, the CPU is preempted and given to another process. If the process has blocked or finished before the quantum has elapsed, the CPU switch is done when the process blocks. Round robin is easy to implement: all the scheduler needs to do is maintain a list of runnable processes. Switching from one process to another requires a certain amount of time for administration: saving and loading registers and memory maps, updating various tables and lists, and so on.
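Referring back to the page-replacement example in 2(b): the sketch below is a small FIFO simulation that reproduces the stated fault count. The three-frame assumption is ours (the original frame-by-frame tables are not reproduced here), but with three frames the FIFO, OPT and LRU counts of 8, 7 and 8 all check out.

/* Sketch: FIFO page replacement for the reference string of 2(b),
   assuming 3 page frames. Prints the number of page faults (8). */
#include <stdio.h>

#define FRAMES 3

int main(void)
{
    int ref[] = {2, 3, 5, 4, 2, 5, 7, 3, 8};
    int n = sizeof(ref) / sizeof(ref[0]);
    int frame[FRAMES] = {-1, -1, -1};   /* -1 means the frame is empty */
    int next = 0;                       /* index of the oldest frame (FIFO victim) */
    int faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < FRAMES; j++)
            if (frame[j] == ref[i]) { hit = 1; break; }
        if (!hit) {                     /* page fault: replace the oldest page */
            frame[next] = ref[i];
            next = (next + 1) % FRAMES;
            faults++;
        }
    }
    printf("FIFO page faults: %d\n", faults);   /* 8 for this string */
    return 0;
}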

2. Priority Scheduling
The basic idea is that each process is assigned a priority, and the runnable process with the highest priority is allowed to run. To prevent high-priority processes from running indefinitely, the scheduler may decrease the priority of the currently running process at each clock tick (i.e. at each clock interrupt). If this causes its priority to drop below that of the next highest-priority process, a process switch occurs. Priorities can also be assigned dynamically by the system to achieve certain system goals. Making an I/O-bound process wait a long time for the CPU just means having it around occupying memory for an unnecessarily long time.

3. Multilevel Queue Scheduling
This type of scheduling separates the available processes into different groups. Classification is based on the fact that different types of processes have different response requirements and hence may require different scheduling. A multilevel queue scheduling algorithm partitions the ready queue into several separate queues. The processes are permanently assigned to one queue, generally based on some property of the process, such as memory size or process priority. Each queue has its own scheduling algorithm.

4. First Come First Served (FCFS) Scheduling
The average waiting time under the FCFS policy, however, is often quite long. Consider the following set of processes that arrive at time 0, with the length of the CPU burst time given in milliseconds:
Process   Burst time
P1        24
P2        3
P3        3
If the processes arrive in the order P1, P2, P3 and are served in FCFS order, we get the result shown in the following Gantt chart:
| P1 (0-24) | P2 (24-27) | P3 (27-30) |
The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2 and 27 milliseconds for process P3.
The average waiting time = (0 + 24 + 27) / 3 = 17 milliseconds.
If the processes arrive in the order P2, P3, P1, the result will be as shown in the following Gantt chart:
| P2 (0-3) | P3 (3-6) | P1 (6-30) |
Now, the average waiting time = (6 + 0 + 3) / 3 = 3 milliseconds.
This reduction is substantial. Thus, the average waiting time under FCFS is generally not minimal. The FCFS scheduling algorithm is non-preemptive: once the CPU has been allocated to a process, that process keeps the CPU until it releases it, either by terminating or by requesting I/O.

5. Shortest Job First (SJF) Scheduling
Consider the following set of processes, with the length of the CPU burst time given in milliseconds:

Process   Burst time
P1        6
P2        8
P3        7
P4        3
Using SJF scheduling, we would schedule these processes according to the following Gantt chart:
| P4 (0-3) | P1 (3-9) | P3 (9-16) | P2 (16-24) |
The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9 milliseconds for process P3, and 0 milliseconds for process P4.
Thus, the average waiting time = (3 + 16 + 9 + 0) / 4 = 7 milliseconds.
The SJF scheduling algorithm is provably optimal, in that it gives the minimum average waiting time for a given set of processes. If two processes have the same length of next CPU burst, FCFS scheduling is used to break the tie.

3. (b)
i) If a process issues an I/O command, is suspended awaiting the result, and is then swapped out prior to the beginning of the operation, the process is blocked waiting on the I/O event and the I/O operation is blocked waiting for the process to be swapped in.
ii) To avoid this deadlock, the user memory involved in the I/O operation must be locked in main memory immediately after the I/O request is issued, even though the I/O operation is queued and may not be executed for some time.
iii) The same consideration applies to an output operation. If a block is being transferred from a user process area directly to an I/O module, the process is blocked during the transfer and may not be swapped out.
iv) It is sometimes convenient to perform input transfers in advance of requests being made and to perform output transfers some time after the request is made. This technique is known as buffering.

Single Buffer
i) When a user process issues an I/O request, the OS assigns a buffer in the system portion of main memory to the operation.
ii) For block-oriented devices, input transfers are made to the system buffer. When the transfer is complete, the process moves the block into user space and immediately requests another block.
Fig.: I/O buffering schemes (input)

iii) Similar considerations apply to block-oriented output. When data are being transmitted to a device, they are first copied from user space into the system buffer, from which they will ultimately be written. The requesting process is then free to continue, or to be swapped out as necessary.
iv) For stream-oriented I/O, the single buffering scheme can be used in a line-at-a-time fashion or a byte-at-a-time fashion.
v) This approach provides a speedup compared to the lack of system buffering: the user process can be processing one block of data while the next block is being read in.

Double Buffer
i) One can assign two system buffers to the operation. A process now transfers data to (or from) one buffer while the OS empties (or fills) the other. This technique is known as double buffering.

Circular Buffer
i) Double buffering may be inadequate if the process performs rapid bursts of I/O. The problem can often be alleviated by using more than two buffers.
ii) When more than two buffers are used, the collection of buffers is itself referred to as a circular buffer, with each individual buffer being one unit in the circular buffer.

4. (a) Necessary Conditions
A deadlock situation can arise if and only if the following four conditions hold simultaneously in a system.

Mutual Exclusion
The mutual exclusion condition must hold for non-sharable types of resources. For example, several processes cannot simultaneously share a printer. Sharable resources, on the other hand, do not require mutually exclusive access and thus cannot be involved in a deadlock. Read-only files are a good example of a sharable resource: if several processes attempt to open a read-only file at the same time, they can be granted simultaneous access to the file. A process never needs to wait for a sharable resource. In general, however, it is not possible to prevent deadlocks by denying the mutual-exclusion condition, because some resources are intrinsically non-sharable.

Hold and Wait
In order to ensure that the hold-and-wait condition never holds in the system, we must guarantee that whenever a process requests a resource, it does not hold any other resources. One protocol that can be used requires each process to request and be allocated all of its resources before it begins execution. This provision can be implemented by requiring that system calls requesting resources for a process precede all other system calls.

No Preemption
The third necessary condition is that there be no preemption of resources that have already been allocated. To ensure that this condition does not hold, the following protocol can be used: if a process that is holding some resources requests another resource that cannot be

immediately allocated to it (that is, the process must wait), then all resources currently being held by it are preempted. That is, these resources are implicitly released. The preempted resources are added to the list of resources for which the process is waiting. The process will be restarted only when it can regain its old resources as well as the new ones that it is requesting.
Alternatively, if a process requests some resources, we first check whether they are available. If so, we allocate them. If they are not available, we check whether they are allocated to some other process that is waiting for additional resources; if so, we preempt the desired resources from that waiting process and allocate them to the requesting process. If the resources are neither available nor held by a waiting process, the requesting process must wait. While it is waiting, some of its resources may be preempted, but only if another process requests them. A process can be restarted only when it is allocated the new resources it is requesting and recovers any resources that were preempted while it was waiting.

Circular Wait
In order to ensure that the circular-wait condition never holds, we may impose a total ordering of all resource types. That is, we assign to each resource type a unique integer number, which allows us to compare two resources and determine whether one precedes another in our ordering. More formally, let R = {r1, r2, ..., rm} be the set of resource types. We can define a one-to-one function F : R -> N, where N is the set of natural numbers. For example, if the set of resource types R includes disk drives, tape drives, card readers and printers, then F might be defined as follows:
F(card reader) = 1
F(disk drive) = 5
F(tape drive) = 7
F(printer) = 12

Deadlock Avoidance
Deadlock prevention algorithms, as discussed above, prevent deadlocks by restraining how requests can be made. The restraints ensure that at least one of the necessary conditions for deadlock cannot occur and, hence, that deadlock cannot occur. A side effect of preventing deadlocks by this method, however, is possibly low device utilization and reduced system throughput.
More formally, a system is in a safe state if there exists a safe sequence. A sequence of processes <p1, p2, ..., pn> is a safe sequence for the current allocation state if, for each pi, the resources which pi can still request can be satisfied by the currently available resources plus the resources held by all the pj with j < i. In this situation, if the resource need of process pi is not immediately available, then pi could wait until all pj have finished. When they have finished, pi can obtain all of its needed resources, complete its designated task, return its allocated resources, and terminate. When pi terminates, pi+1 can obtain its needed resources, and so on. If no such sequence exists, then the system state is said to be unsafe.
A safe state is not a deadlock state, and a deadlock state is an unsafe state. Not all unsafe states are deadlocks, however; an unsafe state may lead to a deadlock. As long as the state is safe, the operating system can avoid unsafe (and deadlock) states. In an unsafe state, the operating system cannot prevent processes from requesting resources in such a way that a deadlock occurs: the behaviour of the processes controls unsafe states.
To illustrate, consider a system with twelve magnetic tape drives and three processes: p0, p1 and p2. Process p0 requires ten tape drives, process p1 may need as many as four, and process p2 may need up to nine tape drives.
Suppose that at time t0, process p0 is holding five tape drives, process p1 is holding two, and process p2 is holding two. (Thus, there are three free tape drives.)
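A sketch (not part of the original answer) of the safe-sequence check for the tape-drive example above: 12 drives, allocations 5/2/2, maximum claims 10/4/9, hence remaining needs 5/2/7 with 3 drives free. The loop repeatedly picks any process whose remaining need fits in the available drives; for this state it finds the safe sequence <p1, p0, p2>.

/* Sketch: single-resource safe-state check for the tape-drive example. */
#include <stdio.h>

#define NPROC 3

int main(void)
{
    int alloc[NPROC]   = {5, 2, 2};        /* drives held by p0, p1, p2 at time t0 */
    int maxneed[NPROC] = {10, 4, 9};       /* maximum claims */
    int available = 12 - (5 + 2 + 2);      /* 3 free drives */
    int finished[NPROC] = {0, 0, 0};
    int order[NPROC], found = 0;

    /* A process whose remaining need fits in the available drives can run to
       completion and then return everything it holds. */
    while (found < NPROC) {
        int progressed = 0;
        for (int i = 0; i < NPROC; i++) {
            if (!finished[i] && maxneed[i] - alloc[i] <= available) {
                available += alloc[i];
                finished[i] = 1;
                order[found++] = i;
                progressed = 1;
            }
        }
        if (!progressed) break;            /* no process can proceed: unsafe */
    }

    if (found == NPROC) {
        printf("safe sequence:");
        for (int i = 0; i < NPROC; i++) printf(" p%d", order[i]);
        printf("\n");                      /* prints: safe sequence: p1 p0 p2 */
    } else {
        printf("state is unsafe\n");
    }
    return 0;
}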

4. (b) MESSAGE PASSING
This method of interprocess communication uses two primitives, SEND and RECEIVE, which, like semaphores and unlike monitors, are system calls rather than language constructs. As such, they can easily be put into library procedures, such as
send(destination, &message);
and
receive(source, &message);
The former sends a message to a given destination and the latter receives a message from a given source (or from ANY, if the receiver does not care). If no message is available, the receiver can block until one arrives.

Design Issues for Message Passing Systems
Message passing systems have many challenging problems and design issues that do not arise with semaphores or monitors, especially if the communicating processes are on different machines connected by a network. For example, the network can lose messages. To guard against lost messages, the sender and receiver can agree that as soon as a message has been received, the receiver will send back a special acknowledgement message. If the sender has not received the acknowledgement within a certain time interval, it retransmits the message.
Message systems also have to deal with the question of how processes are named, so that the process specified in a SEND or RECEIVE call is unambiguous. Often a naming scheme such as process@machine or machine:process is used. If the number of machines is very large, and there is no central authority that allocates machine names, it may happen that two organizations give their machines the same name. The problem of conflicts can be reduced considerably by grouping machines into domains and then addressing processes as process@machine.domain. In this scheme there is no problem if two machines have the same name, provided that they are in different domains. The domain names themselves must be unique.
For the producer-consumer problem, both the producer and the consumer would create mailboxes large enough to hold N messages. The producer would send messages containing data to the consumer's mailbox, and the consumer would send empty messages to the producer's mailbox. When mailboxes are used, the buffering mechanism is clear: the destination mailbox holds messages that have been sent to the destination process but have not yet been accepted.
The other extreme from having mailboxes is to eliminate all buffering. When this approach is followed, if the SEND is done before the RECEIVE, the sending process is blocked until the RECEIVE happens, at which time the message can be copied directly from the sender to the receiver, with no intermediate buffering. Similarly, if the RECEIVE is done first, the receiver is blocked until a SEND happens. This strategy is often known as a rendezvous. It is easier to implement than a buffered message scheme, but it is less flexible, since the sender and receiver are forced to run in lockstep.

#include "prototypes.h"

#define N 100                            /* number of slots in the buffer */
#define MSIZE 4                          /* message size */
typedef int message[MSIZE];

void producer(void)
{
    int item;
    message m;                           /* message buffer */

    while (TRUE) {
        produce_item(&item);             /* generate something to put in buffer */
        receive(consumer, &m);           /* wait for an empty to arrive */
        build_message(&m, item);         /* construct a message to send */
        send(consumer, &m);              /* send item to consumer */
    }
}

void consumer(void)
{
    int item, i;
    message m;                           /* message buffer */

    for (i = 0; i < N; i++) send(producer, &m);   /* send N empties */
    while (TRUE) {
        receive(producer, &m);           /* get message containing item */
        extract_item(&m, &item);         /* take item out of message */
        send(producer, &m);              /* send back empty reply */
        consume_item(item);              /* do something with item */
    }
}

5. (a) Block Diagram of RT System
Fig.: Block diagram of a generic real-time control system.
An RTOS is a real-time operating system. Its important features are:
The necessary signalling functions between interrupt routines and task code are handled by the RTOS.
It works as an independent system with no internal or external interdependencies.
There are no loop decisions in the RTOS.
The RTOS can suspend one task-code subroutine in the middle in order to run another; the time lag is very small compared to other systems.
There are no random time variables; this gives a direct relationship between instruction and process.
Tasks are simpler to write: under most RTOSs, tasks are simply subroutines.
Fast context switch.
Small size.
Ability to respond to external interrupts quickly.
Multitasking, with interprocess communication tools such as semaphores, signals and events.
Use of special sequential files that can accumulate data at a fast rate.
Preemptive scheduling based on priority.
Minimization of intervals during which interrupts are disabled.
Ability to delay tasks for a fixed amount of time.
Special alarms and timeouts.

Characteristics of Real-Time Operating Systems
Deterministic : Operations are performed at fixed, predetermined times or within predetermined time intervals. Determinism is concerned with how long the operating system delays before acknowledging an interrupt.
Responsiveness : How long, after acknowledgment, it takes the operating system to service the interrupt. This includes the amount of time needed to begin execution of the interrupt handler and the amount of time needed to perform the interrupt handling itself.
User control : The user specifies priorities, paging behaviour, which processes must always reside in main memory, the disk algorithms to use, and the rights of processes.
Reliability : Degradation of performance may have catastrophic consequences. The system attempts either to correct the problem or to minimize its effects while continuing to run, so that the most critical, high-priority tasks still execute.
Fail-soft operation : The ability of a system to fail in such a way as to preserve as much capability and data as possible. The RTOS tries to correct the problem or minimize its effects while continuing to run. The RTOS is stable, i.e. it will meet the deadlines of its most critical, highest-priority tasks, even if some less critical task deadlines are not always met.

5. (b) (i) File Structure
The Pile : The least complicated form of file organization may be termed the pile. Data are collected in the order in which they arrive. Each record consists of one burst of data. The purpose of the pile is simply to accumulate the mass of data and save it.
The Sequential File : The most common form of file structure is the sequential file. In this type of file, a fixed format is used for records. All records are of the same length, consisting of the same number of fixed-length fields in a particular order. One particular field, usually the first field in each record, is referred to as the key field. The key field uniquely identifies the record; thus key values for different records are always different.
The Indexed Sequential File : The indexed sequential file maintains the key characteristic of the sequential file: records are organized in sequence based on a key field. Two features are added: an index to the file to support random access, and an overflow file. The index provides a lookup capability to reach quickly the vicinity of a desired record. The overflow file is similar to the log file used with a sequential file but is integrated, so that records in the overflow file are located by following a pointer from their predecessor record.
The Indexed File : The indexed sequential file retains one limitation of the sequential file: effective processing is limited to that which is based on a single field of the file. When it is necessary to search for a record on the basis of some attribute other than the key field, both forms of sequential file are inadequate. For some applications, this flexibility is desirable.

To achieve this flexibility, a structure is needed that employs multiple indexes, one for each type of field that may be the subject of a search. Two types of indexes are used. An exhaustive index contains one entry for every record in the main file; the index itself is organized as a sequential file for ease of searching. A partial index contains entries only for records in which the field of interest exists.
The Direct or Hashed File : The direct, or hashed, file exploits the capability found on disks to access directly any block of a known address. There is no concept of sequential ordering here. The direct file makes use of hashing on the key value. Direct files are often used where very rapid access is required, where fixed-length records are used, and where records are always accessed one at a time. Examples are directories, pricing tables, schedules and name lists.

(ii) File Operations
Files exist to store information and allow it to be retrieved later. Different systems provide different operations to allow storage and retrieval. The commonly used file operations are:
1) CREATE : Space in the file system must be found for the file, and an entry for the new file must be made in the directory. The directory entry records the name of the file and its location in the file system.
2) DELETE : Search the directory for the named file; having found the associated directory entry, release all the file space and invalidate the directory entry.
3) WRITE : A system call is made specifying both the name of the file and the information to be written to the file. Given the name of the file, the system searches the directory to find the location of the file. The directory entry will need to store a pointer to the current end of the file. Using this pointer, the address of the next block can be computed and the information can be written. The write pointer must then be updated; in this way, successive writes write a sequence of blocks to the file.
4) READ : A system call specifies the name of the file and where (in memory) the next block of the file should be put. Again, the directory is searched for the associated directory entry, and again, the directory entry will need a pointer to the next block to be read. Once the block is read, the pointer is updated.
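For illustration only (assuming a POSIX system; the file name and text are arbitrary), the four operations above can be expressed with the usual system calls:

/* Sketch: CREATE, WRITE, READ and DELETE expressed as POSIX calls.
   open(..., O_CREAT) makes the directory entry, write/read move data at the
   file pointer, and unlink removes the entry and frees the file's space. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[32];

    int fd = open("demo.txt", O_CREAT | O_RDWR | O_TRUNC, 0644);  /* CREATE */
    if (fd < 0) { perror("open"); return 1; }

    write(fd, "hello file system\n", 18);        /* WRITE: advances the write pointer */

    lseek(fd, 0, SEEK_SET);                      /* rewind before reading back */
    ssize_t n = read(fd, buf, sizeof(buf) - 1);  /* READ: next block into memory */
    if (n > 0) { buf[n] = '\0'; printf("read back: %s", buf); }

    close(fd);
    unlink("demo.txt");                          /* DELETE: remove the directory entry */
    return 0;
}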

(iii) File Access
Access methods: 1) sequential access, 2) direct access, 3) other access methods.

1) Sequential Access
The bulk of the operations on a file are reads and writes. A read reads the next portion of the file and automatically advances the file pointer. Similarly, a write appends to the end of the file and advances the pointer to the end of the newly written material. Such a file can be rewound, and on some systems a program may be able to skip forward or backward n records, for some integer n. This scheme is known as sequential access to a file. Sequential access is based upon a tape model of a file.
Fig.: Sequential-access file (reads and writes occur at the current position between the beginning and the end; the file can be rewound).

2) Direct Access
An alternative access method is direct access, which is based upon a disk model of a file. For direct access, the file is viewed as a numbered sequence of blocks or records. A block is generally a fixed-length quantity, defined by the operating system as the minimal positioning unit; the block size depends upon the system. A direct access file allows arbitrary blocks to be read or written. Thus we may read block 14, then read block 53, and then write block 7. There are no restrictions on the order of reading or writing for a direct access file. (A short sketch of this model is given at the end of this answer.)

3) Other Access Methods
Other access methods can be built on top of a direct access method. These additional methods generally involve the construction of an index for the file. The index contains pointers to the various blocks. To find an entry in the file, we first search the index and then use the pointer to access the file directly and find the desired entry. With large files, the index file itself may become too large to be kept in memory. One solution is then to create an index for the index file: the primary index file contains pointers to secondary index files, which in turn point to the actual data items.

(iv) File Types
The information in a file is defined by its creator. Many different types of information may be stored in a file: source programs, object programs, numeric data, text, payroll records and so on. A file has a certain defined structure according to its use.
A text file is a sequence of characters organized into lines (and possibly pages).
A source file is a sequence of subroutines and functions, each of which is further organized as declarations followed by executable statements.
An object file is a sequence of words organized into loader record blocks.
UNIX and MS-DOS have: regular files, which contain user information; directories, which are system files for maintaining the structure of the file system; character special files, which are related to input/output and are used to model serial I/O devices such as terminals, printers and networks; and block special files, which are used to model disks.
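The sketch below illustrates the direct-access model of (2) above (POSIX is used purely as an illustration; the 512-byte block size and the file name are assumptions): any block k can be read by seeking to k * block_size without touching the blocks before it.

/* Sketch: direct (random) access -- read block k of a file directly. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

#define BLOCK_SIZE 512

/* Read logical block k of the open file fd into buf (must hold BLOCK_SIZE bytes). */
ssize_t read_block(int fd, long k, char *buf)
{
    if (lseek(fd, k * BLOCK_SIZE, SEEK_SET) == (off_t)-1)   /* jump straight to block k */
        return -1;
    return read(fd, buf, BLOCK_SIZE);
}

int main(void)
{
    char buf[BLOCK_SIZE];
    int fd = open("records.dat", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* No ordering restriction: block 14, then 53, then 7, as in the text. */
    read_block(fd, 14, buf);
    read_block(fd, 53, buf);
    read_block(fd, 7, buf);

    close(fd);
    return 0;
}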

6. (a) SEMAPHORE
A semaphore can have the value 0, indicating that no wakeups were saved, or some positive value if one or more wakeups are pending. Dijkstra proposed having two operations, DOWN and UP (generalizations of SLEEP and WAKEUP, respectively). The DOWN operation on a semaphore checks to see if the value is greater than 0. If so, it decrements the value (i.e. uses up one stored wakeup) and just continues. If the value is 0, the process is put to sleep. Checking the value, changing it, and possibly going to sleep are all done as a single, indivisible, atomic action. It is guaranteed that once a semaphore operation has started, no other process can access the semaphore until the operation has completed or blocked. This atomicity is absolutely essential to solving synchronization problems and avoiding race conditions.
The UP operation increments the value of the semaphore addressed. If one or more processes were sleeping on that semaphore, unable to complete an earlier DOWN operation, one of them is chosen by the system (e.g. at random) and is allowed to complete its DOWN. Thus, after an UP on a semaphore with processes sleeping on it, the semaphore will still be 0, but there will be one fewer process sleeping on it. The operation of incrementing the semaphore and waking up one process is also indivisible. No process ever blocks doing an UP, just as no process ever blocks doing a WAKEUP in the earlier model.
The monitor is a programming-language construct that provides functionality equivalent to that of semaphores and is easier to control. A monitor is a collection of procedures, variables and data structures that are all grouped together in a special kind of module or package. Processes may call the procedures in a monitor whenever they want to, but they cannot directly access the monitor's internal data structures from procedures declared outside the monitor.
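DOWN and UP correspond to wait and signal on a counting semaphore. The following minimal sketch (an illustration, not the paper's code) uses POSIX semaphores, with sem_wait standing in for DOWN and sem_post for UP, to protect a shared counter updated by two threads.

/* Sketch: DOWN/UP expressed with a POSIX counting semaphore used as a mutex. */
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

static sem_t mutex;          /* counting semaphore, initial value 1 */
static int shared_count = 0;

void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);    /* DOWN: decrement, or sleep if the value is 0 */
        shared_count++;      /* critical section */
        sem_post(&mutex);    /* UP: increment and wake one sleeper, if any */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);               /* 0 = shared between threads of this process */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("count = %d\n", shared_count); /* always 200000 with the semaphore in place */
    sem_destroy(&mutex);
    return 0;
}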

6. (b) Direct Memory Access
When the processor wishes to read or write a block of data, it issues a command to the DMA module by sending the DMA module the following information:
i) Whether a read or a write is requested, using the read or write control line between the processor and the DMA module.
ii) The address of the I/O device involved, communicated on the data lines.
iii) The starting location in memory to read from or write to, communicated on the data lines and stored by the DMA module in its address register.
iv) The number of words to be read or written, again communicated via the data lines and stored in the data count register.
Fig.: Typical DMA block diagram
The processor then continues with other work. It has delegated this I/O operation to the DMA module. The DMA module transfers the entire block of data, one word at a time, directly to or from memory, without going through the processor. When the transfer is complete, the DMA module sends an interrupt signal to the processor. Thus, the processor is involved only at the beginning and the end of the transfer.

Disk Read Without DMA
The controller reads the block from the drive serially, bit by bit, until the entire block is in the controller's internal buffer. It then computes the checksum to verify that no read errors have occurred, and causes an interrupt. When the OS starts running, it can read the disk block from the controller's buffer a byte or a word at a time by executing a loop, with each iteration reading one byte or word from a controller device register and storing it in memory. Such a programmed CPU loop, reading the bytes one at a time from the controller, wastes CPU time.

Disk Read With DMA
The CPU gives the controller the number of bytes to transfer, the memory address where the block is to go, and the disk address of the block. After the controller has read the entire block from the device into its buffer and verified the checksum, it copies the first byte or word into main memory at the address specified by the DMA memory address. The controller then increments the DMA address and decrements the DMA count by the number of bytes just transferred. This process is repeated until the DMA count becomes zero, at which time the controller causes an interrupt.
Simple controllers cannot cope with doing input and output at the same time. As a result, such a controller will be able to read only every other block, and reading a complete track will then require two full rotations, one for the even blocks and one for the odd blocks. It may be necessary to read one block and then skip two (or more) blocks if the time to transfer a block from the controller to memory over the bus is longer than the time to read a block from the disk.
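The four pieces of setup information listed at the start of this answer can be pictured as fields written into the DMA module before a transfer starts. The sketch below is a pure simulation for illustration: the structure, the device number and the transfer itself (a memcpy) are all invented, not the interface of any real controller.

/* Sketch: steps i)-iv) modelled as fields of an imaginary DMA module; the
   hardware transfer and the completion interrupt are only simulated. */
#include <stdio.h>
#include <string.h>

enum dma_dir { DMA_READ, DMA_WRITE };   /* i)  read or write request          */

struct dma_module {
    enum dma_dir dir;                   /* i)                                  */
    int          device;                /* ii)  address of the I/O device      */
    void        *mem_addr;              /* iii) starting location in memory    */
    size_t       count;                 /* iv)  number of bytes/words to move  */
};

static char device_data[16] = "block from disk";   /* data the device would supply */

void dma_start(struct dma_module *dma)
{
    /* In hardware this proceeds word by word without the CPU; here we just
       simulate the completed transfer and the completion interrupt. */
    if (dma->dir == DMA_READ)
        memcpy(dma->mem_addr, device_data, dma->count);
    printf("DMA complete: interrupt raised for device %d\n", dma->device);
}

int main(void)
{
    char buffer[16];
    struct dma_module dma = { DMA_READ, 3, buffer, sizeof(buffer) };  /* steps i)-iv) */
    dma_start(&dma);                      /* the CPU would now continue with other work */
    printf("memory now holds: %s\n", buffer);
    return 0;
}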

7. (a) Inodes
All types of UNIX files are administered by the operating system by means of inodes. An inode (information node) is a control structure that contains the key information needed by the operating system for a particular file. Several file names may be associated with a single inode, but an active inode is associated with exactly one file, and each file is controlled by exactly one inode. The attributes of the file, as well as its permissions and other control information, are stored in the inode.

File mode : a 16-bit flag that stores the access and execution permissions associated with the file
  12-15  File type (regular, directory, character or block special, FIFO pipe)
  9-11   Execution flags
  8      Owner read permission
  7      Owner write permission
  6      Owner execute permission
  5      Group read permission
  4      Group write permission
  3      Group execute permission
  2      Other read permission
  1      Other write permission
  0      Other execute permission
Link count : number of directory references to this inode
Owner ID : individual owner of the file
Group ID : group owner associated with this file
File size : number of bytes in the file
File addresses : 39 bytes of address information
Last accessed : time of last file access
Last modified : time of last file modification
Inode modified : time of last inode modification

7. (b) File Structure
File organization refers to the logical structuring of the records as determined by the way in which they are accessed. The physical organization of the file on secondary storage depends on the blocking strategy and the file allocation strategy. In choosing a file organization, several criteria are important:
i) Rapid access
ii) Ease of update
iii) Economy of storage
iv) Simple maintenance
v) Reliability
The relative priority of these criteria will depend on the applications that will use the file. The five organizations are as follows:
i) The pile
ii) The sequential file
iii) The indexed sequential file
iv) The indexed file
v) The direct or hashed file
The Pile : The least complicated form of file organization may be termed the pile. Data are collected in the order in which they arrive. Each record consists of one burst of data. The purpose of the pile is simply to accumulate the mass of data and save it. Because there is no structure to the pile file, record access is by exhaustive search. Pile files are encountered when data are collected and stored prior to processing or when data are not easy to organize. This type of file uses space well when the stored data vary in size and structure.
The Sequential File : The most common form of file structure is the sequential file. In this type of file, a fixed format is used for records. All records are of the same length, consisting of the same number of fixed-length fields in a particular order. One particular field, usually the first field in each record, is referred to as the key field. The key field uniquely identifies the record; thus key values for different records are always different.

The sequential file organization is the only one that is easily stored on tape as well as on disk. For interactive applications that involve queries and/or updates of individual records, the sequential file provides poor performance: access requires a sequential search of the file for a key match.
The Indexed Sequential File : The indexed sequential file maintains the key characteristic of the sequential file: records are organized in sequence based on a key field. Two features are added: an index to the file to support random access, and an overflow file. The index provides a lookup capability to reach quickly the vicinity of a desired record. The overflow file is similar to the log file used with a sequential file but is integrated, so that records in the overflow file are located by following a pointer from their predecessor record.
In the simplest indexed sequential structure, a single level of indexing is used. The index in this case is a simple sequential file. Each record in the index file consists of two fields: a key field, which is the same as the key field in the main file, and a pointer into the main file. To find a specific record, the index is searched to find the highest key value that is equal to or precedes the desired key value. The search continues in the main file at the location indicated by the pointer. The indexed sequential file greatly reduces the time required to access a single record. To provide even greater efficiency in access, multiple levels of indexing can be used.
The Indexed File : The indexed sequential file retains one limitation of the sequential file: effective processing is limited to that which is based on a single field of the file. When it is necessary to search for a record on the basis of some attribute other than the key field, both forms of sequential file are inadequate. For some applications, this flexibility is desirable. To achieve this flexibility, a structure is needed that employs multiple indexes, one for each type of field that may be the subject of a search. Two types of indexes are used. An exhaustive index contains one entry for every record in the main file; the index itself is organized as a sequential file for ease of searching. A partial index contains entries only for records in which the field of interest exists.
The Direct or Hashed File : The direct, or hashed, file exploits the capability found on disks to access directly any block of a known address. There is no concept of sequential ordering here. The direct file makes use of hashing on the key value. Direct files are often used where very rapid access is required, where fixed-length records are used, and where records are always accessed one at a time. Examples are directories, pricing tables, schedules and name lists.

7. (c) Race Condition
Suppose the buffer is empty and the consumer has just read count to see if it is 0. At that moment the scheduler decides to stop running the consumer temporarily and start running the producer. The producer enters an item in the buffer, increments count, and notices that it is now 1. Since count was just 0, the consumer must (it reasons) be sleeping, so the producer calls wakeup to wake the consumer up. The consumer is not yet logically asleep, however, so the wakeup signal is lost. When the consumer next runs, it will test the value of count it previously read, find it to be 0, and go to sleep. Sooner or later the producer will fill up the buffer and also go to sleep. Both will sleep forever.
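The scenario above arises in the classic unsynchronised producer-consumer written with sleep and wakeup. The sketch below is a reconstruction for illustration only (TRUE, sleep, wakeup, produce_item and the other primitives are assumed, in the same style as the listing in 4(b)); the fatal window is between the consumer's test of count and its call to sleep.

#define N 100                              /* number of slots in the buffer */
int count = 0;                             /* number of items in the buffer */

void producer(void)
{
    int item;
    while (TRUE) {
        produce_item(&item);
        if (count == N) sleep();           /* buffer full: go to sleep */
        insert_item(item);
        count = count + 1;
        if (count == 1) wakeup(consumer);  /* buffer was empty: wake the consumer */
    }
}

void consumer(void)
{
    int item;
    while (TRUE) {
        if (count == 0) sleep();           /* RACE: the producer may run and send its
                                              wakeup between this test and the sleep,
                                              so the wakeup is lost */
        remove_item(&item);
        count = count - 1;
        if (count == N - 1) wakeup(producer);  /* buffer was full: wake the producer */
        consume_item(item);
    }
}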
Solution : Algorithm for the producer-consumer problem using event counters
(i) Two event counters are used:
a) in : counts the cumulative number of items that the producer has put into the buffer since the program started running.
b) out : counts the cumulative number of items that the consumer has removed from the buffer so far.
Note : in must be greater than or equal to out, but not by more than the size of the buffer.

(ii) When the producer has computed a new item, it checks to see if there is room in the buffer, using the AWAIT system call. Initially, out = 0 and sequence - N is negative, so the producer does not block. (Here sequence is the number of items the producer has produced so far and N is the number of slots in the buffer.) If the producer generates N + 1 items before the consumer has begun, the AWAIT statement will wait until out becomes 1.

7. (d) Network O.S. vs. Distributed O.S.
1. Network OS: resources are owned by the local nodes. Distributed OS: resources are owned by the global system.
2. Network OS: local resources are managed by the local operating system. Distributed OS: local resources are managed by the global DOS.
3. Network OS: access is performed by the local operating system. Distributed OS: access is performed by the DOS.
4. Network OS: requests pass from one local operating system to another via the NOS. Distributed OS: requests pass directly from node to node via the DOS.
5. Network OS: nodes are highly autonomous. Distributed OS: autonomy is less.


More information

Multiprocessor and Real-Time Scheduling. Chapter 10

Multiprocessor and Real-Time Scheduling. Chapter 10 Multiprocessor and Real-Time Scheduling Chapter 10 1 Roadmap Multiprocessor Scheduling Real-Time Scheduling Linux Scheduling Unix SVR4 Scheduling Windows Scheduling Classifications of Multiprocessor Systems

More information

QUESTION BANK UNIT I

QUESTION BANK UNIT I QUESTION BANK Subject Name: Operating Systems UNIT I 1) Differentiate between tightly coupled systems and loosely coupled systems. 2) Define OS 3) What are the differences between Batch OS and Multiprogramming?

More information

* What are the different states for a task in an OS?

* What are the different states for a task in an OS? * Kernel, Services, Libraries, Application: define the 4 terms, and their roles. The kernel is a computer program that manages input/output requests from software, and translates them into data processing

More information

CSE 421/521 - Operating Systems Fall Lecture - XXV. Final Review. University at Buffalo

CSE 421/521 - Operating Systems Fall Lecture - XXV. Final Review. University at Buffalo CSE 421/521 - Operating Systems Fall 2014 Lecture - XXV Final Review Tevfik Koşar University at Buffalo December 2nd, 2014 1 Final Exam December 4th, Thursday 11:00am - 12:20pm Room: 110 Knox Chapters

More information

Subject: Operating System (BTCOC403) Class: S.Y.B.Tech. (Computer Engineering)

Subject: Operating System (BTCOC403) Class: S.Y.B.Tech. (Computer Engineering) A. Multiple Choice Questions (60 questions) Subject: Operating System (BTCOC403) Class: S.Y.B.Tech. (Computer Engineering) Unit-I 1. What is operating system? a) collection of programs that manages hardware

More information

CSc33200: Operating Systems, CS-CCNY, Fall 2003 Jinzhong Niu December 10, Review

CSc33200: Operating Systems, CS-CCNY, Fall 2003 Jinzhong Niu December 10, Review CSc33200: Operating Systems, CS-CCNY, Fall 2003 Jinzhong Niu December 10, 2003 Review 1 Overview 1.1 The definition, objectives and evolution of operating system An operating system exploits and manages

More information

Review. Preview. Three Level Scheduler. Scheduler. Process behavior. Effective CPU Scheduler is essential. Process Scheduling

Review. Preview. Three Level Scheduler. Scheduler. Process behavior. Effective CPU Scheduler is essential. Process Scheduling Review Preview Mutual Exclusion Solutions with Busy Waiting Test and Set Lock Priority Inversion problem with busy waiting Mutual Exclusion with Sleep and Wakeup The Producer-Consumer Problem Race Condition

More information

Remaining Contemplation Questions

Remaining Contemplation Questions Process Synchronisation Remaining Contemplation Questions 1. The first known correct software solution to the critical-section problem for two processes was developed by Dekker. The two processes, P0 and

More information

Operating Systems Comprehensive Exam. Spring Student ID # 3/16/2006

Operating Systems Comprehensive Exam. Spring Student ID # 3/16/2006 Operating Systems Comprehensive Exam Spring 2006 Student ID # 3/16/2006 You must complete all of part I (60%) You must complete two of the three sections in part II (20% each) In Part I, circle or select

More information

COMP SCI 3SH3: Operating System Concepts (Term 2 Winter 2006) Test 2 February 27, 2006; Time: 50 Minutes ;. Questions Instructor: Dr.

COMP SCI 3SH3: Operating System Concepts (Term 2 Winter 2006) Test 2 February 27, 2006; Time: 50 Minutes ;. Questions Instructor: Dr. COMP SCI 3SH3: Operating System Concepts (Term 2 Winter 2006) Test 2 February 27, 2006; Time: 50 Minutes ;. Questions Instructor: Dr. Kamran Sartipi Name: Student ID: Question 1 (Disk Block Allocation):

More information

(b) External fragmentation can happen in a virtual memory paging system.

(b) External fragmentation can happen in a virtual memory paging system. Alexandria University Faculty of Engineering Electrical Engineering - Communications Spring 2015 Final Exam CS333: Operating Systems Wednesday, June 17, 2015 Allowed Time: 3 Hours Maximum: 75 points Note:

More information

Scheduling. The Basics

Scheduling. The Basics The Basics refers to a set of policies and mechanisms to control the order of work to be performed by a computer system. Of all the resources in a computer system that are scheduled before use, the CPU

More information

CPU scheduling. Alternating sequence of CPU and I/O bursts. P a g e 31

CPU scheduling. Alternating sequence of CPU and I/O bursts. P a g e 31 CPU scheduling CPU scheduling is the basis of multiprogrammed operating systems. By switching the CPU among processes, the operating system can make the computer more productive. In a single-processor

More information

Disks and I/O Hakan Uraz - File Organization 1

Disks and I/O Hakan Uraz - File Organization 1 Disks and I/O 2006 Hakan Uraz - File Organization 1 Disk Drive 2006 Hakan Uraz - File Organization 2 Tracks and Sectors on Disk Surface 2006 Hakan Uraz - File Organization 3 A Set of Cylinders on Disk

More information

CHAPTER 2: PROCESS MANAGEMENT

CHAPTER 2: PROCESS MANAGEMENT 1 CHAPTER 2: PROCESS MANAGEMENT Slides by: Ms. Shree Jaswal TOPICS TO BE COVERED Process description: Process, Process States, Process Control Block (PCB), Threads, Thread management. Process Scheduling:

More information

OPERATING SYSTEMS CS3502 Spring Processor Scheduling. Chapter 5

OPERATING SYSTEMS CS3502 Spring Processor Scheduling. Chapter 5 OPERATING SYSTEMS CS3502 Spring 2018 Processor Scheduling Chapter 5 Goals of Processor Scheduling Scheduling is the sharing of the CPU among the processes in the ready queue The critical activities are:

More information

Lecture 2 Process Management

Lecture 2 Process Management Lecture 2 Process Management Process Concept An operating system executes a variety of programs: Batch system jobs Time-shared systems user programs or tasks The terms job and process may be interchangeable

More information

FORTH SEMESTER DIPLOMA EXAMINATION IN ENGINEERING/ TECHNOLIGY- OCTOBER, 2012

FORTH SEMESTER DIPLOMA EXAMINATION IN ENGINEERING/ TECHNOLIGY- OCTOBER, 2012 TED (10)-3071 (REVISION-2010) Reg. No.. Signature. FORTH SEMESTER DIPLOMA EXAMINATION IN ENGINEERING/ TECHNOLIGY- OCTOBER, 2012 OPERATING SYSTEM (Common to CT and IF) (Maximum marks: 100) [Time: 3 hours

More information

Course Syllabus. Operating Systems

Course Syllabus. Operating Systems Course Syllabus. Introduction - History; Views; Concepts; Structure 2. Process Management - Processes; State + Resources; Threads; Unix implementation of Processes 3. Scheduling Paradigms; Unix; Modeling

More information

DATA STRUCTURES USING C

DATA STRUCTURES USING C DATA STRUCTURES USING C File Management Chapter 9 2 File Concept Contiguous logical address space Types: Data numeric character binary Program 3 File Attributes Name the only information kept in human-readable

More information

INSTITUTE OF AERONAUTICAL ENGINEERING (Autonomous) Dundigal, Hyderabad

INSTITUTE OF AERONAUTICAL ENGINEERING (Autonomous) Dundigal, Hyderabad INSTITUTE OF AERONAUTICAL ENGINEERING (Autonomous) Dundigal, Hyderabad - 500 043 COMPUTER SCIENCE AND ENGINEERING DEFINITIONS AND TERMINOLOGY Course Name : OPERATING SYSTEMS Course Code : ACS007 Program

More information

CPU Scheduling: Objectives

CPU Scheduling: Objectives CPU Scheduling: Objectives CPU scheduling, the basis for multiprogrammed operating systems CPU-scheduling algorithms Evaluation criteria for selecting a CPU-scheduling algorithm for a particular system

More information

Operating system Dr. Shroouq J.

Operating system Dr. Shroouq J. 2.2.2 DMA Structure In a simple terminal-input driver, when a line is to be read from the terminal, the first character typed is sent to the computer. When that character is received, the asynchronous-communication

More information

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING Operating System Third Year CSE( Sem:I) 2 marks Questions and Answers UNIT I

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING Operating System Third Year CSE( Sem:I) 2 marks Questions and Answers UNIT I DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING Operating System Third Year CSE( Sem:I) 2 marks Questions and Answers UNIT I 1. What is an Operating system? An operating system is a program that manages

More information

Multiprocessor and Real- Time Scheduling. Chapter 10

Multiprocessor and Real- Time Scheduling. Chapter 10 Multiprocessor and Real- Time Scheduling Chapter 10 Classifications of Multiprocessor Loosely coupled multiprocessor each processor has its own memory and I/O channels Functionally specialized processors

More information

Process behavior. Categories of scheduling algorithms.

Process behavior. Categories of scheduling algorithms. Week 5 When a computer is multiprogrammed, it frequently has multiple processes competing for CPU at the same time. This situation occurs whenever two or more processes are simultaneously in the ready

More information

Operating Systems. Figure: Process States. 1 P a g e

Operating Systems. Figure: Process States. 1 P a g e 1. THE PROCESS CONCEPT A. The Process: A process is a program in execution. A process is more than the program code, which is sometimes known as the text section. It also includes the current activity,

More information

FCM 710: Architecture of Secure Operating Systems

FCM 710: Architecture of Secure Operating Systems FCM 710: Architecture of Secure Operating Systems Practice Exam, Spring 2010 Email your answer to ssengupta@jjay.cuny.edu March 16, 2010 Instructor: Shamik Sengupta Multiple-Choice 1. operating systems

More information

Operating System Study Notes Department of Computer science and Engineering Prepared by TKG, SM and MS

Operating System Study Notes Department of Computer science and Engineering Prepared by TKG, SM and MS Operating System Study Notes Department of Computer science and Engineering Prepared by TKG, SM and MS Chapter1: Introduction of Operating System An operating system acts as an intermediary between the

More information

SPOS MODEL ANSWER MAY 2018

SPOS MODEL ANSWER MAY 2018 SPOS MODEL ANSWER MAY 2018 Q 1. a ) Write Algorithm of pass I of two pass assembler. [5] Ans :- begin if starting address is given LOCCTR = starting address; else LOCCTR = 0; while OPCODE!= END do ;; or

More information

Computer Hardware and System Software Concepts

Computer Hardware and System Software Concepts Computer Hardware and System Software Concepts Introduction to concepts of Operating System (Process & File Management) Welcome to this course on Computer Hardware and System Software Concepts 1 RoadMap

More information

MYcsvtu Notes. Unit - 1

MYcsvtu Notes. Unit - 1 Unit - 1 An Operating system is a program that manages the computer hardware. It also provides a basis for application programs and acts as an intermediary between a user of a computer and the computer

More information

Q1. What is Deadlock? Explain essential conditions for deadlock to occur?

Q1. What is Deadlock? Explain essential conditions for deadlock to occur? II nd Midterm session 2017-18 Subject: Operating System ( V CSE-B ) Q1. What is Deadlock? Explain essential conditions for deadlock to occur? In a multiprogramming environment, several processes may compete

More information

Chapter 5. File and Memory Management

Chapter 5. File and Memory Management K. K. Wagh Polytechnic, Nashik Department: Information Technology Class: TYIF Sem: 5G System Subject: Operating Name of Staff: Suyog S.Dhoot Chapter 5. File and Memory Management A. Define file and explain

More information

PESIT Bangalore South Campus

PESIT Bangalore South Campus INTERNAL ASSESSMENT TEST II Date: 04/04/2018 Max Marks: 40 Subject & Code: Operating Systems 15CS64 Semester: VI (A & B) Name of the faculty: Mrs.Sharmila Banu.A Time: 8.30 am 10.00 am Answer any FIVE

More information

Operating Systems. Lecture 4 - Concurrency and Synchronization. Master of Computer Science PUF - Hồ Chí Minh 2016/2017

Operating Systems. Lecture 4 - Concurrency and Synchronization. Master of Computer Science PUF - Hồ Chí Minh 2016/2017 Operating Systems Lecture 4 - Concurrency and Synchronization Adrien Krähenbühl Master of Computer Science PUF - Hồ Chí Minh 2016/2017 Mutual exclusion Hardware solutions Semaphores IPC: Message passing

More information

File Management. Marc s s first try, Please don t t sue me.

File Management. Marc s s first try, Please don t t sue me. File Management Marc s s first try, Please don t t sue me. Introduction Files Long-term existence Can be temporally decoupled from applications Sharable between processes Can be structured to the task

More information

Operating Systems Unit 3

Operating Systems Unit 3 Unit 3 CPU Scheduling Algorithms Structure 3.1 Introduction Objectives 3.2 Basic Concepts of Scheduling. CPU-I/O Burst Cycle. CPU Scheduler. Preemptive/non preemptive scheduling. Dispatcher Scheduling

More information

CSE120 Principles of Operating Systems. Prof Yuanyuan (YY) Zhou Scheduling

CSE120 Principles of Operating Systems. Prof Yuanyuan (YY) Zhou Scheduling CSE120 Principles of Operating Systems Prof Yuanyuan (YY) Zhou Scheduling Announcement l Homework 2 due on October 26th l Project 1 due on October 27th 2 Scheduling Overview l In discussing process management

More information

Chapter 3 Processes. Process Concept. Process Concept. Process Concept (Cont.) Process Concept (Cont.) Process Concept (Cont.)

Chapter 3 Processes. Process Concept. Process Concept. Process Concept (Cont.) Process Concept (Cont.) Process Concept (Cont.) Process Concept Chapter 3 Processes Computers can do several activities at a time Executing user programs, reading from disks writing to a printer, etc. In multiprogramming: CPU switches from program to

More information

CS370: System Architecture & Software [Fall 2014] Dept. Of Computer Science, Colorado State University

CS370: System Architecture & Software [Fall 2014] Dept. Of Computer Science, Colorado State University Frequently asked questions from the previous class survey CS 370: SYSTEM ARCHITECTURE & SOFTWARE [CPU SCHEDULING] Shrideep Pallickara Computer Science Colorado State University OpenMP compiler directives

More information

Topic 4 Scheduling. The objective of multi-programming is to have some process running at all times, to maximize CPU utilization.

Topic 4 Scheduling. The objective of multi-programming is to have some process running at all times, to maximize CPU utilization. Topic 4 Scheduling The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization. The objective of time sharing is to switch the CPU among processes so frequently.

More information

Multitasking / Multithreading system Supports multiple tasks

Multitasking / Multithreading system Supports multiple tasks Tasks and Intertask Communication Introduction Multitasking / Multithreading system Supports multiple tasks As we ve noted Important job in multitasking system Exchanging data between tasks Synchronizing

More information

CS6401- OPERATING SYSTEM

CS6401- OPERATING SYSTEM 1. What is an Operating system? CS6401- OPERATING SYSTEM QUESTION BANK UNIT-I An operating system is a program that manages the computer hardware. It also provides a basis for application programs and

More information

Chapter 5: CPU Scheduling. Operating System Concepts 8 th Edition,

Chapter 5: CPU Scheduling. Operating System Concepts 8 th Edition, Chapter 5: CPU Scheduling Operating System Concepts 8 th Edition, Hanbat National Univ. Computer Eng. Dept. Y.J.Kim 2009 Chapter 5: Process Scheduling Basic Concepts Scheduling Criteria Scheduling Algorithms

More information

CSI3131 Final Exam Review

CSI3131 Final Exam Review CSI3131 Final Exam Review Final Exam: When: April 24, 2015 2:00 PM Where: SMD 425 File Systems I/O Hard Drive Virtual Memory Swap Memory Storage and I/O Introduction CSI3131 Topics Process Computing Systems

More information

ROEVER ENGINEERING COLLEGE, PERAMBALUR DEPARTMENT OF INFORMATION TECHNOLOGY OPERATING SYSTEMS QUESTION BANK UNIT-I

ROEVER ENGINEERING COLLEGE, PERAMBALUR DEPARTMENT OF INFORMATION TECHNOLOGY OPERATING SYSTEMS QUESTION BANK UNIT-I systems are based on time-sharing systems ROEVER ENGINEERING COLLEGE, PERAMBALUR DEPARTMENT OF INFORMATION TECHNOLOGY OPERATING SYSTEMS QUESTION BANK UNIT-I 1 What is an operating system? An operating

More information

Following are a few basic questions that cover the essentials of OS:

Following are a few basic questions that cover the essentials of OS: Operating Systems Following are a few basic questions that cover the essentials of OS: 1. Explain the concept of Reentrancy. It is a useful, memory-saving technique for multiprogrammed timesharing systems.

More information

Some popular Operating Systems include Linux, Unix, Windows, MS-DOS, Android, etc.

Some popular Operating Systems include Linux, Unix, Windows, MS-DOS, Android, etc. 1.1 Operating System Definition An Operating System (OS) is an interface between a computer user and computer hardware. An operating system is a software which performs all the basic tasks like file management,

More information

CPU Scheduling. CSE 2431: Introduction to Operating Systems Reading: Chapter 6, [OSC] (except Sections )

CPU Scheduling. CSE 2431: Introduction to Operating Systems Reading: Chapter 6, [OSC] (except Sections ) CPU Scheduling CSE 2431: Introduction to Operating Systems Reading: Chapter 6, [OSC] (except Sections 6.7.2 6.8) 1 Contents Why Scheduling? Basic Concepts of Scheduling Scheduling Criteria A Basic Scheduling

More information

Midterm Exam #2 Solutions October 25, 2016 CS162 Operating Systems

Midterm Exam #2 Solutions October 25, 2016 CS162 Operating Systems University of California, Berkeley College of Engineering Computer Science Division EECS all 2016 Anthony D. Joseph Midterm Exam #2 Solutions October 25, 2016 CS162 Operating Systems Your Name: SID AND

More information

Operating System - Overview

Operating System - Overview Unit 37. Operating System Operating System - Overview An Operating System (OS) is an interface between a computer user and computer hardware. An operating system is a software which performs all the basic

More information

INSTITUTE OF AERONAUTICAL ENGINEERING (Autonomous) Dundigal, Hyderabad

INSTITUTE OF AERONAUTICAL ENGINEERING (Autonomous) Dundigal, Hyderabad INSTITUTE OF AERONAUTICAL ENGINEERING (Autonomous) Dundigal, Hyderabad - 500 043 INFORMATION TECHNOLOGY TUTORIAL QUESTION BANK Course Name Course Code Class Branch OPERATING SYSTEMS ACS007 IV Semester

More information

Unit In a time - sharing operating system, when the time slot given to a process is completed, the process goes from the RUNNING state to the

Unit In a time - sharing operating system, when the time slot given to a process is completed, the process goes from the RUNNING state to the Unit - 5 1. In a time - sharing operating system, when the time slot given to a process is completed, the process goes from the RUNNING state to the (A) BLOCKED state (B) READY state (C) SUSPENDED state

More information

UNIT 2 PROCESSES 2.0 INTRODUCTION

UNIT 2 PROCESSES 2.0 INTRODUCTION UNIT 2 PROCESSES Processes Structure Page Nos. 2.0 Introduction 25 2.1 Objectives 26 2.2 The Concept of Process 26 2.2.1 Implicit and Explicit Tasking 2.2.2 Processes Relationship 2.2.3 Process States

More information

CHAPTER 6: PROCESS SYNCHRONIZATION

CHAPTER 6: PROCESS SYNCHRONIZATION CHAPTER 6: PROCESS SYNCHRONIZATION The slides do not contain all the information and cannot be treated as a study material for Operating System. Please refer the text book for exams. TOPICS Background

More information

Module 3. DEADLOCK AND STARVATION

Module 3. DEADLOCK AND STARVATION This document can be downloaded from www.chetanahegde.in with most recent updates. 1 Module 3. DEADLOCK AND STARVATION 3.1 PRINCIPLES OF DEADLOCK Deadlock can be defined as the permanent blocking of a

More information

Threads. Threads The Thread Model (1) CSCE 351: Operating System Kernels Witawas Srisa-an Chapter 4-5

Threads. Threads The Thread Model (1) CSCE 351: Operating System Kernels Witawas Srisa-an Chapter 4-5 Threads CSCE 351: Operating System Kernels Witawas Srisa-an Chapter 4-5 1 Threads The Thread Model (1) (a) Three processes each with one thread (b) One process with three threads 2 1 The Thread Model (2)

More information

File Management. Chapter 12

File Management. Chapter 12 File Management Chapter 12 Files Used for: input to a program Program output saved for long-term storage Terms Used with Files Field basic element of data contains a single value characterized by its length

More information

CPU Scheduling. Daniel Mosse. (Most slides are from Sherif Khattab and Silberschatz, Galvin and Gagne 2013)

CPU Scheduling. Daniel Mosse. (Most slides are from Sherif Khattab and Silberschatz, Galvin and Gagne 2013) CPU Scheduling Daniel Mosse (Most slides are from Sherif Khattab and Silberschatz, Galvin and Gagne 2013) Basic Concepts Maximum CPU utilization obtained with multiprogramming CPU I/O Burst Cycle Process

More information

CS604 - Operating System Solved Subjective Midterm Papers For Midterm Exam Preparation

CS604 - Operating System Solved Subjective Midterm Papers For Midterm Exam Preparation CS604 - Operating System Solved Subjective Midterm Papers For Midterm Exam Preparation The given code is as following; boolean flag[2]; int turn; do { flag[i]=true; turn=j; while(flag[j] && turn==j); critical

More information

Computer Systems Assignment 4: Scheduling and I/O

Computer Systems Assignment 4: Scheduling and I/O Autumn Term 018 Distributed Computing Computer Systems Assignment : Scheduling and I/O Assigned on: October 19, 018 1 Scheduling The following table describes tasks to be scheduled. The table contains

More information

Computer Science 4500 Operating Systems

Computer Science 4500 Operating Systems Computer Science 4500 Operating Systems Module 6 Process Scheduling Methods Updated: September 25, 2014 2008 Stanley A. Wileman, Jr. Operating Systems Slide 1 1 In This Module Batch and interactive workloads

More information

Lecture 5 / Chapter 6 (CPU Scheduling) Basic Concepts. Scheduling Criteria Scheduling Algorithms

Lecture 5 / Chapter 6 (CPU Scheduling) Basic Concepts. Scheduling Criteria Scheduling Algorithms Operating System Lecture 5 / Chapter 6 (CPU Scheduling) Basic Concepts Scheduling Criteria Scheduling Algorithms OS Process Review Multicore Programming Multithreading Models Thread Libraries Implicit

More information