Chapter-2 Process and Thread


2.1 Introduction

A process is just an instance of an executing program, including the current values of the program counter, registers, and variables. Conceptually, each process has its own virtual CPU. We are assuming a multiprogramming OS that can switch from one process to another. Sometimes this is called pseudoparallelism, since one has the illusion of a parallel processor. The other possibility is real parallelism, in which two or more processes are actually running at once because the computer system is a parallel processor, i.e., has more than one processor.

Fig. 2.1 Multiprogramming Process

In Fig. 2-1(a) we see a computer multiprogramming four programs in memory. In Fig. 2-1(b) we see four processes, each with its own flow of control (i.e., its own logical program counter), and each one running independently of the other ones. Of course, there is only one physical program counter, so when each process runs, its logical program counter is loaded into the real program counter. When it is finished (for the time being), the physical program counter is saved back into the process's stored logical program counter in memory. In Fig. 2-1(c) we see that, viewed over a long enough time interval, all the processes have made progress, but at any given instant only one process is actually running.

2.2 PROCESSES

A process is basically a program in execution. The execution of a process must progress in a sequential fashion. A process is defined as an entity which represents the basic unit of work to be implemented in the system. To put it in simple terms, we write our computer programs in a text file, and when we execute this program, it becomes a process which performs all the tasks mentioned in the program.

When a program is loaded into memory and becomes a process, it can be divided into four sections: stack, heap, text and data. Fig. 2.2 shows a simplified layout of a process inside main memory.

Fig. 2.2 Process in Memory

Stack: The process stack contains temporary data such as method/function parameters, return addresses and local variables.

Heap: This is memory that is dynamically allocated to a process during its run time.

Text: This section contains the compiled program code. The current activity is represented by the value of the program counter and the contents of the processor's registers.

Data: This section contains the global and static variables.

Program: A program is a piece of code, which may be a single line or millions of lines. A computer program is usually written by a computer programmer in a programming language. For example, here is a simple program written in the C programming language:

    #include <stdio.h>

    int main(void)
    {
        printf("Welcome to Operating System By Er.Manoj Kavedia!\n");
        getchar();   /* wait for a key press before exiting */
        return 0;
    }

A computer program is a collection of instructions that performs a specific task when executed by a computer. When we compare a program with a process, we can conclude that a process is a dynamic instance of a computer program. We emphasize that a program by itself is not a process; a program is a passive entity, such as a file containing a list of instructions stored on disk (often called an executable file), whereas a process is an active entity, with a program counter specifying the next instruction to execute and a set
of associated resources. A program becomes a process when an executable file is loaded into memory. A part of a computer program that performs a well-defined task is known as an algorithm. A collection of computer programs, libraries and related data is referred to as software.

Process States:

As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. Each process may be in one of the following five states, as shown in fig. 2.3:

(A) New
(B) Running (actually using the CPU at that instant)
(C) Waiting/Blocked (unable to run until some external event happens)
(D) Ready (runnable; temporarily stopped to let another process run)
(E) Terminated/Released

Fig. 2.3 Process States

New/Start: This is the initial state, when a process is first started/created. This state does not participate very frequently in the state transitions during the execution of a process; it participates only at the beginning. When you create a process, before getting into the queue of ready processes, it might wait as a new process if the Operating System feels that there are already too many ready processes to schedule.

Ready - The process is waiting to be assigned to a processor: A process which is not waiting for any external event such as an I/O operation is said to be in the ready state. Actually, it could have been running, but for the fact that there is only one processor, which is busy executing instructions from some other process while this process is waiting for its chance to run. The Operating System maintains a list of all such ready processes, and when the CPU becomes free, it chooses one of them for execution as per its scheduling policy and dispatches it for execution. When you sit at a terminal and give a command to the Operating System to execute a certain program, the Operating System locates the program on the disk, loads it into memory, creates a new process for this program and enters this process in the list of ready processes. It cannot directly make it run because there might be another process
running at that time. It is eventually scheduled, and at that time its state is changed to running.

Running - Instructions being executed: Once the process has been assigned to a processor by the OS scheduler, the process state is set to running and the processor executes its instructions. This is the only process which is executed by the CPU at any given moment. In multiprocessor systems with multiple CPUs, however, there will be many running processes and the Operating System will have to keep track of all of them.

Blocked/Waiting - The process is waiting for some event to occur: When a process is waiting for an external event such as an I/O operation, the process is said to be in a blocked state. A process moves into the waiting state if it needs to wait for a resource, such as waiting for user input, or waiting for a file to become available. The major difference between a blocked and a ready process is that a blocked process cannot be directly scheduled even if the CPU is free, whereas a ready process can be scheduled if the CPU is free.

Terminated or Exit or Release: Once the process finishes its execution, or is terminated by the operating system, it is moved to the terminated state, where it waits to be removed from main memory. After the process terminates, the Operating System can put it in the halted state before actually removing all details about it. In UNIX, this state is called the Zombie state.

Process Control Block:

The Operating System maintains the information about each process in a record or a data structure called the Process Control Block (PCB), as shown in Fig. 2.4. Each user process has a PCB. It is created when the user creates a process and it is removed from the system when the process is killed. All these PCBs are kept in the memory reserved for the Operating System.

Fig. 2.4 Structure of the Process Control Block

Process ID: The process ID (Pid) is a number allocated by the Operating System to the process on creation. This is the number which is used subsequently for carrying out any operation on the process. The Operating System normally sets a limit on the maximum number of processes that it can handle and schedule. The Operating System starts allocating Pids from number 0. The next process is given Pid 1, and so on. This continues up to n-1. At this juncture, if a new process is created, the Operating System wraps around and starts again with 0. This is done on the assumption that by this time, the process with Pid = 0 would have terminated. UNIX follows this scheme.

Process state: The state may be new, ready, running, waiting, halted, and so on.

Process Priority: Processes which are urgently required to be completed are set at a higher priority, while others are set at a lower priority. This priority can be set externally by the user/system manager, or it can be decided by the Operating System internally, depending on various parameters. You could also have a combination of these schemes.

Register Save Area: This area is used to save all the CPU registers at a context switch.

CPU registers: The registers vary in number and type, depending on the computer architecture. They include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information. Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward.
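The fields described so far can be pictured as a C structure. This is only an illustrative sketch with made-up names; a real kernel structure (such as Linux's task_struct) holds many more fields, including the memory, file and accounting information described next:

    #include <stdint.h>

    /* Illustrative process states, matching the five-state model above. */
    enum proc_state { NEW, READY, RUNNING, BLOCKED, TERMINATED };

    /* A minimal, hypothetical PCB layout. */
    struct pcb {
        int             pid;        /* process ID assigned at creation  */
        enum proc_state state;      /* current state in the state model */
        int             priority;   /* scheduling priority              */
        uint64_t        pc;         /* saved program counter            */
        uint64_t        regs[16];   /* register save area (context)     */
        struct pcb     *next;       /* pointer to the next PCB in a
                                       queue, e.g. the ready list       */
    };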

Pointers to the Process Memory / Memory Management: This gives direct or indirect addresses of pointers to the locations where the process image resides in memory. For instance, in paging systems, it could point to the page map tables, which in turn point to the physical memory (indirect). In the same way, in contiguous memory systems, it could point to the starting physical memory address (direct). This may include such information as the values of the base and limit registers, the page tables, or the segment tables, depending on the memory system used by the operating system.

Pointers to Other Resources: This gives pointers to other data structures maintained for that process.

List of Open Files: This is used by the Operating System to close, on termination, all open files not closed by a process explicitly.

Accounting Information: This gives an account of the usage of resources such as CPU time, connect time, disk I/O used, etc. by the process. This information is used especially in a data centre or cost centre environment where different users are to be charged for their system usage. This obviously means an extra overhead for the Operating System, as it has to collect all this information and update the PCBs with it for the different processes.

CPU-scheduling information: This information includes the process priority, pointers to scheduling queues, and any other scheduling parameters.

Other Information: As an example, with regard to the directory, this contains the pathname or the BFD number of the current directory. As we know, at the time of logging in, the home directory mentioned in the system file (e.g. the user profile in AOS/VS or /etc/passwd in UNIX) also becomes the current directory. Therefore, at the time of logging in, this home directory is moved into this field as the current directory in the PCB. Subsequently, when the user changes his directory, this field is also appropriately updated. This is done so that all subsequent operations can be performed easily. For instance, at any time, if a user gives an instruction to list all the files from the current directory, this field in the PCB is consulted, its corresponding directory is accessed and the files within it are listed. Apart from the current directory, similar useful information is maintained by the Operating System in the PCB.

Pointers to Other PCBs: This essentially gives the address of the next PCB (e.g. PCB number) within a specific category. This category could mean the process state. For instance, the Operating System maintains a list of ready processes. In this case, this pointer field could mean the address of the next PCB with state = ready. Similarly, the Operating System maintains a hierarchy of all processes, so that a parent process can traverse to the PCBs of all the child processes that it has created.

Process State Model:

Process management is an integral part of any modern-day operating system (OS). The OS must allocate resources to processes, enable processes to share and exchange information, protect the resources of each process from other processes and enable synchronisation among processes. To meet these requirements, the OS must maintain a data structure for each process, which describes the state and resource ownership of that process, and which enables the OS to exert control over each process. The different process management models are:

(1) Two-state process management model
(2) Three-state process management model
(3) Five-state process management model

2.3 THREADS

A thread is a flow of execution through the process code, with its own program counter that keeps track of which instruction to execute next, system registers which hold its current working variables, and a stack which contains the execution history. A thread shares with its peer threads some information, such as the code segment, data segment and open files. When one thread alters a code segment memory item, all other threads see that. A thread is also called a lightweight process.

Threads provide a way to improve application performance through parallelism. Threads represent a software approach to improving the performance of an operating system by reducing the overhead; in other respects a thread is equivalent to a classical process. Each thread belongs to exactly one process and no thread can exist outside a process. Each thread represents a separate flow of control. Threads have been successfully used in implementing network servers and web servers. They also provide a suitable foundation for parallel execution of applications on shared-memory multiprocessors. The following figure shows the working of a single-threaded and a multithreaded process.

Fig. 2.9 Process with Single Thread and Three Threads

Why Threads? Following are some reasons why we use threads in designing operating systems:

(1) A process with multiple threads makes a great server, for example a printer server.
(2) Because threads can share common data, they do not need to use interprocess communication.
(3) By their very nature, threads can take advantage of multiprocessors.

Threads are cheap in the sense that:

(1) They only need a stack and storage for registers; therefore, threads are cheap to create.
(2) Threads use very few resources of the operating system in which they are working. That is, threads do not need a new address space, global data, program code or operating system resources.
(3) Context switching is fast when working with threads, because we only have to save and/or restore the PC, SP and registers.

But this cheapness does not come free: the biggest drawback is that there is no protection between threads.

Types of Thread: Threads are implemented in the following two ways:

User Level Threads: User-managed threads.
Kernel Level Threads: Operating System-managed threads acting on the kernel, the operating system core.

User Level Threads: In this case, the thread management kernel is not aware of the existence of threads, as shown in the figure below. The thread library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution and for saving and restoring thread contexts. The application starts with a single thread.

Fig. User Level and Kernel Level Thread

Advantages: The most obvious advantage of this technique is that a user-level threads package can be implemented on an Operating System that does not support threads. Some other advantages are:

User-level threads do not require modification to the operating system.
Simple Representation: Each thread is represented simply by a PC, registers, a stack and a small control block, all stored in the user process address space.
Simple Management: Creating a thread, switching between threads and synchronization between threads can all be done without intervention of the kernel.
Fast and Efficient: Thread switching is not much more expensive than a procedure call.

Disadvantages: There is a lack of coordination between threads and the operating system kernel. Therefore, the process as a whole gets one time slice, irrespective of whether the process has one thread or 1000 threads within it. It is up to each thread to relinquish control to the other threads. User-level threads require non-blocking system calls, i.e., a multithreaded kernel. Otherwise, the entire process will block in the kernel, even if there are runnable threads left in the process. For example, if one thread causes a page fault, the whole process blocks.

Kernel Level Threads: In this case, thread management is done by the kernel. There is no thread management code in the application area. Kernel threads are supported directly by the operating system. Any application can be programmed to be multithreaded. All of the threads within an application are supported within a single process. The kernel maintains context information for the process as a whole and for the individual threads within the process. Scheduling by the kernel is done on a thread basis. The kernel performs thread creation, scheduling and management in kernel space. Kernel threads are generally slower to create and manage than user threads.

Advantages: The kernel can simultaneously schedule multiple threads from the same process on multiple processors. If one thread in a process is blocked, the kernel can schedule another thread of the same process. Kernel routines themselves can be multithreaded.

Disadvantages: Kernel threads are generally slower to create and manage than user threads. Transfer of control from one thread to another within the same process requires a mode switch to the kernel.

Advantages of Threads over Multiple Processes:

Context Switching: Threads are very inexpensive to create and destroy, and they are inexpensive to represent. For example, they require space to store the PC, the SP, and the general-purpose registers, but they do not require space to share
memory information, information about open files or I/O devices in use, etc. With so little context, it is much faster to switch between threads. In other words, a context switch using threads is relatively cheap.

Sharing: Threads allow the sharing of many resources that cannot be shared between processes, for example, sharing the code section, the data section, and Operating System resources like open files.

Disadvantages of Threads over Multiple Processes:

Blocking: The major disadvantage is that if the kernel is single-threaded, a system call by one thread will block the whole process, and the CPU may be idle during the blocking period.

Security: Since there is extensive sharing among threads, there is a potential problem of security. It is quite possible that one thread overwrites the stack of another thread (or damages shared data), although it is very unlikely, since threads are meant to cooperate on a single task.

Multithreading Models: Some operating systems provide a combined user-level thread and kernel-level thread facility. Solaris is a good example of this combined approach. In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process. Multithreading models are of three types:

Many-to-many relationship.
Many-to-one relationship.
One-to-one relationship.

Many to Many Model: The many-to-many model, as shown in the figure, multiplexes any number of user threads onto an equal or smaller number of kernel threads. The diagram shows the many-to-many threading model, where six user-level threads are multiplexed onto six kernel-level threads. In this model, developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor machine. This model provides the best accuracy on concurrency, and when a thread performs a blocking system call, the kernel can schedule another thread for execution. IRIX, HP-UX, and Tru64 UNIX use this two-tier model, as did Solaris prior to Solaris 9.

Fig. Many to Many Model

Many to One Model: The many-to-one model, shown in the figure, maps many user-level threads to one kernel-level thread. Thread management is done in user space by the thread library. When a thread makes a blocking system call, the entire process is blocked. Only one thread can access the kernel at a time, so multiple threads are unable to run in parallel on multiprocessors. Green threads for Solaris and GNU Portable Threads implemented the many-to-one model in the past, but few systems continue to do so today.

Fig. Many-to-One Model

One to One Model: There is a one-to-one relationship of user-level threads to kernel-level threads, as shown in the figure. This model provides more concurrency than the many-to-one model. It also allows another thread to run when a thread makes a blocking system call. It supports multiple threads executing in parallel on multiprocessors.

Fig. One to One Model

The disadvantage of this model is that creating a user thread requires creating the corresponding kernel thread. Linux, Windows (95 to XP), OS/2, Windows NT and Windows 2000 use the one-to-one model.
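Regardless of the model, applications create threads through a thread library, discussed next. As a minimal sketch, the POSIX Pthreads program below creates three threads; on a one-to-one system such as Linux, each pthread_create() call produces a kernel thread (compile with -pthread):

    #include <pthread.h>
    #include <stdio.h>

    /* Each thread runs this function with its own stack and registers,
       while sharing the process's code, globals and open files. */
    static void *worker(void *arg) {
        int id = *(int *)arg;
        printf("thread %d running\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t tid[3];
        int ids[3] = {0, 1, 2};

        for (int i = 0; i < 3; i++)
            pthread_create(&tid[i], NULL, worker, &ids[i]);
        for (int i = 0; i < 3; i++)
            pthread_join(tid[i], NULL);  /* wait for each thread to finish */
        return 0;
    }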

Thread Libraries: Thread libraries provide programmers with an API for creating and managing threads. Thread libraries may be implemented either in user space or in kernel space. The former involves API functions implemented solely within user space, with no kernel support. The latter involves system calls, and requires a kernel with thread library support. There are three main thread libraries in use today:

(1) POSIX Pthreads - may be provided as either a user or kernel library, as an extension to the POSIX standard.
(2) Win32 threads - provided as a kernel-level library on Windows systems.
(3) Java threads - since Java generally runs on a Java Virtual Machine, the implementation of threads is based upon whatever OS and hardware the JVM is running on, i.e. either Pthreads or Win32 threads depending on the system.

Difference between Thread and Process:

(1) A process is heavyweight and resource intensive; a thread is lightweight, taking fewer resources than a process.
(2) Process switching needs interaction with the operating system; thread switching does not need to interact with the operating system.
(3) In multiple processing environments, each process executes the same code but has its own memory and file resources; all threads can share the same set of open files and child processes.
(4) If one process is blocked, then no other process can execute until the first process is unblocked; while one thread is blocked and waiting, a second thread in the same task can run.
(5) Multiple processes without using threads use more resources; multithreaded processes use fewer resources.
(6) In multiple processes, each process operates independently of the others; one thread can read, write or change another thread's data.

Difference Between User Level and Kernel Level Thread:

(1) User-level threads are faster to create and manage; kernel-level threads are slower to create and manage.
(2) User-level threads are implemented by a thread library at the user level; kernel threads are created with the support of the operating system.
(3) A user-level thread is generic and can run on any operating system; a kernel-level thread is specific to the operating system.
(4) Multi-threaded applications using user-level threads cannot take advantage of multiprocessing; kernel routines themselves can be multithreaded.

Benefits of Thread:

(1) Responsiveness: Multithreading an interactive application may allow a program to continue running even if part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user. For instance, a multithreaded web browser could still allow user interaction in one thread while an image is being loaded in another thread.

(2) Resource sharing: By default, threads share the memory and the resources of the process to which they belong. The benefit of code sharing is that it allows an application to have several different threads of activity all within the same address space.

(3) Economy: It is much more time-consuming to create and manage processes than threads, because allocating memory and resources for process creation is costly. Because threads share the resources of the process to which they belong, it is more economical to create and context-switch threads. In Solaris 2, creating a process is about 30 times slower than creating a thread, and context switching is about five times slower.

(4) Utilization of multiprocessor architectures: The benefits of multithreading can be greatly increased in a multiprocessor architecture, where each thread may be running in parallel on a different processor. A single-threaded process can only run on one CPU, no matter how many are available. Multithreading on a multi-CPU machine increases concurrency. In a single-processor architecture, the CPU generally moves between threads so quickly as to create an illusion of parallelism, but in reality only one thread is running at a time.

2.4 INTERPROCESS COMMUNICATION

Inter-process communication, or IPC, is the mechanism whereby one process can communicate, that is, exchange data, with another process. While these techniques are very useful for communicating with other programs, they do not provide the fine-grained control that is sometimes needed for larger-scale applications. In such applications it is quite common for several processes to be used, each performing a dedicated task and other processes requesting services from them. Various techniques can be used to implement inter-process communication. There are two fundamental models of inter-process communication that are commonly used:

(1) Shared Memory Model
(2) Message Passing Model

Shared Memory Model: In the shared memory model, shown in the figure, the cooperating processes share a region of memory for sharing of information. Some operating systems use a supervisor call to create a shared memory space. Similarly, some operating systems use the file system to create a RAM disk, which is a virtual disk created in RAM. The shared files are stored in the RAM disk to share information between processes; the shared files in a RAM disk are actually stored in memory. The processes can share information by writing and reading data in the shared memory location or RAM disk.
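As a concrete illustration, POSIX systems expose the shared-memory model through shm_open() and mmap(). The sketch below assumes a POSIX system and omits error handling; the object name /demo_shm is arbitrary:

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* Create (or open) a named shared-memory object and size it. */
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, 4096);

        /* Map the region into this process's address space. */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);

        strcpy(p, "hello via shared memory");   /* writer side */
        printf("%s\n", p);   /* any other process mapping "/demo_shm"
                                sees the same bytes */
        munmap(p, 4096);
        close(fd);
        shm_unlink("/demo_shm");                /* remove the object */
        return 0;
    }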

Fig. Inter Process Communication Model

Message Passing Model: In this model, shown in the figure, data is shared between processes by sending and receiving messages between cooperating processes. The message passing mechanism is easier to implement than shared memory, but it is suitable for exchanging smaller amounts of data. In the message passing mechanism, data is exchanged between processes through the kernel of the operating system using system calls. Message passing is particularly useful in a distributed environment, where the communicating processes may reside on different computers connected by a network. For example, a chat program used on the Internet could be designed so that chat participants communicate with each other by exchanging messages. It must be noted that the message passing technique is slower than the shared memory technique.

Fig. Message Format

A message contains the following information:

Header of the message, which identifies the sending and receiving processes
Block of data
Pointer to the block of data
Some control information about the process
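Such a message layout can be sketched in C. All field names here are purely illustrative, not a real kernel API:

    #include <stddef.h>
    #include <sys/types.h>

    /* Hypothetical message layout mirroring the fields listed above. */
    struct message {
        struct {
            pid_t sender;      /* identifies the sending process     */
            pid_t receiver;    /* identifies the receiving process   */
            int   type;        /* control information, e.g. priority */
        } header;
        size_t length;         /* size of the data block             */
        void  *data;           /* pointer to the block of data       */
    };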

Fig. Send and Receive Message

Typically, inter-process communication is based on ports associated with processes. A port represents a queue of processes. Ports are controlled and managed by the kernel, and the processes communicate with each other through the kernel. In the message passing mechanism, two operations are performed: sending a message and receiving a message. The functions send() and receive() are used to implement these operations, as shown in the figure. Suppose P1 and P2 want to communicate with each other. A communication link must be created between them to send and receive messages. The communication link can be created in different ways. The most important methods are:

Direct model
Indirect model
Buffering

Methods to Implement Interprocess Communication: Inter-process communication (IPC) is a set of interfaces which is usually programmed in order for a programmer to communicate between a series of processes. This allows programs to run concurrently in an operating system. There are quite a number of methods used in inter-process communication:

(1) Pipes: A pipe allows the flow of data in one direction only. Data from the output is usually buffered until the input process receives it; the two processes must have a common origin.
(2) Named Pipes: This is a pipe with a specific name. It can be used by processes that do not share a common origin. An example is a FIFO, where the pipe that data is written to is named first.
(3) Message queuing: This allows messages to be passed between processes using either a single queue or several message queues. This is managed by the system kernel. These messages are coordinated using an application program interface (API).
(4) Semaphores: These are used in solving problems associated with synchronization and avoiding race conditions. They are integer values which are greater than or equal to zero.

(5) Shared Memory: This allows the interchange of data through a defined area of memory. A semaphore value has to be obtained before data can be accessed in shared memory.
(6) Sockets: This method is mostly used to communicate over a network, between a client and a server. It allows for a standard connection which is computer- and operating-system-independent.

Facility Provided by IPC:

(1) IPC provides a mechanism to allow processes to communicate and to synchronize their actions without sharing the same address space.
(2) IPC is particularly useful in a distributed environment where the communicating processes may reside on different computers connected by a network. An example is a chat program used on the World Wide Web.

IPC is best provided by a message-passing system, and message systems can be defined in many ways.

Message Passing: The function of a message system is to allow processes to communicate with one another without the need to resort to shared data. We have already seen message passing used as a method of communication in microkernels. In this scheme, services are provided as ordinary user processes; that is, the services operate outside of the kernel. Communication among the user processes is accomplished through the passing of messages. An IPC facility provides at least the two operations send(message) and receive(message). Messages sent by a process can be of either fixed or variable size.

Fixed-Sized Messages: When fixed-sized messages are sent, the system-level implementation is straightforward, i.e. easy, but it makes the task of programming more difficult.

Variable-Sized Messages: Variable-sized messages need a more complex system-level implementation, but the programming task becomes simpler.

Example: If processes X and Y want to communicate, they must send messages to and receive messages from each other; a link must exist between the two processes. There are a variety of ways to implement this link. Here the physical implementation is not of much concern (such as shared memory, a hardware bus, or a network); the logical implementation is more important. There are several methods for logically implementing a link and the send/receive operations:

(1) Direct or indirect communication
(2) Symmetric or asymmetric communication
(3) Automatic or explicit buffering
(4) Send by copy or send by reference
(5) Fixed-sized or variable-sized messages

We look at each of these types of message systems next.

Naming: Processes that want to communicate can use either direct or indirect communication to refer to each other.

Direct Communication: With direct communication, each process that wants to communicate must explicitly name the recipient or sender of the communication. In this scheme, the send and receive primitives are defined as:

(1) send(X, message) - Send a message to process X.
(2) receive(Y, message) - Receive a message from process Y.

Properties of the communication link: A link is established automatically between every pair of processes that want to communicate. The processes need to know only each other's identity to communicate.

(1) A link is associated with exactly two processes.
(2) Exactly one link exists between each pair of processes.

The addressing of sender and receiver can be symmetric or asymmetric.

Symmetry in addressing: Symmetry in addressing means that both the sender and the receiver processes must name the other to communicate:

(1) send(X, message) - Send a message to process X.
(2) receive(Y, message) - Receive a message from process Y.

Asymmetry in addressing: Another scheme employs asymmetry in addressing. Only the sender names the recipient; the recipient is not required to name the sender. The send and receive primitives are defined as follows:

(1) send(X, message) - Send a message to process X.
(2) receive(id, message) - Receive a message from any process; the variable id is set to the name of the process with which communication has taken place.

Disadvantages of both the symmetric and asymmetric schemes:

(1) The limited modularity of the resulting process definitions.
(2) Changing the name of a process may necessitate examining all other process definitions.
(3) All references to the old name must be found, so that they can be modified to the new name.
(4) This situation is not desirable from the viewpoint of separate compilation.

Indirect Communication: With indirect communication, messages are sent to and received from mailboxes, or ports. A mailbox can be viewed as an object into which messages can be placed by processes and from which messages can be removed. Each mailbox has a unique identification. In indirect communication, a process can communicate with some other process via a number of different mailboxes. Two processes can communicate only if they share a mailbox. The send and receive primitives are defined as follows:

(1) send(P, message) - Send a message to mailbox P.
(2) receive(P, message) - Receive a message from mailbox P.

Properties of the communication link:

(1) A link is established between a pair of processes only if both members of the pair have a shared mailbox.
(2) A link may be associated with more than two processes.
(3) A number of different links may exist between each pair of communicating processes, with each link corresponding to one mailbox.

Example: Let processes X, Y and Z all share mailbox P. Process X sends a message to P, while Y and Z each execute a receive from P. Which process will receive the message sent by X? The answer depends on the scheme that we choose:

(A) Allow a link to be associated with at most two processes.
(B) Allow at most one process at a time to execute a receive operation.
(C) Allow the system to select arbitrarily which process will receive the message (that is, either Y or Z, but not both, will receive the message). The system may identify the receiver to the sender.

Synchronization: Communication between processes takes place by calls to the send and receive primitives. There are different design options for implementing each primitive. Message passing may be either blocking or non-blocking, also known as synchronous and asynchronous.

Blocking send: The sending process is blocked until the message is received by the receiving process or by the mailbox.
Non-blocking send: The sending process sends the message and resumes operation.
Blocking receive: The receiver blocks until a message is available.
Non-blocking receive: The receiver retrieves either a valid message or a null.

Different combinations of send and receive are possible. When both the send and the receive are blocking, we have a rendezvous between the sender and the receiver.

2.5 SCHEDULING

Operating systems may feature up to three distinct types of schedulers:

(1) A long-term scheduler (also known as an admission scheduler or high-level scheduler),
(2) A mid-term or medium-term scheduler, and
(3) A short-term scheduler (also known as a dispatcher).

The names suggest the relative frequency with which these functions are performed.

Long-term Scheduler: The long-term, or admission, scheduler decides which jobs or processes are to be admitted to the ready queue; that is, when an attempt is made to execute a program, its admission to the set of currently executing processes is either authorized or delayed by the long-term scheduler. Thus this scheduler dictates what processes are to run on a system, and the degree of concurrency to be supported at any one time - i.e., whether a high or low number of
processes are to be executed concurrently, and how the split between I/O-intensive and CPU-intensive processes is to be handled. In modern operating systems, this is used to make sure that real-time processes get enough CPU time to finish their tasks. Without proper real-time scheduling, modern GUI interfaces would seem sluggish. Long-term scheduling is also important in large-scale systems such as batch processing systems, computer clusters, supercomputers and render farms. In these cases, special-purpose job scheduler software is typically used to assist these functions, in addition to any underlying admission scheduling support in the operating system.

Mid-term Scheduler: The mid-term scheduler, present in all systems with virtual memory, temporarily removes processes from main memory and places them on secondary memory (such as a disk drive), or vice versa. This is commonly referred to as "swapping out" or "swapping in" (also, incorrectly, as "paging out" or "paging in"). The mid-term scheduler may decide to swap out, for example:

(1) a process which has not been active for some time, or
(2) a process which has a low priority, or
(3) a process which is page faulting frequently, or
(4) a process which is taking up a large amount of memory,

in order to free up main memory for other processes, swapping the process back in later when more memory is available, or when the process has been unblocked and is no longer waiting for a resource. In many systems today (those that support mapping virtual address space to secondary storage other than the swap file), the mid-term scheduler may actually perform the role of the long-term scheduler, by treating binaries as "swapped-out processes" upon their execution. In this way, when a segment of the binary is required, it can be swapped in on demand, or "lazy loaded".

Short-term Scheduler: The short-term scheduler (also known as the dispatcher) decides which of the ready, in-memory processes is to be executed (allocated a CPU) next, following a clock interrupt, an I/O interrupt, an operating system call or another form of signal. Thus the short-term scheduler makes scheduling decisions much more frequently than the long-term or mid-term schedulers - a scheduling decision will at a minimum have to be made after every time slice, and these are very short. This scheduler can be preemptive, implying that it is capable of forcibly removing processes from a CPU when it decides to allocate that CPU to another process, or non-preemptive (also known as "voluntary" or "co-operative"), in which case the scheduler is unable to "force" processes off the CPU.

The scheduler selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them. The ready queue is not necessarily a first-in, first-out (FIFO) queue. As we shall see when we consider the various scheduling algorithms, a ready queue may be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list.

Conceptually, however, all the processes in the ready queue are lined up waiting for a chance to run on the CPU. The records in the queues are generally the process control blocks (PCBs) of the processes.

Types of Scheduling: Given below are the circumstances under which CPU scheduling takes place:

(1) When a process switches from the running state to the waiting state (for example, an I/O request, or an invocation of wait for the termination of one of the child processes).
(2) When a process switches from the running state to the ready state (for example, when an interrupt occurs).
(3) When a process switches from the waiting state to the ready state (for example, completion of I/O).
(4) When a process terminates.

Hence there are two types of scheduling:

Non-preemptive Scheduling
Preemptive Scheduling

Non-preemptive Scheduling: In circumstances 1 and 4, there is no choice in terms of scheduling. When scheduling takes place only under circumstances 1 and 4, we say the scheduling scheme is non-preemptive. Under non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state. This scheduling method was used by Microsoft Windows 3.1 and by the Apple Macintosh operating systems. It is the only method that can be used on certain hardware platforms, because it does not require the special hardware (for example, a timer) needed for preemptive scheduling.

Example of a non-preemptive scheduling algorithm:

(1) First-come, first-served

Preemptive Scheduling: In circumstances 2 and 3, however, there is a choice: a new process (if one exists in the ready queue) may be selected for execution, and in that case the scheduling scheme is preemptive. But preemptive scheduling incurs a cost. Consider the case of two processes sharing data. One may be in the midst of updating the data when it is preempted and the second process is run. The second process may try to read the data, which are currently in an inconsistent state. New mechanisms are thus needed to coordinate access to shared data. Preemption also has an effect on the design of the operating-system kernel. During the processing of a system call, the kernel may be busy with an activity on behalf of a process. Such activities may involve changing important kernel data (for instance, I/O queues). What happens if the process is preempted in the middle of these changes, and the kernel (or the device driver) needs to read or modify the same structure?

Examples of preemptive scheduling algorithms:

(1) Shortest-job-first
(2) Round Robin
(3) Priority-based scheduling
(4) Multi-level queue scheduling

Scheduling Criteria: There are many CPU-scheduling algorithms, with different properties, and one may favor one class of processes over another. In choosing which algorithm to use in a particular situation, we must consider the properties of the various algorithms. There are many criteria for comparing CPU-scheduling algorithms, and the characteristics used for comparison can make a big difference in the determination of the best algorithm. The criteria include the following:

(1) CPU utilization
(2) Throughput
(3) Turnaround time
(4) Waiting time
(5) Response time

CPU utilization: We want to keep the CPU as busy as possible. CPU utilization may range from 0 to 100 percent. In a real system, it should range from 40 percent (for a lightly loaded system) to 90 percent (for a heavily used system).

Throughput: If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes completed per time unit, called throughput. For long processes, this rate may be 1 process per hour; for short transactions, throughput might be 10 processes per second.

Turnaround time: From the point of view of a particular process, the important criterion is how long it takes to execute that process. The interval from the time of submission of a process to the time of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.

Waiting time: The CPU-scheduling algorithm does not affect the amount of time during which a process executes or does I/O; it affects only the amount of time that a process spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in the ready queue.

Response time: In an interactive system, turnaround time may not be the best criterion. Often, a process can produce some output fairly early and can continue computing new results while previous results are being output to the user. Thus, another measure is the time from the submission of a request until the first response is produced. This measure, called response time, is the amount of time it takes to start responding, not the time it takes to output the response. The turnaround time is generally limited by the speed of the output device.

Final Conclusion:

Hence it is necessary to maximize CPU utilization and throughput, and to minimize turnaround time, waiting time, and response time. In most cases, we optimize the average measure. However, in some circumstances we want to optimize the minimum or maximum values rather than the average. For example, to guarantee that all users get good service, we may want to minimize the maximum response time.

Comparison among Schedulers:

(1) The long-term scheduler is a job scheduler; the short-term scheduler is a CPU scheduler; the medium-term scheduler is a process-swapping scheduler.
(2) The long-term scheduler is slower than the short-term scheduler; the short-term scheduler is the fastest of the three; the medium-term scheduler's speed is in between the other two.
(3) The long-term scheduler controls the degree of multiprogramming; the short-term scheduler provides lesser control over the degree of multiprogramming; the medium-term scheduler reduces the degree of multiprogramming.
(4) The long-term scheduler is almost absent or minimal in time-sharing systems; the short-term scheduler is also minimal in time-sharing systems; the medium-term scheduler is a part of time-sharing systems.
(5) The long-term scheduler selects processes from the pool and loads them into memory for execution; the short-term scheduler selects those processes which are ready to execute; the medium-term scheduler can re-introduce a process into memory so that its execution can be continued.

Scheduling Algorithms: A process scheduler schedules different processes to be assigned to the CPU based on particular scheduling algorithms. The different scheduling algorithms are listed below:

First-Come, First-Served (FCFS) Scheduling
Shortest-Job-Next (SJN) Scheduling
Priority Scheduling
Round Robin (RR) Scheduling
Multiple-Level Queues Scheduling

These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed so that once a process enters the running state, it cannot be preempted until it completes its allotted time, whereas preemptive scheduling is based on priority, where a scheduler may preempt a low-priority running process at any time when a high-priority process enters the ready state.

First Come First Served: The simplest CPU-scheduling algorithm is the first-come, first-served (FCFS) scheduling algorithm. Here the process that requests the CPU first is allocated the CPU first. A FIFO (First In First Out) queue is used to implement FCFS scheduling. When a process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue. The running process is then removed from the queue. The code for FCFS scheduling is simple to
write and understand. The average waiting time under the FCFS policy, however, is often quite long. Consider the following set of processes that arrive at time 0, with the length of the CPU-burst time given in milliseconds:

Process   Burst Time
P1        24
P2        3
P3        7
P4        13
P5        21

If the processes arrive in the order P1, P2, P3, P4, P5 and are served in FCFS order, we get the result shown in the following Gantt chart:

| P1 (24) | P2 (3) | P3 (7) | P4 (13) | P5 (21) |
0         24       27       34        47        68

The waiting times are:

0 milliseconds for process P1
24 milliseconds for process P2
27 milliseconds for process P3
34 milliseconds for process P4
47 milliseconds for process P5

Thus, when the processes arrive as P1, P2, P3, P4, P5, the average waiting time is
= (0 + 24 + 27 + 34 + 47)/5 = 132/5 milliseconds = 26.4 milliseconds.

If the processes arrive in the order P3, P4, P1, P5, P2, the average waiting time is
= (0 + 7 + 20 + 44 + 65)/5 = 136/5 milliseconds = 27.2 milliseconds.

From the above two calculations it is clear that one ordering needs 26.4 ms and the other needs 27.2 ms. The average waiting time under an FCFS policy is generally not minimal, and may vary substantially if the process CPU-burst times vary greatly.

Disadvantages of FCFS:

(1) The average waiting time can be very large.
(2) FCFS is not an attractive alternative on its own for a single-processor system. Another difficulty is that FCFS tends to favor processor-bound processes over I/O-bound processes, and may result in inefficient use of both the processor and the I/O devices.

Shortest Job First Scheduling (SJF): Shortest-job-first (SJF) is another scheduling algorithm. In this algorithm, as soon as the CPU is available, it is assigned to the process that has the smallest next CPU burst. If two
processes have the same length of next CPU burst, FCFS scheduling is used to serve the process which came first into the queue. Note that a more appropriate term would be the shortest next CPU burst, because the scheduling is done by examining the length of the next CPU burst of a process, rather than its total length. Consider the following set of five processes, with the length of the CPU-burst time given in milliseconds:

Process   Burst Time
P1        24
P2        3
P3        7
P4        13
P5        21

Now the processes arrive in the order P2, P3, P4, P5, P1 and are served in SJF order; we get the result shown in the following Gantt chart:

| P2 (3) | P3 (7) | P4 (13) | P5 (21) | P1 (24) |
0        3        10        23        44        68

The waiting times are:

0 milliseconds for process P2
3 milliseconds for process P3
10 milliseconds for process P4
23 milliseconds for process P5
44 milliseconds for process P1

Thus, when the processes arrive as P2, P3, P4, P5, P1, the average waiting time is
= (0 + 3 + 10 + 23 + 44)/5 = 80/5 milliseconds = 16 milliseconds.

Conclusion: The SJF scheduling algorithm is optimal because it gives the minimum average waiting time for a given set of processes. By moving a short process before a long one, the waiting time of the short process decreases more than the waiting time of the long process increases. Consequently, the average waiting time decreases.

The SJF algorithm may be either preemptive or non-preemptive. The choice arises when a new process arrives at the ready queue while a previous process is executing. The new process may have a shorter next CPU burst than what is left of the currently executing process. A preemptive SJF algorithm will preempt the currently executing process, whereas a non-preemptive SJF algorithm will allow the currently running process to finish its CPU burst. Preemptive SJF scheduling is sometimes called shortest-remaining-time-first scheduling.

Advantages and Disadvantages:

Overall performance is significantly improved in terms of response time. However, there is a risk of starvation of longer processes.

Priority Scheduling: A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order. An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse of the (predicted) next CPU burst: the larger the CPU burst, the lower the priority, and vice versa.

Priorities are generally some fixed range of numbers, such as 0 to 7, or 0 to 4,095. However, there is no general agreement on whether 0 is the highest or the lowest priority. Some systems use low numbers to represent low priority; others use low numbers for high priority. Here we will use low numbers to represent high priority. As an example, consider the following set of processes, assumed to have arrived at time 0 in the order P1, P2, P3, P4, P5, with the length of the CPU-burst time given in milliseconds:

Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2

Using priority scheduling, we would schedule these processes according to the following Gantt chart:

| P2 (1) | P5 (5) | P1 (10) | P3 (2) | P4 (1) |
0        1        6         16       18       19

The average waiting time is (0 + 1 + 6 + 16 + 18)/5 = 8.2 milliseconds.

Priorities can be defined either:

(1) Internally, or
(2) Externally

Internal Priority: Internally defined priorities use some measurable quantity or quantities to compute the priority of a process. For example, time limits, memory requirements, the number of open files, and the ratio of average I/O burst to average CPU burst have been used in computing priorities.

External Priority: External priorities are set by criteria that are external to the operating system, such as the importance of the process, the type and amount of funds being paid for computer use, the department sponsoring the work, and other, often political, factors.

Priority scheduling can be either:

(1) Preemptive priority scheduling, or
(2) Non-preemptive priority scheduling

Preemptive priority scheduling: When a process arrives at the ready queue, its priority is compared with the priority of the currently running process. A preemptive priority-scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process.

Non-preemptive priority scheduling: A non-preemptive priority-scheduling algorithm will simply put the new process at the head of the ready queue.

Problem with priority scheduling: A major problem with priority-scheduling algorithms is indefinite blocking (or starvation): lower-priority processes may suffer starvation. A process that is ready to run but lacking the CPU can be considered blocked, waiting for the CPU. A priority-scheduling algorithm can leave some low-priority processes waiting indefinitely for the CPU. In a heavily loaded computer system, a steady stream of higher-priority processes can prevent a low-priority process from ever getting the CPU.

Solution to the above problem: A solution to the problem of indefinite blockage of low-priority processes is aging. Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time. For example, if priorities range from 127 (low) to 0 (high), we could decrement the priority of a waiting process by 1 every 15 minutes. Eventually, even a process with an initial priority of 127 would have the highest priority in the system and would be executed. In fact, it would take no more than 32 hours for a priority-127 process to age to a priority-0 process.

Round-Robin Scheduling: The round-robin (RR) scheduling algorithm is designed for time-sharing systems. It is similar to FCFS scheduling, but preemption is added to switch between processes. A small unit of time, called a time quantum (or time slice), is defined. A time quantum is generally from 10 to 100 milliseconds. The ready queue is treated as a circular queue. The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum.

To implement RR scheduling, the ready queue is kept as a FIFO queue of processes. New processes are added to the tail of the ready queue. The CPU scheduler takes the first process from the ready queue, sets a timer to interrupt after 1 time quantum, and dispatches the process. One of two things will then happen:

(1) The process may have a CPU burst of less than 1 time quantum. In this case, the process itself will release the CPU voluntarily. The scheduler will then proceed to the next process in the ready queue.
(2) Otherwise, if the CPU burst of the currently running process is longer than 1 time quantum, the timer will go off and will cause an interrupt to the operating system. A context switch will be executed, and the process will be put at the tail of the ready queue. The CPU scheduler will then select the next process in the ready queue.

The average waiting time under the RR policy, however, is often quite long. Consider the following set of processes that arrive at time 0, with the length of the CPU-burst time given in milliseconds:

Process   Burst Time
P1        24
P2        3
P3        3

Let the time quantum be 4 milliseconds. Scheduling then takes place as follows:

(1) Process P1 gets the first 4 milliseconds. Since it requires another 20 milliseconds, it is preempted after the first time quantum, and
(2) the CPU is given to the next process in the queue, process P2. Since process P2 does not need 4 milliseconds, it quits before its time quantum expires.
(3) The CPU is then given to the next process, process P3. Once each process has received 1 time quantum, the CPU is returned to process P1 for an additional time quantum.

The resulting RR schedule is:

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

The average waiting time is (6 + 4 + 7)/3 = 17/3 = 5.66 milliseconds.

In the RR scheduling algorithm, no process is allocated the CPU for more than 1 time quantum in a row. If a process's CPU burst exceeds 1 time quantum, that process is preempted and is put back in the ready queue. Hence the RR scheduling algorithm is preemptive. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units. Each process must wait no longer than (n - 1) x q time units until its next time quantum. For example, if there are five processes, with a time quantum of 20 milliseconds, then each process will get up to 20 milliseconds every 100 milliseconds.

Advantages and Disadvantages of RR scheduling:

(1) Round robin is particularly effective in a general-purpose time-sharing system or transaction processing system.
(2) If there is a mix of processor-bound and I/O-bound processes, then the processor-bound processes tend to receive an unfair portion of processor time, which results in poor performance for I/O-bound processes, inefficient use of I/O devices, and an increase in the variance of response time.
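The waiting-time arithmetic in the FCFS and SJF examples above can be checked with a short C program. Given bursts listed in service order, each process waits for the sum of the bursts of the processes before it:

    #include <stdio.h>

    /* Average waiting time when processes run in the given order:
       process i waits for the total burst time of processes 0..i-1. */
    static double avg_wait(const int burst[], int n) {
        int wait = 0, total = 0;
        for (int i = 1; i < n; i++) {
            wait  += burst[i - 1];   /* finish time of the previous process */
            total += wait;
        }
        return (double)total / n;
    }

    int main(void) {
        int fcfs[] = {24, 3, 7, 13, 21};   /* service order P1,P2,P3,P4,P5 */
        int sjf[]  = {3, 7, 13, 21, 24};   /* service order P2,P3,P4,P5,P1 */
        printf("FCFS: %.1f ms\n", avg_wait(fcfs, 5));   /* prints 26.4 */
        printf("SJF : %.1f ms\n", avg_wait(sjf, 5));    /* prints 16.0 */
        return 0;
    }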

Chapter-3 Memory Management

3.1 INTRODUCTION
Main memory (RAM) is an important resource that must be very carefully managed. What every programmer would like is memory that is private, infinitely large, infinitely fast, and also nonvolatile, that is, memory that does not lose its contents when the electric power is switched off. Today's technology cannot provide such memory in bulk at low cost, i.e., such memories are expensive. Hence the concept of a memory hierarchy, shown in fig. 3.1, is used, in which computers have a few megabytes of very fast, expensive, volatile cache memory, a few gigabytes of medium-speed, medium-priced, volatile main memory, and a few terabytes of slow, cheap, nonvolatile magnetic or solid-state disk storage, not to mention removable storage such as DVDs and USB sticks.

Fig. 3.1 Memory Hierarchy

It is the job of the operating system to abstract this hierarchy into a useful model and then manage the abstraction.

The part of the operating system that manages (part of) the memory hierarchy is called the memory manager. The job of the memory manager is to manage memory efficiently: keep track of which parts of memory are in use, allocate memory to processes when they need it, and deallocate it when they are done.

3.2 NO MEMORY ABSTRACTION
Early mainframe computers (before 1960), early minicomputers (before 1970), and early personal computers (before 1980) had no memory abstraction. Only physical memory was used: no mapping, nothing. When a program executed an instruction like

MOV R4,1000

the computer just moved the contents of physical memory location 1000 to REGISTER4. Thus, the model of memory presented to the programmer was simply physical memory, a set of addresses from 0 to some maximum, each address corresponding to a cell containing some number of bits, commonly a byte (a group of 8 bits). Hence running multiple programs at the same time was not possible. If the first program wrote a new value to, say, location 2000, this would erase whatever value the second program was storing there. If two programs shared the same memory, nothing would work and both programs would crash almost immediately.

Fig. 3.2 Organization of Memory

Fig. 3.2 shows three different ways to use a single physical memory:
(a) The operating system may be at the bottom of memory in RAM (Random Access Memory), as shown in Fig. 3.2(a). This model was formerly used on mainframes and minicomputers but is rarely used any more.
(b) The operating system may be in ROM (Read-Only Memory) at the top of memory, as shown in Fig. 3.2(b). This model is used on some handheld computers and embedded systems.
(c) The device drivers may be at the top of memory in a ROM and the rest of the system in RAM down below, as shown in Fig. 3.2(c). This model was used by early personal computers (e.g., running MS-DOS), where the portion of the system in the ROM is called the BIOS (Basic Input Output System).

Models (a) and (c) have the disadvantage that a bug in the user program can wipe out the operating system, possibly with disastrous results. When the system is organized in this way, generally only one process at a time can be running. As soon as the user types a command, the operating system copies the requested program from disk to memory and executes it. When the process finishes, the operating system displays a prompt character and waits for a new user command. When the operating system receives the command, it loads a new program into memory, overwriting the first one.

One way to get some parallelism in a system with no memory abstraction is to program with multiple threads. Since all threads in a process are supposed to see the same memory image, the fact that they are forced to is not a problem. While this idea works, it is of limited use, since what people often want is unrelated programs running at the same time, something the threads abstraction does not provide.

Running Multiple Programs without a Memory Abstraction:
It is possible to run multiple programs at the same time even with no memory abstraction. Here the operating system saves the entire contents of memory to a disk file, then brings in and executes the next program. As long as there is only one program at a time in memory, there are no conflicts. Bringing in one program, executing it, and then sending it back to disk is called swapping.

Running Multiple Programs without Swapping:
It is possible to run multiple programs concurrently, even without swapping, with some additional hardware. This approach was implemented on the IBM 360. Memory was divided into 2-KB blocks, and each block was assigned a 4-bit protection key held in special registers inside the CPU. A machine with a 1-MB memory needed only 512 of these 4-bit registers, for a total of 256 bytes of key storage.

Fig. 3.3 Relocation Problem

The PSW (Program Status Word) also contained a 4-bit key. The 360 hardware trapped any attempt by a running process to access memory with a protection code different from the PSW key. Since only the operating system could change the protection keys, user processes were prevented from interfering with one another and with the operating system itself.

This technique has a major drawback, shown in Fig. 3.3. Here we have two programs, each 16 KB in size, as shown in Fig. 3.3(a) and (b). The first is shaded to indicate that it has a different memory key than the second. The first program starts out by jumping to address 24, which contains a MOV instruction.

Fig. 3.4 Two Programs in Memory

The second program starts out by jumping to address 28, which contains a CMP instruction. When the two programs are loaded consecutively in memory starting at address 0, we have the situation of Fig. 3.4. For this example, assume the operating system is in high memory and thus not shown. After the programs are loaded, they can be run. Since they have different memory keys, neither one can damage the other. But the problem is of a different nature. When the first program starts, it executes the JMP 24 instruction, which jumps to the MOV instruction, as expected. This program functions normally.

However, after the first program has run long enough, the operating system may decide to run the second program, which has been loaded above the first one, at address 16,384. The first instruction executed is JMP 28, which jumps to the ADD instruction in the first program, instead of the CMP instruction it is supposed to jump to. The program will most likely crash in well under 1 sec.

This problem arises because the two programs both reference absolute physical memory addresses, whereas in reality each program should get its own set of addresses. As a solution, the IBM 360 modified the second program on the fly (while loading it) using a technique known as static relocation. When a program was loaded at address 16,384, the constant 16,384 was added to every program address during the load process (so JMP 28 became JMP 16,412, etc.). While this mechanism works if done right, it is not a very general solution and it slows down loading. Furthermore, it requires extra information in all executable programs to indicate which words contain (relocatable) addresses and which do not. After all, the 28 in Fig. 3.3(b) has to be relocated, but an instruction like MOV R1,28, which moves the number 28 to REGISTER1, must not be relocated. The loader needs some way to tell what is an address and what is a constant.
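The idea of static relocation can be sketched in a few lines of C. This is a hypothetical loader fragment, not the IBM 360's actual mechanism: it assumes the executable carries a relocation table marking which words hold addresses, and patches only those words while copying the image into memory.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical illustration of static relocation: while copying a program
 * image into (word-addressed) memory at byte address load_base, add
 * load_base to every word that the relocation table marks as an address.
 * Constants (like the 28 in MOV R1,28) are not listed in the table and
 * stay untouched. */
void load_with_static_relocation(const uint32_t *image, size_t words,
                                 const size_t *reloc_table, size_t reloc_count,
                                 uint32_t load_base, uint32_t *memory) {
    size_t first_word = load_base / sizeof(uint32_t);

    for (size_t i = 0; i < words; i++)          /* copy the raw image */
        memory[first_word + i] = image[i];

    for (size_t r = 0; r < reloc_count; r++) {  /* patch address words only */
        size_t idx = reloc_table[r];
        memory[first_word + idx] += load_base;  /* e.g., 28 becomes 16,412 */
    }
}

The relocation table is exactly the "extra information in all executable programs" the text refers to: without it, the loader cannot distinguish addresses from constants.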

3.3 MEMORY ABSTRACTION: ADDRESS SPACES
The major drawbacks of no memory abstraction are:
(1) If user programs can address every byte of memory, they can easily trash the operating system, intentionally or by accident, bringing the system to a grinding halt (unless there is special hardware like the IBM 360's lock-and-key scheme). This problem exists even if only one user program (application) is running.
(2) With this model, it is difficult to have multiple programs running at once (taking turns, if there is only one CPU). On personal computers, it is common to have several programs open at once (a word processor, an email program, a Web browser), one of them having the current focus, but the others being reactivated at the click of a mouse. Since this situation is difficult to achieve when there is no abstraction from physical memory, something had to be done.

These two problems can be solved by allowing multiple applications to be in memory at the same time without interfering with each other, which requires two things: protection and relocation.

First solution: protection and relocation. The solution to the protection problem was used on the IBM 360: label chunks of memory with a protection key and compare the key of the executing process to that of every memory word fetched. However, this approach by itself does not solve the relocation problem, although that can be handled by relocating programs as they are loaded, which is a slow and complicated solution.

Second solution: A better solution is to invent a new abstraction for memory: the address space. Just as the process concept creates a kind of abstract CPU to run programs, the address space creates a kind of abstract memory for programs to live in. An address space is the set of addresses that a process can use to address memory. Each process has its own address space, independent of those belonging to other processes (except in some special circumstances where processes want to share their address spaces).

3.4 ADDRESS SPACE
The concept of an address space is very general, as the following examples show.

Telephone numbers: In the United States and many other countries, a local telephone number is usually a 7-digit number. The address space for telephone numbers thus runs from 0,000,000 to 9,999,999, although some numbers, such as those beginning with 000, are not used. With the growth of smartphones, modems, and fax machines, this space is becoming too small, in which case more digits have to be used.

Address space for I/O ports: The address space for I/O ports on the x86 runs from 0 to 65,535.

Address space for networks: IPv4 addresses are 32-bit numbers, so their address space runs from 0 to 4,294,967,295 (again, with some numbers reserved). Address spaces do not have to be numeric: the set of .com Internet domains is also an address space. This address space consists of all the strings of length 2 to 63 characters that can be made using letters, numbers, and hyphens, followed by .com.

Types of address space: The concept of a logical address space that is bound to a separate physical address space is central to proper memory management.
Logical address: generated by the CPU; also referred to as a virtual address.
Physical address: the address seen by the memory unit.
Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme.

Implementation: Base and Limit Registers:

Fig. 3.5 Base and limit registers can be used to give each process a separate address space

Dynamic relocation is a technique used to map each process's address space onto a different part of physical memory in a simple way. It was used on machines ranging from the CDC 6600 (the world's first supercomputer) to the Intel 8088 (the heart of the original IBM PC). To achieve this, the CPU is equipped with two special registers, usually called the base and limit registers. When these registers are used, programs are loaded into consecutive memory locations wherever there is space in memory, without relocation during loading, as shown in Fig. 3.5.

Fig. 3.6(a) Address Space Calculation

When a process is run, the base register is loaded with the physical address where its program begins in memory, and the limit register is loaded with the length of the program. In Fig. 3.4, the base and limit values that would be loaded into these hardware registers when the first program is run are 0 and 16,384, respectively. The values used when the second program is run are 16,384 and 16,384, respectively (base 16,384 and length 16,384). If a third 16-KB program were loaded directly above the second one and run, the base and limit registers would be 32,768 and 16,384.

Fig. 3.6(b) Address Translation

Every time a process references memory, either to fetch an instruction or to read or write a data word, the CPU hardware automatically adds the base value to the address generated by the process before sending the address out on the memory bus. Simultaneously, it checks whether the address offered is equal to or greater than the value in the limit register, in which case a fault is generated and the access is aborted. Thus, in the case of the first instruction of the second program in Fig. 3.4, the process executes a JMP 28 instruction, but the hardware treats it as though it were JMP 16,412 (16,384 + 28), so it lands on the CMP instruction as expected. The settings of the base and limit registers during the execution of the second program of Fig. 3.4 are shown in Fig. 3.6(b).

Using base and limit registers is an easy way to give each process its own private address space, because every memory address generated automatically has the base-register contents added to it before being sent to memory.
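A minimal C sketch of this base-and-limit check, using the second program's values from above (the register widths and the fault handling are simplified assumptions):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* The limit register holds the program's length, so a virtual address
 * must be smaller than the limit to be valid. */
typedef struct {
    uint32_t base;   /* where the program starts in physical memory */
    uint32_t limit;  /* length of the program */
} mmu_t;

uint32_t translate(const mmu_t *mmu, uint32_t vaddr) {
    if (vaddr >= mmu->limit) {           /* out of range: fault */
        fprintf(stderr, "fault: address %u beyond limit %u\n",
                vaddr, mmu->limit);
        exit(1);
    }
    return mmu->base + vaddr;            /* relocate by adding the base */
}

int main(void) {
    mmu_t second_program = { 16384, 16384 };
    /* JMP 28 in the second program goes out on the bus as 16,412 */
    printf("virtual 28 -> physical %u\n", translate(&second_program, 28));
    return 0;
}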

In many implementations, the base and limit registers are protected in such a way that only the operating system can modify them. The Intel 8088/8086 had multiple base registers, allowing program text and data, for example, to be independently relocated, but offered no protection from out-of-range memory references.

A disadvantage of relocation using base and limit registers is the need to perform an addition and a comparison on every memory reference. Comparisons can be done fast, but additions are slow due to carry-propagation time unless special addition circuits are used.

Swapping:
Swapping is a mechanism in which a process can be swapped (moved) temporarily out of main memory to secondary storage (disk), making that memory available to other processes. At some later time, the system swaps the process back from secondary storage to main memory. Though performance is usually affected by the swapping process, swapping helps in running multiple and big processes in parallel, and for that reason swapping is also known as a technique for memory compaction.

Fig. 3.7 Swapping

The total time taken by the swapping process includes the time it takes to move the entire process to secondary disk and then to copy the process back to memory, as well as the time the process takes to regain main memory. For example, assume a user process of size 2048 KB and a standard hard disk with a data transfer rate of around 1 MB (1024 KB) per second. The actual transfer of the 2048 KB process to or from memory then takes

2048 KB / 1024 KB per second = 2 seconds (2000 milliseconds).

36 = 200 milliseconds Now considering in and out time, it will take complete 400 milliseconds plus other overhead where the process competes to regain main memory. When swapping creates multiple holes in memory, it is possible to combine them all into one big one by moving all the processes downward as far as possible. This technique is known as memory compaction. It is usually not done because it requires a lot of CPU time. For example, on a 16-GB machine that can copy 8 bytes in 8 nsec, it would take about 16 sec to compact all of memory Managing Free Memory: Since there is only a limited amount of disk space, it is necessary to reuse the space from deleted files for new files. To keep track of free disk space, the system maintains a free-space list. The free-space list records all disk blocks that are free (i.e., are not allocated to some file). To create a file, the free-space list has to be searched for the required amount of space, and allocate that space to a new file. This space is then removed from the free-space list. When a file is deleted, its disk space is added to the free-space list. When memory allocation takes place dynamically, it is responsibility of an operating system to manage it properly. In free space management techniques mainly two methods are used as follows: (1) Bitmap (2) Linked List (3) Grouping (4) Counting Bit Map technique: Frequently, the free-space list is implemented as a bit map or bit vector shown in fig.3.8. Each block is represented by a 1 bit. If the block is free, the bit is 0; if the block is allocated, the bit is 1. For example, consider a disk where blocks 2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, and 27 are free, and the rest of the blocks are allocated. The free-space bit map would be: Fig. 3.8 BitMap Advantages with Bit Map technique: The main advantage of this approach is that it is relatively simple and efficient to find n consecutive free blocks on the disk. Unfortunately, bit vectors are inefficient unless the entire vector is kept in memory for most accesses. Keeping it main memory is possible for smaller disks such as on microcomputers, but not for larger ones. Problem with Bitmap Technique: Prof.Manoj S. Kavedia ( 3 6

Linked List technique: A linked list is another way to keep track of free and used memory. The list is a collection of entries, each describing either a segment of allocated memory or a hole (free memory) between two processes. As shown in fig. 3.9, P indicates a process and H indicates a hole. Each entry in the list specifies whether it is a hole (H) or a process (P), the address at which it starts, its length, and a pointer to the next entry. The list is kept sorted by address. One of the major advantages of this sorting is that when a process terminates or is swapped out, updating the list is straightforward. A minimal C sketch of such an entry appears below.

Fig. 3.9 Linked Free Space Management

A terminating process normally has two neighbors, except when it is at the top or the bottom of memory: at the top of memory there is no neighbor on one side, and at the bottom of memory there is no neighbor on the other side. The neighbors may be either holes or processes.
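The struct layout here is illustrative, not from any particular system; it shows one list entry together with the simplest merge case (coalescing with the right-hand neighbor when a process terminates).

#include <stdbool.h>
#include <stddef.h>

/* One entry of the free/used memory list described above. Nodes are
 * kept sorted by start address. */
typedef struct region {
    bool is_hole;          /* true = hole (H), false = process (P) */
    size_t start;          /* starting address of the region */
    size_t length;         /* length of the region */
    struct region *next;   /* next entry in address order */
} region_t;

/* When a process terminates, its entry becomes a hole; if the next
 * entry is also a hole, the two are merged into one longer hole. */
void free_region(region_t *r) {
    r->is_hole = true;
    if (r->next && r->next->is_hole) {
        r->length += r->next->length;   /* coalesce with the right neighbor */
        r->next = r->next->next;        /* a full implementation would also
                                           free() the absorbed node and
                                           check the left neighbor */
    }
}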

There can be four combinations, as shown in the figure: Fig. (a) shows that modifying the list requires only replacing a P by an H; in Fig. (b) and Fig. (c) two entries are combined into one, and the list becomes one entry shorter. A doubly linked list is useful for finding the preceding entry and for seeing whether a merge is possible.

When processes and holes are kept sorted in the list, different algorithms can be used to allocate memory. These algorithms are called:
(i) First Fit
(ii) Best Fit
(iii) Next Fit
(iv) Worst Fit

Grouping: A modification of the free-list approach is to store the addresses of n free blocks in the first free block. The first n-1 of these are actually free. The last one is the disk address of another block containing the addresses of another n free blocks. The importance of this implementation is that the addresses of a large number of free blocks can be found quickly.

Counting: Another approach is to take advantage of the fact that, generally, several contiguous blocks may be allocated or freed simultaneously, particularly when contiguous allocation is used. Thus, rather than keeping a list of free disk addresses, we keep the address of the first free block and the number n of free contiguous blocks that follow it. Each entry in the free-space list then consists of an address and a count. Although each entry requires more space than would a simple address, the overall list will be shorter, as long as the count is generally greater than 1.

3.5 VIRTUAL MEMORY
Real memory refers to the actual memory chips that are installed in the computer. All programs actually run in this physical memory. However, it is often useful to allow the computer to think that it has memory that isn't actually there, in order to permit the use of programs that are larger than will physically fit in memory, or to allow multitasking (multiple programs running at once). This concept is called virtual memory, shown in fig. 3.10.

Virtual memory is an imaginary memory area supported by some operating systems (for example, Windows but not DOS) in conjunction with the hardware. You can think of virtual memory as an alternate set of memory addresses. Programs use these virtual addresses rather than real addresses to store instructions and data. When the program is actually executed, the virtual addresses are converted into real memory addresses.

Fig. 3.10 Virtual Memory

Purpose of Virtual Memory:
Virtual memory provides the conceptual separation of user logical memory from physical memory. Thus we can have a large virtual memory on top of a small physical memory. The purpose of virtual memory is to enlarge the address space, the set of addresses a program can utilize. For example, as shown in fig. 3.11, virtual memory might contain twice as many addresses as main memory. A program using all of virtual memory, therefore, would not be able to fit in main memory all at once. Nevertheless, the computer can execute such a program by copying into main memory those portions of the program needed at any given point during execution.

To facilitate copying virtual memory into real memory, the operating system divides virtual memory into pages, each of which contains a fixed number of addresses. Each page is stored on disk until it is needed. When the page is needed, the operating system copies it from disk to main memory, translating the virtual addresses into real addresses.

Fig. 3.11 Concept of Virtual Memory

Mapping: The process of translating virtual addresses into real addresses is called mapping, shown in fig. 3.12. The copying of virtual pages from disk to main memory is known as paging or swapping.

Fig. 3.12 Address Mapping from Virtual to Physical

All modern general-purpose computer operating systems use virtual memory techniques for ordinary applications, such as word processors, spreadsheets, multimedia players, accounting, etc. Older operating systems, such as DOS and Microsoft Windows of the 1980s, or those for the mainframes of the 1960s, generally had no virtual memory functionality, notable exceptions being the Atlas, the Burroughs B5000, and Apple Computer's Lisa.

Logical Versus Physical Address:
A program must be brought into memory and placed within a process for it to be run. The concept of a logical address space that is bound to a separate physical address space is central to proper memory management. A logical address is generated by the CPU; it is also referred to as a virtual address. A physical address is the address seen by the memory unit. The user program deals with logical addresses; it never sees the real physical addresses. Address binding is the process of mapping a logical (virtual) address onto a physical (memory) address.

Paging and Conversion of Logical Address to Physical:
Dynamic memory partitioning suffers from external fragmentation. To overcome this problem we can use either compaction or paging. Paging allows a program to be allocated physical memory wherever it is available. In paging, physical memory is broken into fixed-size blocks called frames, and logical memory is broken into fixed-size blocks of the same size called pages. Whenever a process is to be executed, its pages are loaded from the backing store (secondary memory) into any available memory frames. The size of a page depends on the hardware; it is typically a power of 2, varying between 512 bytes and 16 MB per page depending on the system architecture. Choosing a power of 2 makes the translation of a logical address into a physical address easy. The logical address has the following form.

Page Number | Offset

Fig. 3.13 Address Conversion from Logical to Physical

Every logical address is bound to a physical address, as shown in fig. 3.13 and fig. 3.14, so paging can be seen as a form of dynamic relocation. Every address generated by the CPU is divided into two parts: (1) a page number (p) and (2) a page offset (d).

Page Number: The page number is used as an index into the page table, as shown in fig. 3.13 and fig. 3.14. The page table contains the base address of each page in physical memory.

Base Address: This base address is combined with the page offset to define the physical memory address that is sent to the memory unit. A minimal sketch of this translation appears below.

Example of Paging:

Fig. 3.14 Example of Page Translation
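The sketch assumes 4-KB pages, so the offset occupies the low 12 bits of the address; the page-table contents would be filled in by the operating system.

#include <stdint.h>

#define PAGE_SIZE   4096u   /* 4-KB pages (an assumed size) */
#define OFFSET_BITS 12u
#define NUM_PAGES   16u

static uint32_t page_table[NUM_PAGES];  /* frame base address per page */

/* Split the logical address into page number p and offset d, look up
 * the frame base in the page table, and combine it with the offset. */
uint32_t translate(uint32_t logical) {
    uint32_t page   = logical >> OFFSET_BITS;     /* page number p */
    uint32_t offset = logical & (PAGE_SIZE - 1);  /* page offset d */
    uint32_t frame_base = page_table[page];       /* base address lookup */
    return frame_base | offset;                   /* physical address */
}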

Fig. 3.15 Example of Paging

Demand Paging:
When a page is referenced, either for code execution or for data access, and that page isn't in memory, the operating system fetches the page from disk and re-executes the faulting instruction.

Fig. 3.16 Demand Paging

A demand paging system is somewhat similar to a paging system with swapping. In this method the process resides on the backing store (i.e., secondary memory, the disk), as shown in Fig. 3.16.

When we want to execute the process, we swap it into memory. Rather than swapping in the whole process, a lazy swapper is used, which swaps a page into memory only when that page is needed; a page is swapped in strictly when it is needed. We can view the process as a sequence of pages rather than as one large address space. Instead of a swapper, we use the term pager, which is concerned with the individual pages of a process. When a process is to be swapped in, the pager guesses which pages will be used before the process is swapped out again. Instead of swapping in a whole process, the pager brings only those necessary pages into memory. It thus avoids reading into memory pages that will not be used anyway, decreasing the swap time and the amount of physical memory needed.

The hardware to support demand paging is the same as that required for paging and swapping:
(1) Page table
(2) Secondary memory
(3) Software to handle page faults

In demand paging, if we guess right and demand only those pages that are actually needed, the process will run exactly as per our expectation. But if the process tries to access a page that was not brought into memory, the access to this page is called a page fault.

Advantages:
(1) Demand paging, as opposed to loading all pages immediately, only loads pages that are demanded by the executing process.
(2) When a process is swapped out of memory (on a context switch), only those pages loaded in main memory need to be swapped out from main memory to secondary storage.
(3) As there is more space in main memory, more processes can be loaded, reducing the context-switching time that would otherwise consume large amounts of resources.
(4) Less loading latency occurs at program startup, as less information is accessed from secondary storage and less information is brought into main memory.
(5) It does not need extra hardware support beyond what paging needs, since the protection fault mechanism can be used to generate page faults.

Disadvantages:
(1) Individual programs face extra latency when they access a page for the first time. So demand paging may have lower performance than anticipatory paging algorithms such as prepaging (a method of remembering which pages a process used when it last executed and preloading a few of them to improve performance).
(2) Programs running on low-cost, low-power embedded systems may not have a memory management unit that supports page replacement.
(3) Memory management with page replacement algorithms becomes slightly more complex.
(4) There are possible security risks, including vulnerability to timing attacks.

Page Table:
A page table is the data structure used by a virtual memory system in a computer operating system to store the mapping between virtual addresses and physical addresses. Virtual addresses are those unique to the accessing process; physical addresses are those unique to the hardware, i.e., RAM.

Fig. 3.17 Concept of Page Table

Concept of Page Table:
The virtual address is divided into a virtual page number (the high-order bits) and an offset (the low-order bits). For example, as shown in fig. 3.17, with a 16-bit address and a 4-KB page size, the upper 4 bits could specify one of 16 virtual pages and the lower 12 bits would then specify the byte offset (0 to 4095) within the selected page. However, a split with 3 or 5 or some other number of bits for the page is also possible; different splits imply different page sizes.

The virtual page number is used as an index into the page table to find the entry for that virtual page. From the page table entry, the page frame number (if any) is found. The page frame number is attached to the high-order end of the offset, replacing the virtual page number, to form a physical address that can be sent to the memory.

The main use of the page table is to map virtual pages onto page frames. In mathematical terms, the page table is a function, with the virtual page number as argument and the physical frame number as result. Using this function, the virtual page field in a virtual address can be replaced by a page frame field, thus forming a physical memory address.

Two major issues attached with page tables are that the page table can be extremely large and that the mapping must be fast.

Fig. 3.18 Concept of Page Table

Translation process from virtual address to physical address:
The CPU's memory management unit (MMU) stores a cache of recently used mappings from the operating system's page table. This cache is called the Translation Lookaside Buffer (TLB). When a virtual address needs to be translated into a physical address, the TLB is searched first. If a match is found (a TLB hit), the physical address is returned and the memory access can continue. However, if there is no match (called a TLB miss), the CPU generates a processor interrupt called a page fault, as shown in Fig. 3.19.

Fig. 3.19 Translation Process

The operating system has an interrupt handler to deal with such page faults. The handler typically looks up the address mapping in the page table to see whether a mapping exists. If one exists, it is written back to the TLB (this must be done, as the hardware accesses memory through the TLB in a virtual memory system), and the faulting instruction is restarted. The subsequent translation finds a TLB hit, and the memory access continues.

Valid-Invalid Bit:
With each page table entry a valid-invalid bit is associated (1 = in memory, 0 = not in memory). Initially, the valid-invalid bit is set to 0 on all entries, as shown in fig. 3.20. During address translation, if the valid-invalid bit in a page table entry is 0, a page fault is raised.

Fig. 3.20 Valid and Invalid Bit

3.6 PAGE REPLACEMENT ALGORITHMS
Page replacement algorithms are the techniques by which an operating system decides which memory pages to swap out (write to disk) when a page of memory needs to be allocated. Paging happens whenever a page fault occurs and a free page cannot be used for the allocation, either because no pages are available or because the number of free pages is lower than required.

A page replacement algorithm looks at the limited information about page accesses provided by the hardware and tries to select which pages should be replaced so as to minimize the total number of page misses, while balancing this against the costs of primary storage and of the processor time consumed by the algorithm itself. There are many different page replacement algorithms. We evaluate an algorithm by running it on a particular string of memory references and computing the number of page faults.

Examples of where page replacement decisions occur:
(1) Most computers have one or more memory caches consisting of recently used 32-byte or 64-byte memory blocks. When the cache is full, some block has to be chosen for removal. This problem is precisely the same as page replacement except on a shorter time scale (it has to be done in a few nanoseconds,

not milliseconds as with page replacement). The reason for the shorter time scale is that cache block misses are satisfied from main memory, which has no seek time and no rotational latency.
(2) A second example is a Web server. The server can keep a certain number of heavily used Web pages in its memory cache. However, when the memory cache is full and a new page is referenced, a decision has to be made as to which Web page to evict. The considerations are similar to those for pages of virtual memory, except that Web pages are never modified in the cache, so there is always a fresh copy on disk. In a virtual memory system, pages in main memory may be either clean or dirty.

Different Page Replacement Algorithms:
When the total memory requirement in demand paging exceeds the physical memory, pages must be replaced in memory to free frames for new pages. Various techniques are used for page replacement:
(1) FIFO page replacement
(2) Optimal page replacement
(3) LRU (Least Recently Used) page replacement
(4) NRU (Not Recently Used) page replacement
(5) Clock page replacement

First-In, First-Out (FIFO) Page Replacement Algorithm:
The oldest page in main memory is the one selected for replacement. FIFO is easy to implement and is a low-overhead paging algorithm: keep a list, replace pages from the head, and add new pages at the tail. The operating system maintains a list of all pages currently in memory, with the page at the head of the list the oldest one and the page at the tail the most recent arrival. On a page fault, the page at the head is removed and the new page is added to the tail of the list.

Example of FIFO:
In the example shown in Fig. 3.21, the reference string is 2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2.

Fig. 3.21 FIFO Page Replacement Algorithm

Example-1: In the example shown in Fig. 3.22, the reference string is 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1.

Fig. 3.22 FIFO Page Replacement Algorithm

Explanation:
(1) The first three references cause page faults, and the pages are brought into the empty frames.
(2) The next reference, page 2, replaces page 7, which was brought in first.
(3) The next reference is page 0; since it is already in memory, there is no fault. The next reference, 3, then replaces page 0, since 0 was the first of the pages 0, 1, 2 to be brought in.
(4) In the same way, page 1 is then replaced by 0, page 2 by 4, and page 3 by 2, and so on, as shown in the figures.
(5) For the complete reference string, 15 page faults occur with three frames.

Example of FIFO with different numbers of frames:
Here the reference string is 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5.

With 3 frames (3 pages can be in memory at a time per process):

Fig. 3.23 3 Frames

With 4 frames (4 pages can be in memory at a time per process):

Fig. 3.24 4 Frames

Note that this reference string produces 9 page faults with 3 frames but 10 page faults with 4 frames: under FIFO, adding memory can increase the number of faults. This is known as Belady's anomaly, and the sketch below reproduces it.
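The following minimal C sketch simulates FIFO on the reference string above; changing NFRAMES from 3 to 4 changes the fault count from 9 to 10.

#include <stdio.h>

#define NFRAMES 3   /* set to 4 to see Belady's anomaly */

int main(void) {
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int nrefs = sizeof refs / sizeof refs[0];
    int frames[NFRAMES];
    int used = 0, oldest = 0, faults = 0;

    for (int i = 0; i < nrefs; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (!hit) {
            faults++;
            if (used < NFRAMES) {
                frames[used++] = refs[i];       /* fill an empty frame */
            } else {
                frames[oldest] = refs[i];       /* evict the oldest page */
                oldest = (oldest + 1) % NFRAMES;
            }
        }
    }
    printf("%d frames -> %d page faults\n", NFRAMES, faults);
    return 0;
}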

Drawbacks of FIFO:
(1) Process execution is slow.
(2) The rate of page faults can increase (adding frames may even increase faults, as seen above).

Example-2: FIFO Page Replacement Algorithm.

Optimal Page Replacement Algorithm:
The optimal page replacement algorithm labels each page with the number of instructions that will be executed before that page is next referenced, and simply says that the page with the highest label should be removed. If one page will not be used for 8 million instructions and another page will not be used for 6 million instructions, removing the former pushes the page fault that will fetch it back as far into the future as possible.

The only problem with this algorithm is that it is unrealizable: at the time of the page fault, the operating system has no way of knowing when each of the pages will be referenced next. Still, by running a program on a simulator and keeping track of all page references, it is possible to implement optimal page replacement on the second run by using the page reference information collected during the first run.

Example-1:

Fig. Optimal Page Replacement Algorithm

Fig. Optimal Page Replacement Algorithm

Explanation: The figure shows the optimal page replacement algorithm. The first three references cause page faults and are stored in the first three frames. The reference to page 2 replaces page 7, because 7 is not used again for the longest period. The reference to page 3 replaces page 1. The next reference, to page 4, replaces page 0, and so on for pages 0 and 1. Thus, for the last references, 7 replaces 2.

Example with 4 Frames:

Fig. Optimal Page Replacement Algorithm

Limitations:
(1) The algorithm is difficult (in general, impossible) to implement, because it requires advance knowledge of the reference string.
(2) FIFO uses the time when a page was brought into memory, whereas optimal page replacement uses the time when a page will next be used.

Example-2: Optimal Page Replacement.

LRU (Least Recently Used) Page Replacement Algorithm:
A good approximation to the optimal algorithm is based on the observation that pages that have been heavily used in the last few instructions will probably be heavily used again in the next few. Conversely, pages that have not been used for ages will probably remain unused for a long time. The idea is to throw out the page that has been unused for the longest time when a page fault occurs. This strategy is called LRU (Least Recently Used) paging, as shown in the figure.

Although LRU is theoretically realizable, it is not cheap. To fully implement LRU, it is necessary to maintain a linked list of all pages in memory, with the most recently used page at the front and the least recently used page at the rear. The difficulty is that the list must be updated on every memory reference. Finding a page in the list, deleting it, and then moving it to the front is a very time-consuming operation, even in hardware (assuming that such hardware could be built).

Fig. LRU Page Replacement Algorithm

Example-1: LRU Page Replacement Algorithm.

Explanation:

Fig. LRU Page Replacement Algorithm

In the above figure, the first five columns are the same as those found in optimal page replacement. But from the next column on, the LRU page replacement algorithm looks at which page was least recently used and replaces that page with the required one. For example, page 2 is replaced by page 4, and so on.

Fig. LRU Page Replacement Algorithm

The major problem is how to implement LRU replacement:
(1) Counters: Whenever a reference to a page is made, the contents of the clock register are copied to the time-of-use field in the page table entry for that page. We replace the page with the smallest time value.
(2) Stack: Whenever a page is referenced, it is removed from the stack and put on top. In this way, the most recently used page is always at the top of the stack.

Counter implementation: Every page entry has a counter; every time the page is referenced through this entry, the clock is copied into the counter. When a page needs to be replaced, we look at the counters to determine which page to evict.

Stack implementation: Keep a stack of page numbers in a doubly linked form. When a page is referenced, move it to the top; this requires 6 pointers to be changed. No search is needed for replacement.

Why it is difficult to implement: One approach is to tag each page with the time of last reference, but this requires a great deal of overhead. A software sketch of the counter approach appears below.

Example-2: LRU Page Replacement Algorithm.
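This minimal C sketch implements the counter approach described above, run on the reference string of Example-1; the frame count and the one-tick-per-reference clock are simplifying assumptions.

#include <stdio.h>

#define NFRAMES 3
#define EMPTY   -1

int main(void) {
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1};
    int nrefs = sizeof refs / sizeof refs[0];
    int page[NFRAMES], last_use[NFRAMES] = {0};
    int clock = 0, faults = 0;

    for (int j = 0; j < NFRAMES; j++) page[j] = EMPTY;

    for (int i = 0; i < nrefs; i++) {
        clock++;                          /* one tick per memory reference */
        int hit = -1;
        for (int j = 0; j < NFRAMES; j++)
            if (page[j] == refs[i]) { hit = j; break; }

        if (hit >= 0) {
            last_use[hit] = clock;        /* refresh the time of use */
        } else {
            faults++;
            int victim = 0;               /* empty frames keep last_use 0, */
            for (int j = 1; j < NFRAMES; j++)   /* so they are chosen first */
                if (last_use[j] < last_use[victim]) victim = j;
            page[victim] = refs[i];       /* evict least recently used page */
            last_use[victim] = clock;
        }
    }
    printf("LRU with %d frames: %d page faults\n", NFRAMES, faults);
    return 0;
}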

Not Recently Used (NRU) Page Replacement:
In order to allow the operating system to collect useful statistics about which pages are being used and which are not, most computers with virtual memory have two status bits associated with each page: R is set whenever the page is referenced (read or written), and M is set when the page is written to (i.e., modified). The bits are contained in each page table entry. It is important to realize that these bits must be updated on every memory reference, so it is essential that they be set by the hardware. Once a bit has been set to 1, it stays 1 until the operating system resets it to 0 in software.

If the hardware does not have these bits, they can be simulated as follows. When a process is started up, all of its page table entries are marked as not in memory. As soon as any page is referenced, a page fault will occur. The operating system then sets the R bit (in its internal tables), changes the page table entry to point to the correct page, with mode READ ONLY, and restarts the instruction. If the page is subsequently written, another page fault will occur, allowing the operating system to set the M bit and change the page's mode to READ/WRITE.

The R and M bits can be used to build a simple paging algorithm as follows. When a process is started up, both page bits for all its pages are set to 0 by the operating system. Periodically (e.g., on each clock interrupt), the R bit is cleared, to distinguish pages that have not been referenced recently from those that have been. When a page fault occurs, the operating system inspects all the pages and divides them into four categories based on the current values of their R and M bits:

Class 0: not referenced, not modified.
Class 1: not referenced, modified.
Class 2: referenced, not modified.
Class 3: referenced, modified.

Although class 1 pages seem, at first glance, impossible, they occur when a class 3 page has its R bit cleared by a clock interrupt. Clock interrupts do not clear the M bit because this information is needed to know whether the page has to be rewritten to disk or not. Clearing R but not M leads to a class 1 page.

The NRU (Not Recently Used) algorithm removes a page at random from the lowest-numbered nonempty class. Implicit in this algorithm is the idea that it is better to remove a modified page that has not been referenced in at least one clock tick (typically 20 msec) than a clean page that is in heavy use. The main attraction of NRU is that it is easy to understand, moderately efficient to implement, and gives a performance that, while certainly not optimal, may be adequate. A sketch of the class computation appears below.
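The page-table layout in this minimal C sketch is illustrative; a real implementation would also pick a random page from the winning class rather than the first one found.

#include <stdio.h>

#define NPAGES 8

typedef struct {
    int present;  /* page is in memory */
    int r, m;     /* referenced and modified bits */
} pte_t;

/* Compute each resident page's class as 2*R + M (0..3, as in the text)
 * and return a page from the lowest-numbered nonempty class. */
int pick_nru_victim(const pte_t *pt) {
    int victim = -1, best_class = 4;
    for (int i = 0; i < NPAGES; i++) {
        if (!pt[i].present) continue;
        int cls = 2 * pt[i].r + pt[i].m;
        if (cls < best_class) {
            best_class = cls;
            victim = i;        /* first page of the best class seen so far */
        }
    }
    return victim;  /* -1 if no page is resident */
}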

The Clock Page Replacement Algorithm:
Although second chance is a reasonable algorithm, it is unnecessarily inefficient because it is constantly moving pages around on its list. A better approach is to keep all the page frames on a circular list in the form of a clock, as shown in the figure below. A hand points to the oldest page.

Fig. The Clock Page Replacement Algorithm

When a page fault occurs, the page being pointed to by the hand is inspected. If its R bit is 0, the page is evicted, the new page is inserted into the clock in its place, and the hand is advanced one position. If R is 1, it is cleared and the hand is advanced to the next page. This process is repeated until a page with R = 0 is found. Not surprisingly, this algorithm is called clock. It differs from second chance only in the implementation. A minimal sketch of one pass of the hand appears below.
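The frame array and R bits in this sketch are illustrative assumptions; the function performs exactly the hand movement described above.

#define NFRAMES 8

typedef struct {
    int page;  /* page currently held in the frame */
    int r;     /* reference bit for that page */
} frame_t;

static frame_t clock_frames[NFRAMES];
static int hand = 0;  /* points to the oldest page */

/* On a page fault, advance the hand until a page with R == 0 is found;
 * clear R bits along the way (giving those pages a second chance),
 * then place the new page in the chosen frame. Returns the evicted page. */
int clock_replace(int new_page) {
    for (;;) {
        if (clock_frames[hand].r == 0) {
            int victim = clock_frames[hand].page;
            clock_frames[hand].page = new_page;
            clock_frames[hand].r = 1;        /* new page starts referenced */
            hand = (hand + 1) % NFRAMES;     /* advance past the new page */
            return victim;
        }
        clock_frames[hand].r = 0;            /* clear R and move on */
        hand = (hand + 1) % NFRAMES;
    }
}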

Example for all the page replacement techniques:

Fig. All Page Replacement Algorithms

3.7 SEGMENTATION
Segmentation is a memory management technique in which each job is divided into several segments of different sizes, one for each module, each containing pieces that perform related functions. Each segment is actually a different logical address space of the program. When a process is to be executed, its segments are loaded into noncontiguous memory, though every segment is loaded into a contiguous block of available memory.

Segmentation works very similarly to paging, but here segments are of variable length, whereas in paging the pages are of fixed size.

An example of segmented memory allocation: Job 1 includes a main program, subroutine A, and subroutine B, so it is divided into three segments.

Fig. 3.42 Segmentation and Segmap Table

A program segment contains the program's main function, utility functions, data structures, and so on. The operating system maintains a segment map table, shown in fig. 3.42, for every process, and a list of free memory blocks along with segment numbers, their sizes, and the corresponding memory locations in main memory. For each segment, the table stores the starting address of the segment and the length of the segment. A reference to a memory location includes a value that identifies a segment and an offset.

The system maintains three tables:
(1) The Job Table (as with static paging).
(2) The Segment Map Table, which lists details about each job's segments (one table for each job).
(3) The Memory Map Table (as before).

Fig. Segmentation

The addressing scheme requires the segment number and the displacement within that segment, and, because the segments are of different sizes, the displacement must be verified to make sure it is not outside the segment's range. A segmented address reference requires the following steps (a minimal C sketch follows the figure):
(1) Extract the segment number and the displacement (offset) from the logical address.
(2) Use the segment number to index the segment table and obtain the segment's base address and length.
(3) Check that the offset is not greater than the segment length; if it is, an invalid address is signaled.
(4) Generate the required physical address by adding the offset to the base address.

Fig. Address Mapping
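In the sketch, the 4-bit segment number and 12-bit offset split of the logical address is an assumption for illustration; the segment table would be filled in by the operating system.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    uint32_t base;    /* starting physical address of the segment */
    uint32_t length;  /* length of the segment in bytes */
} segment_t;

static segment_t seg_table[16];

/* The four translation steps from the list above. */
uint32_t translate(uint16_t logical) {
    uint16_t seg    = logical >> 12;      /* step 1: segment number */
    uint16_t offset = logical & 0x0FFF;   /*         and displacement */
    segment_t s = seg_table[seg];         /* step 2: index segment table */
    if (offset >= s.length) {             /* step 3: bounds check */
        fprintf(stderr, "invalid address: offset %u, segment length %u\n",
                offset, s.length);
        exit(1);
    }
    return s.base + offset;               /* step 4: base + offset */
}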

Main Points:
(1) The benefits of segmentation include modularity of programs, and sharing and protection.
(2) There is a maximum segment size that the programmer must be aware of.
(3) No internal fragmentation.
(4) Unequal-sized segments.
(5) Noncontiguous memory.
(6) Some external fragmentation.

Segmentation greatly facilitates the sharing of procedures or data between a number of processes. Because a segment normally holds either program code or data, different segments within the same process can have different protections set for them. While protection is possible in a paging environment, it is far more difficult to implement: the programmer is unaware of which pages hold a procedure or data area that he may wish to share; to share it, the programmer would have to keep track of the exact pages that held the procedure and then assign the necessary protection to those pages.

Difference between paging and segmentation:
(1) Paging: A page is a contiguous range of memory addresses which is mapped onto physical memory. Segmentation: A segment is an independent address space; each segment has addresses in a range from 0 to some maximum value.
(2) Paging: There is only one linear address space. Segmentation: There are many address spaces.
(3) Paging: The programmer does not know that it is implemented. Segmentation: The programmer knows that it is implemented.
(4) Paging: Procedures and data cannot be separated. Segmentation: Procedures and data can be separated.
(5) Paging: Procedures cannot be shared between users. Segmentation: Procedures can be shared between users.
(6) Paging: Procedures and data cannot be protected separately. Segmentation: Procedures and data can be protected separately.
(7) Paging: Compilation cannot be done separately. Segmentation: Compilation can be done separately.
(8) Paging: A page is a physical unit. Segmentation: A segment is a logical unit.
(9) Paging: A page is of fixed size. Segmentation: A segment is of arbitrary size.

Chapter-4 File System

4.1 INTRODUCTION
A file can be described as a container that stores data that is accessible by a computer system; it is basically a container holding a set of related information that will be stored on some form of secondary storage. A data file may consist of, for example, a set of computer program instructions, text, or another form of data. A text file may contain just a short letter or a complete book. In other words, a file is basically a defined set of named data.

A file is a named collection of related information that is recorded on secondary storage. Also, a file is the smallest allotment of logical secondary storage; that is, data cannot be written to secondary storage unless they are within a file. The information stored in a file may be source programs, object programs, executable programs, numeric data, text, payroll records, graphic images, sound recordings, and so on.

File Naming:
The rules for file naming vary somewhat from system to system. All current operating systems allow strings of one to eight letters as legal file names (this is the limit in MS-DOS), and many file systems support names as long as 255 characters (as in Windows). Some file systems distinguish between upper- and lowercase letters, whereas others do not: UNIX considers upper and lower case as different, while MS-DOS considers both the same. Hence vipul.txt and VIPUL.txt are different files in UNIX, but in MS-DOS the two are considered the same file.

Many operating systems support two-part file names, with the two parts separated by a period, as in vipul.txt. The part following the period is called the file extension and usually indicates something about the file. In MS-DOS, for example, file names are 1 to 8 characters, plus an optional extension of 1 to 3 characters. In UNIX, the size of the extension, if any, is up to the user, and a file may even have two or more extensions, as in vipul.html.zip, where .html indicates a Web page in HTML and .zip indicates that the file (vipul.html) has been compressed with the zip program. Some of the more common file extensions and their meanings are shown in Fig. 4.1.

Extension  Meaning
.bak       Backup file
.c         C source program
.gif       CompuServe Graphical Interchange Format image
.hlp       Help file

.html      World Wide Web HyperText Markup Language document
.jpg       Still picture encoded with the JPEG standard
.mp3       Music encoded in MPEG layer 3 audio format
.mpg       Movie encoded with the MPEG standard
.o         Object file (compiler output, not yet linked)
.pdf       Portable Document Format file
.ps        PostScript file
.tex       Input for the TEX formatting program
.txt       General text file
.zip       Compressed archive

Fig. 4.1 Examples of File Extensions

The system can use the extension to indicate the type of the file and the type of operations that can be done on that file, as Windows does. When a user double-clicks on a file name, the program assigned to its file extension is launched with the file as a parameter. For example, double-clicking on vipul.txt starts Notepad for editing that file, and if the name is vipul.doc it will be opened in Microsoft Word. As another example, only a file with a .com, .exe, or .bat extension can be executed; .com and .exe files are two forms of binary executable files. Further examples:
(1) .bat: a batch file containing, in ASCII format, commands to the operating system.
(2) .asm: assemblers expect source files to have an .asm extension; such a file consists of an assembly language program for a processor like the 8086, 8051, etc.
(3) .wp: the WordPerfect word processor expects its files to end with a .wp extension.
(4) .jpeg, .gif, .png: files which consist of images.
(5) .mpeg: files which consist of motion pictures.

File Structuring:
A file has a certain defined structure according to its type. For example:
(1) A text file is a sequence of characters organized into lines (and possibly pages).
(2) A source file is a sequence of subroutines and functions, each of which is further organized as declarations followed by executable statements.
(3) An object file is a sequence of bytes organized into blocks understandable by the system's linker.
(4) An executable file is a series of code sections that the loader can bring into memory and execute.
(5) When an operating system defines different file structures, it must also contain the code to support these file structures. UNIX and MS-DOS support a minimum number of file structures.

Files can be structured in any of several ways. The three common possibilities, shown in Fig. 4.2, are:
1. Stream of bytes
2. Records
3. Tree

Stream of Bytes:
The operating system regards files as just sequences of bytes; this provides the maximum amount of flexibility. User programs can put anything they want in their files and interpret the contents however they like. All versions of UNIX (including Linux and OS X) and Windows use this file model. The main advantage of such file structuring is that it simplifies file management for the operating system, and applications can impose their own structure.

Fig. 4.2 Types of File Structure

Record File Structure:
A file is a sequence of fixed-length records, each with some internal structure, as shown in fig. 4.2. Here a collection of bytes is treated as a unit: for example, an employee record. Operations are at the level of records (read_record, write_record), and a file is a collection of similar records. The operating system can optimize operations on records. No current general-purpose system uses this model as its primary file system any more, but back in the days of 80-column punched cards and 132-character line printer paper, this was a common model on mainframe computers.

Tree File Structure:
In this organization, a file consists of a tree of records, not necessarily all the same length, each containing a key field in a fixed position in the record. The tree is sorted on the key field, to allow rapid searching for a particular key. The basic operation here is not to get the "next" record, although that is also possible, but to get the record with a specific key. The system can search a file by key without worrying about the record's exact position in the file. Also, new records can be added to the file, with the operating system, and not the user, deciding where to place them. This type of

file is clearly quite different from the unstructured byte streams used in UNIX and Windows and is used on some large mainframe computers for commercial data processing.

File Types:
The types of files recognized by the system are either regular, directory, or special. However, the operating system uses many variations of these basic types:
(1) Regular files contain user information and store data (text, binary, and executable). Most files are regular files.
(2) Directory files contain information used to access other files and maintain the structure of the file system.
(3) Special files: character special files are related to input/output and are used to model serial I/O devices, such as terminals, printers, and networks; block special files are used to model disks.

Regular files: Regular files are the most common files and are used to contain data. Regular files are generally either ASCII files or binary files. ASCII files consist of lines of text. In some systems each line is terminated by a carriage return character; in others, the line feed character is used; some systems (e.g., Windows) use both. Lines need not all be of the same length. The great advantage of ASCII files is that they can be displayed and printed as is, and they can be edited with any text editor.

Text files: Text files are regular files that contain information stored in ASCII format and readable by the user. You can display and print these files. The lines of a text file must not contain NUL characters, and none can exceed {LINE_MAX} bytes in length, including the newline character. The term "text file" does not prevent the inclusion of control or other nonprintable characters (other than NUL).

Binary files: Binary files are regular files that contain information readable by the computer. Binary files might be executable files that instruct the system to accomplish a job. Commands and programs are stored in executable binary files. Special compiling programs translate ASCII text into binary code.

Directory files: Directory files contain information that the system needs to access all types of files, but directory files do not contain the actual file data. As a result, directories occupy less space than a regular file and give the file system structure, flexibility, and depth. Each directory entry represents either a file or a subdirectory. Each entry contains the name of the file and the file's index node reference number (i-node number). The i-node number points to the unique index node assigned to the file, which describes the location of the data associated with the file. Directories are created and controlled by a separate set of commands.

Special files: Special files define devices for the system or are temporary files created by processes. These files are also known as device files. They represent physical devices like disks, terminals, printers, networks, tape drives, etc. They are of two types:
(1) Character special files: data is handled character by character, as in the case of terminals or printers.
(2) Block special files: data is handled in blocks, as in the case of disks and tapes.

File Access Methods:
The purpose of a file is to store information. This information must be accessed and read into computer memory. The information in a file can be accessed in several ways. Some systems provide only one access method for files, while others provide many; IBM systems, for example, support many access methods, and the problem then is selecting the right method for a particular application. The different access methods are:
(1) Sequential access method
(2) Direct access method
(3) Indexed access method

Sequential Access Method:
The simplest access method is sequential access. Information in the file is processed in order, one record after the other. This mode of access is by far the most common; for example, editors and compilers usually access files in this fashion.

Fig. 4.3 Sequential Access

The bulk of the operations on a file are reads and writes. A read operation reads the next portion of data from the file and automatically advances a file pointer, which tracks the I/O location. Similarly, a write appends to the end of the file, and the pointer advances to the end of the newly written material (the new end of file). Such a file can be reset to the beginning and, on some systems, a program may be able to skip forward or backward n records, for some integer n (perhaps only for n = 1). Sequential access, depicted in Fig. 4.3, is based on a tape model of a file, and works as well on sequential-access devices as it does on random-access ones.

Roll No  Name     Address  Age
123      Manoj    Shd      -
124      Kaushal  Unr      -
125      Rishabh  Kyn      10
126      Asha     Dom      -
127      Vimla    Unr      60

Fig. 4.4 Sequential Access

Roll No   Name      Address   Age
123       Manoj     Shd       -
124       Kaushal   Unr       -
125       Rishabh   Kyn       10
126       Asha      Dom       -
127       Vimla     Unr       60
Fig. 4.4 Sequential Access

In this type of file a fixed format is used for the records, as shown in Fig. 4.4. All records are of the same length, consisting of the same number of fixed-length fields in a particular order. One particular field, usually the first field in each record, is referred to as the key field; the key field uniquely identifies the record.
Advantages:
(1) The organization of data is simple.
(2) It is easy to access the next record.
(3) No extra data structures are required.
(4) Sequential files are typically used in batch applications that process all the records, such as payroll and billing.
(5) They are easily stored on tapes as well as disks.
(6) Backup copies are created easily.
Disadvantages:
(1) Wastage of storage space because of the master file and transaction file.
(2) For interactive applications that involve queries and/or updates of individual records, the sequential file provides poor performance.

Direct Access (Relative Access): A file is made up of fixed-length logical records that allow programs to read and write records rapidly in no particular order. The direct-access method is based on a disk model of a file, since disks allow random access to any file block. For direct access, the file is viewed as a numbered sequence of blocks or records. A direct-access file allows arbitrary blocks to be read or written; thus, we may read block 24, then read block 13, and then write block 17, as shown in Fig. 4.5.

Fig. 4.5 Direct Access

For the direct-access method, the file operations must be modified to include the block number as a parameter. Thus, we have read n, where n is the block number, rather than read next, and write n rather than write next. An alternative approach is to retain read next and write next, as with sequential access, and to add an operation position file to n, where n is the block number. Then, to effect a read n, we would position to n and then read next; a sketch of this style of access follows.
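A minimal C sketch of direct access (ours; the record length and file name are assumptions): since each record occupies L bytes, record N starts at byte L * (N - 1), so one seek plus one read fetches any record in any order.

#include <stdio.h>

#define L 64   /* assumed logical record length in bytes */

/* Read record N (numbered from 1) directly, without touching the
   records before it: seek to L * (N - 1), then read L bytes. */
int read_record(FILE *fp, long n, unsigned char buf[L])
{
    if (fseek(fp, (n - 1) * L, SEEK_SET) != 0)
        return -1;
    return fread(buf, 1, L, fp) == L ? 0 : -1;
}

int main(void)
{
    FILE *fp = fopen("records.dat", "rb");   /* hypothetical file */
    unsigned char buf[L];

    if (fp == NULL)
        return 1;
    /* Arbitrary order: read record 24, then record 13; that is the
       essence of direct access. */
    read_record(fp, 24, buf);
    read_record(fp, 13, buf);
    fclose(fp);
    return 0;
}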

Given a logical record length L, a request for record N is turned into an I/O request for L bytes starting at location L * (N - 1) within the file (assuming the first record is N = 1). Since logical records are of a fixed size, it is also easy to read, write, or delete a record.

Indexed Access Method: These methods generally involve the construction of an index for the file. The index, similar to the index in the back of a book, contains pointers to the various blocks. To find a record in the file, the user first searches the index and then uses the pointer to access the file directly and find the desired record.

Fig. 4.6 Indexed Access (an index file maps key fields, such as first names, to logical record numbers in a relative file)

For example, IBM's indexed sequential-access method (ISAM) uses a small master index that points to disk blocks of a secondary index. The secondary index blocks point to the actual file blocks. The file is kept sorted on a defined key. To find a particular item, we first make a binary search of the master index, which provides the block number of the secondary index. This block is read in, and again a binary search is used to find the block containing the desired record. Finally, this block is searched sequentially. In this way, any record can be located from its key by at most two direct-access reads. Figure 4.6 shows a similar situation as implemented by VMS index and relative files.
Advantages:
(1) Variable-length records are allowed.
(2) An indexed sequential file may be updated in sequential or random mode.
(3) Operation is very fast.
Disadvantages:
(1) The major disadvantage of the indexed sequential file is that, as the file grows, performance deteriorates rapidly because of overflows, and consequently there arises the need for periodic reorganization. Reorganization is an expensive process, and the file is unavailable during reorganization.
(2) When a new record is added to the main file, all of the index files must be updated.
(3) Maintaining the index files consumes a large amount of storage space.
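To make the idea concrete, here is a small illustrative C sketch (our own; the index contents, record length, and file name relative.dat are assumptions): a sorted in-memory index maps a key to a logical record number, which then costs one direct read of the relative file.

#include <stdio.h>
#include <string.h>

#define RECLEN 64   /* assumed record length of the relative file */

/* One index entry: key field maps to a logical record number. */
struct index_entry {
    char key[16];
    long recno;
};

/* Binary search of a sorted index, as in the ISAM description. */
long lookup(const struct index_entry *idx, int n, const char *key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        int c = strcmp(key, idx[mid].key);
        if (c == 0) return idx[mid].recno;
        if (c < 0) hi = mid - 1; else lo = mid + 1;
    }
    return -1;   /* key not present */
}

int main(void)
{
    struct index_entry idx[] = {   /* toy index, kept sorted on key */
        { "Asha", 3 }, { "Kaushal", 2 }, { "Manoj", 1 }, { "Vimla", 5 }
    };
    FILE *fp = fopen("relative.dat", "rb");   /* hypothetical relative file */
    char rec[RECLEN];
    long n = lookup(idx, 4, "Manoj");

    if (fp && n >= 0 && fseek(fp, (n - 1) * RECLEN, SEEK_SET) == 0)
        fread(rec, 1, RECLEN, fp);   /* one direct read fetches the record */
    if (fp) fclose(fp);
    return 0;
}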

File Attributes:
A file is named for the convenience of its human users, and it is referred to by its name. A name is usually a string of characters, such as example.java. Some systems differentiate between upper- and lowercase characters in names, whereas other systems consider the two cases to be equivalent. A file has certain other attributes, which vary from one operating system to another, but typically consist of these:
Name
Identifier
Type
Location
Size
Protection
Time and date of creation, and user identification

Name: The symbolic file name is the only information kept in human-readable form.
Identifier: This unique tag, usually a number, identifies the file within the file system; it is the non-human-readable name for the file.
Type: This information is needed for those systems that support different file types.
Location: This information is a pointer to a device and to the location of the file on that device.
Size: The current size of the file (in bytes, words, or blocks), and possibly the maximum allowed size, are included in this attribute.
Protection: Access-control information determines who can do reading, writing, executing, and so on.
Time, date of creation and user identification: This information may be kept for creation, last modification, and last use. These data can be useful for protection, security, and usage monitoring.

The information about all files is kept in the directory structure, which also resides on secondary storage. Typically, a directory entry consists of the file's name and its unique identifier; the identifier in turn locates the other file attributes. It may take more than a kilobyte to record this information for each file, so in a system with many files the size of the directory itself may be megabytes. Fig. 4.7 lists some possible file attributes and their meanings.

Attribute            Meaning
Protection           Who can access the file and in what way
Password             Password needed to access the file
Creator              ID of the person who created the file
Owner                Current owner
Read-only flag       0 for read/write; 1 for read only
Hidden flag          0 for normal; 1 for do not display in listings
System flag          0 for normal files; 1 for system file
Archive flag         0 for has been backed up; 1 for needs to be backed up
ASCII/binary flag    0 for ASCII file; 1 for binary file
Random access flag   0 for sequential access only; 1 for random access
Temporary flag       0 for normal; 1 for delete file on process exit
Lock flags           0 for unlocked; nonzero for locked
Record length        Number of bytes in a record
Key position         Offset of the key within each record
Key length           Number of bytes in the key field
Creation time        Date and time the file was created
Time of last access  Date and time the file was last accessed
Time of last change  Date and time the file was last changed
Current size         Number of bytes in the file
Maximum size         Number of bytes the file may grow to
Fig. 4.7 File Attributes
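Many of these attributes can be inspected from a program. As a hedged illustration using the POSIX stat() call (not part of the original text), the sketch below prints a file's size, owner ID, protection bits, and time of last change.

#include <stdio.h>
#include <sys/stat.h>
#include <time.h>

int main(int argc, char *argv[])
{
    struct stat st;

    if (argc < 2 || stat(argv[1], &st) != 0) {
        perror("stat");
        return 1;
    }
    printf("size: %lld bytes\n", (long long)st.st_size);
    printf("owner uid: %ld\n", (long)st.st_uid);
    printf("protection: %o\n", (unsigned)(st.st_mode & 0777));
    printf("last modified: %s", ctime(&st.st_mtime));
    return 0;
}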

File Operation:
A file is an ADT (Abstract Data Type). To define a file properly, we need to consider the operations that can be performed on files. The operating system can provide system calls to create, write, read, reposition, delete, and truncate files.

Creating a file: Two steps are necessary to create a file. (1) Space in the file system must be found for the file. (2) An entry for the new file must be made in the directory. The directory entry records the name of the file and its location in the file system, and possibly other information.

Writing a file: To write a file, we make a system call specifying both the name of the file and the information to be written to the file. Given the name of the file, the system searches the directory to find the location of the file. The system must keep a write pointer to the location in the file where the next write is to take place. The write pointer must be updated whenever a write occurs.

Reading a file: To read from a file, we use a system call that specifies the name of the file and where (in memory) the next block of the file should be put. Again, the directory is searched for the associated directory entry, and the system keeps a read pointer to the location in the file where the next read is to take place. Once the read has taken place, the read pointer is updated. A given process is usually only reading or writing a given file, so the current operation location is kept as a per-process current-file-position pointer. Both the read and write operations use this same pointer, saving space and reducing system complexity.

Repositioning within a file: The directory is searched for the appropriate entry, and the current-file-position is set to a given value. Repositioning within a file does not need to involve any actual I/O. This file operation is also known as a file seek.

Deleting a file: To delete a file, we search the directory for the named file. Having found the associated directory entry, we release all the file space, so that it can be reused by other files, and erase the directory entry.

Truncating a file: The user may want to erase the contents of a file but keep its attributes. Rather than forcing the user to delete the file and then recreate it, this function allows all attributes to remain unchanged (except for the file length) but lets the file be reset to length zero and its file space released.

Other operations performed on files: Other common operations include
(1) Appending a file
(2) Renaming a file
(3) Creating a copy
(4) Searching

Appending a file: Appending adds new information to the end of an existing file. This call is a restricted form of write: it can add data only to the end of the file. Systems that provide a minimal set of system calls rarely have append, but many systems provide multiple ways of doing the same thing, and these systems sometimes have append.

Renaming a file: Renaming changes the name of an existing file; for example, vipul.txt may be renamed to vipul.pub.

Creating a copy of a file: Basic operations may be combined to perform other file operations. For instance, creating a copy of a file, or copying the file to another I/O device such as a printer or a display, may be accomplished by creating a new file and then reading from the old file and writing to the new.
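The copy operation just described reduces to a create-read-write loop. A minimal C sketch (ours; the source and destination file names are assumptions):

#include <stdio.h>

/* Copy a file by creating a new file, then reading from the old
   and writing to the new, the combination described above. */
int main(void)
{
    FILE *in  = fopen("old.txt", "rb");   /* existing file (assumed name) */
    FILE *out = fopen("new.txt", "wb");   /* create: directory entry plus space */
    char buf[4096];
    size_t n;

    if (in == NULL || out == NULL)
        return 1;
    while ((n = fread(buf, 1, sizeof buf, in)) > 0)   /* advances read pointer */
        fwrite(buf, 1, n, out);                       /* advances write pointer */
    fclose(in);
    fclose(out);
    return 0;
}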

We also want operations that allow a user to get and set the various attributes of a file. For example, we may want operations that allow a user to determine the status of a file, such as the file's length, and operations that allow a user to set file attributes, such as the file's owner.

Searching: The file operations mentioned above involve searching the directory for the entry associated with the named file. To avoid this constant searching, many systems require that an open system call be used before a file is first used actively. The operating system keeps a small table containing information about all open files (the open-file table). When a file operation is requested, the file is specified via an index into this table, so no searching is required. When the file is no longer actively used, it is closed by the process, and the operating system removes its entry from the open-file table.

Information Associated with an Open File:
Several pieces of information are associated with an open file:
(1) File pointer
(2) File open count
(3) Disk location of the file
(4) Access rights

File pointer: On systems that do not include a file offset as part of the read and write system calls, the system must track the last read-write location as a current-file-position pointer. This pointer is unique to each process operating on the file, and therefore must be kept separate from the on-disk file attributes.

File open count: As files are closed, the operating system must reuse its open-file table entries, or it could run out of space in the table. Because multiple processes may open a file, the system must wait for the last close before removing the open-file table entry. This counter tracks the number of opens and closes and reaches zero on the last close; the system can then remove the entry.

Disk location of the file: Most file operations require the system to modify data within the file. The information needed to locate the file on disk is kept in memory to avoid having to read it from disk for each operation.

Access rights: Each process opens a file in an access mode. This information is stored in the per-process table so the operating system can allow or deny subsequent I/O requests.

4.2 DIRECTORIES
The file systems of computers can be extensive; some systems store millions of files on terabytes of disk. To manage all these data, we need to organize them: when there are many files, searching time increases, so the data on disk are organized into directories. The directory structure organizes and provides information (e.g., name, size, location, and type) on all the files in the system.

Fig. 4.8 Directory Structure (a disk divided into partitions, e.g., Partition A, Partition B, Partition C)

This organization is usually done in two parts.
(1) Disks are split into one or more partitions, also known as minidisks in the IBM world or volumes in the PC and Macintosh worlds. Each disk on a system contains at least one partition, a low-level structure in which files and directories reside.

Fig. 4.9 File-System Organization (both the directory structure and the files reside on disk; each of several disks holds directory files and data files)

About Partitions and Volumes:
(a) Partitions are used to provide several separate areas within one disk, each treated as a separate storage device, whereas some systems allow partitions to be larger than a disk, grouping disks into one logical structure.
(b) Hence the user needs to be concerned only with the logical directory and file structure, and can completely ignore the problems of physically allocating space for files. For this reason partitions can be thought of as virtual disks.

(c) Partitions can also store multiple operating systems, allowing a system to boot and run more than one.
(2) Each partition contains information about the files within it. This information is kept in entries in a device directory or volume table of contents. The device directory (more commonly known simply as a directory) records information such as name, location, size, and type for all files on that partition. Figure 4.9 shows the typical file-system organization.

The directory can be viewed as a symbol table that translates file names into their directory entries. If we take such a view, we see that the directory itself can be organized in many ways. The most common schemes for defining the logical structure of a directory are:
Single-Level Directory
Two-Level Directory
Tree Directory Structure

Single-Level Directory: The simplest directory structure is the single-level directory. All files are contained in the same directory, which is easy to support and understand, as shown in Fig. 4.10. A single-level directory has significant limitations, however, when the number of files increases or when the system has more than one user. Since all files are in the same directory, they must have unique names; if two users call their data file test, the unique-name rule is violated.

Fig. 4.10 Single-Level Directory Structure (all files are contained in the same directory; limitations: naming problem, grouping problem)

For example, in one programming class, 23 students called the program for their second assignment prog2; another 11 called it assign2. Although file names are generally selected to reflect the content of the file, they are often limited in length: the MS-DOS operating system allows only 11-character file names, while UNIX allows 255 characters. Even a single user on a single-level directory may find it difficult to remember the names of all the files as the number of files increases. It is not uncommon for a user to have hundreds of files on one computer system and an equal number of additional files on another system. A single-level directory often leads to confusion of file names between different users. The standard solution is to create a separate directory for each user.

Two-Level Directory Structure: In the two-level directory structure, each user has her own User File Directory (UFD). Each UFD has a similar structure, but lists only the files of a single user. When a user job starts or a user logs in, the system's Master File Directory (MFD) is searched.

The MFD is indexed by user name or account number, and each entry points to the UFD for that user. When a user refers to a particular file, only his own UFD is searched. Thus, different users may have files with the same name, as long as all the file names within each UFD are unique.

Fig. 4.11 Two-Level Directory Structure (a separate directory for each user; different users can use the same file name; efficient searching; no grouping capability)

To create a file for a user, the operating system searches only that user's UFD to ascertain whether another file of that name exists. To delete a file, the operating system confines its search to the local UFD; thus, it cannot accidentally delete another user's file that has the same name. Fig. 4.11 shows the two-level structure. The user directories themselves must be created and deleted as necessary. A special system program is run with the appropriate user name and account information; the program creates a new UFD and adds an entry for it to the MFD. The execution of this program might be restricted to system administrators.

Hierarchical / Tree Directory: The tree is the most popular directory structure. It is very useful because it solves the problem of grouping, i.e., it can group different users or directories. Data can also be searched efficiently thanks to the path concept: a path describes where exactly a file or directory is stored.

Fig. 4.12 Tree-Structured Directory (subdirectories; efficient searching; grouping capability; current directory, also called the working directory)

In the tree-structured directory, the directories themselves are files. This leads to the possibility of having subdirectories that can contain files and sub-subdirectories, as shown in Fig. 4.12. An interesting policy decision in a tree-structured directory is how to handle the deletion of a directory. If a directory is empty, its entry in the containing directory can simply be deleted. However, suppose the directory to be deleted is not empty, but contains several files or possibly subdirectories. Some systems will not delete a directory unless it is empty; thus, to delete a directory, someone must first delete all the files in that directory. If there are any subdirectories, this procedure must be applied recursively to them, so that they can be deleted also. This approach may result in a substantial amount of work. An alternative approach is to assume that, when a request is made to delete a directory, all of that directory's files and subdirectories are also to be deleted. The Microsoft Windows family of operating systems (95, 98, NT, 2000) maintains an extended two-level directory structure, with devices and partitions assigned a drive letter.

Acyclic-Graph Directories:

Fig. 4.13 Acyclic Directory Structure

The acyclic directory structure shown in Fig. 4.13 is an extension of the tree-structured directory structure. In the tree-structured directory, the files and directories starting from some fixed directory are owned by one particular user. In the acyclic structure, this prohibition is removed, and thus a directory or file under a directory can be owned or shared by several users.

Path Name:
A path, the general form of the name of a file or directory, specifies a unique location in a file system. A path points to a file-system location by following the directory-tree hierarchy, expressed as a string of characters in which path components, separated by a delimiting character, represent each directory. The delimiting character is most commonly the slash ("/"), the backslash ("\"), or the colon (":"), though some operating systems may use a different delimiter. Paths are used extensively in computer science to represent the directory/file relationships common in modern operating systems, and are essential in the construction of Uniform Resource Locators (URLs). Resources can be represented by either absolute or relative paths.

Absolute path name (fully qualified path name): Here each file is given an absolute path name consisting of the path from the root directory to the file. As an example, the path /usr/ast/mailbox means that the root directory contains a subdirectory usr, which in turn contains a subdirectory ast, which contains the file mailbox. Absolute path names always start at the root directory and are unique. In UNIX the components of the path are separated by /. In Windows the separator is \. In MULTICS it was >. Thus, the same path name would be written as follows in these three systems:
Windows \usr\ast\mailbox
UNIX /usr/ast/mailbox
MULTICS >usr>ast>mailbox
No matter which character is used, if the first character of the path name is the separator, then the path is absolute.

Relative path name: The other kind of name is the relative path name. This is used in conjunction with the concept of the working directory (also called the current directory). A user can designate one directory as the current working directory, in which case all path names not beginning at the root directory are taken relative to the working directory. For example, if the current working directory is /usr/ast, then the file whose absolute path is /usr/ast/mailbox can be referenced simply as mailbox. In other words, the UNIX command
cp /usr/ast/mailbox /usr/ast/mailbox.bak
and the command
cp mailbox mailbox.bak
do exactly the same thing if the working directory is /usr/ast. The relative form is often more convenient, but it does the same thing as the absolute form.

Most operating systems that support a hierarchical directory system have two special entries in every directory, . and .., generally pronounced dot and dotdot. Dot refers to the current directory; dotdot refers to its parent (except in the root directory, where it refers to itself). Suppose a certain process has /usr/ast as its working directory. It can use .. to go higher up the tree; for example, it can copy the file /usr/lib/dictionary to its own directory using the command
cp ../lib/dictionary .
The first path instructs the system to go upward (to the usr directory), then to go down to the directory lib to find the file dictionary. The second argument (dot) names the current directory. When the cp command gets a directory name (including dot) as its last argument, it copies all the files to that directory. Of course, a more normal way to do the copy would be to use the full absolute path name of the source file:
cp /usr/lib/dictionary .
Here the use of dot saves the user the trouble of typing dictionary a second time. Nevertheless, typing
cp /usr/lib/dictionary dictionary
also works fine, as does
cp /usr/lib/dictionary /usr/ast/dictionary
All of these do exactly the same thing.
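As an illustrative POSIX C sketch (assuming the directory /usr/ast exists and the program has permission to enter it), the working directory can be inspected and changed with getcwd() and chdir(), after which a relative name such as mailbox resolves against the new working directory:

#include <stdio.h>
#include <unistd.h>
#include <limits.h>

#ifndef PATH_MAX
#define PATH_MAX 4096   /* fallback if the platform leaves it undefined */
#endif

int main(void)
{
    char cwd[PATH_MAX];

    if (getcwd(cwd, sizeof cwd) != NULL)
        printf("working directory: %s\n", cwd);

    /* Make /usr/ast the working directory (assumed to exist). */
    if (chdir("/usr/ast") == 0) {
        /* A relative name is now resolved against /usr/ast,
           so "mailbox" means /usr/ast/mailbox. */
        FILE *fp = fopen("mailbox", "r");
        if (fp != NULL)
            fclose(fp);
    }
    return 0;
}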

Other Examples: If a file name begins with only a disk designator, but without the backslash after the colon, it is interpreted as a relative path to the current directory on the drive with the specified letter. Note that the current directory may or may not be the root directory, depending on what it was set to during the most recent "change directory" operation on that disk. Examples of this format are as follows:
"C:tmp.txt" refers to a file named "tmp.txt" in the current directory on drive C.
"C:tempdir\tmp.txt" refers to a file in a subdirectory of the current directory on drive C.
A path is also said to be relative if it contains "double dots", that is, two periods together in one component of the path. This special specifier is used to denote the directory above the current directory, otherwise known as the "parent directory". Examples of this format are as follows:
"..\tmp.txt" specifies a file named tmp.txt located in the parent of the current directory.
"..\..\tmp.txt" specifies a file that is two directories above the current directory.
"..\tempdir\tmp.txt" specifies a file named tmp.txt located in a directory named tempdir that is a peer of the current directory.

Directory Operation:
The system calls for managing directories vary from system to system. The following are some of the system calls used for managing directories in the UNIX operating system; a small program using the opendir/readdir calls appears after the list.
Create: A directory is created. It is empty except for dot and dotdot, which are put there automatically by the system.
Delete: A directory is deleted. Only an empty directory can be deleted; a directory containing only dot and dotdot is considered empty, as these cannot be deleted.
Opendir: Directories can be read. For example, to list all the files in a directory, a listing program opens the directory to read out the names of all the files it contains.
Closedir: When a directory has been read, it should be closed to free up internal table space.
Readdir: This call returns the next entry in an open directory. Formerly, it was possible to read directories using the usual read system call, but that approach has the disadvantage of forcing the programmer to know and deal with the internal structure of directories.
Rename: This call renames the directory.
Link: Linking is a technique that allows a file to appear in more than one directory. This system call specifies an existing file and a path name, and creates a link from the existing file to the name specified by the path. In this way, the same file may appear in multiple directories. A link of this kind, which increments the counter in the file's i-node (to keep track of the number of directory entries containing the file), is sometimes called a hard link.
Unlink: A directory entry is removed. If the file being unlinked is only present in one directory (the normal case), it is removed from the file system. If it is present in multiple directories, only the path name specified is removed.
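A hedged minimal sketch of the Opendir/Readdir/Closedir calls in POSIX C, listing the names in a directory (the choice of the current directory "." is an assumption of the example):

#include <stdio.h>
#include <dirent.h>

int main(void)
{
    DIR *dp = opendir(".");          /* open the current directory */
    struct dirent *entry;

    if (dp == NULL)
        return 1;
    /* readdir() returns the next entry on each call, including the
       special entries "." and "..". */
    while ((entry = readdir(dp)) != NULL)
        printf("%s\n", entry->d_name);
    closedir(dp);                    /* free internal table space */
    return 0;
}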

4.3 FILE SYSTEM IMPLEMENTATION
4.3.1 File System Layout:
The disk on which the operating system resides may hold a single partition or multiple partitions; the file system is stored on this disk, and each partition carries an independent file system. Sector 0 of the disk is called the Master Boot Record (MBR) and is used to boot the computer. The end of the MBR contains the partition table, which gives the starting and ending addresses of each partition. One of the partitions in the table is marked as active, and it is this partition the computer boots from: at start-up, control passes from the BIOS to the MBR via the bootstrap loader, and the MBR program locates the active partition and reads in its first block, called the boot block.

Fig. 4.14 File System Layout

The file system will contain some of the items shown in Fig. 4.14: a boot block, a superblock, free-space management information, i-nodes, the root directory, and the remaining files and directories.
Superblock: It contains all the key parameters about the file system and is read into memory when the computer is booted or the file system is first touched. Information in the superblock includes a magic number identifying the file-system type, the number of blocks in the file system, and other key administrative information.
Free-space management: This records which blocks in the file system are free, for example in the form of a bitmap or a list of pointers.
i-nodes: An array of data structures, one per file, telling all about the file.
Root directory: It contains the top of the file-system tree.
Files and directories: The remainder of the disk contains all the other directories and files.
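As a purely illustrative C declaration (no real file system is implied; every field name and width is an assumption), a superblock holding the parameters named above might look like this:

#include <stdint.h>

/* Illustrative on-disk superblock layout, mirroring the items in the
   text: a magic number identifying the file-system type, the number
   of blocks, and administrative data locating the other structures. */
struct superblock {
    uint32_t magic;          /* identifies the file-system type */
    uint32_t block_size;     /* bytes per block */
    uint64_t total_blocks;   /* number of blocks in the file system */
    uint64_t free_blocks;    /* current count of free blocks */
    uint64_t inode_count;    /* number of i-nodes */
    uint64_t free_bitmap;    /* block number of the free-space bitmap */
    uint64_t inode_table;    /* block number where the i-node array starts */
    uint64_t root_inode;     /* i-node number of the root directory */
};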

4.3.2 Implementation of File System:
Contiguous Allocation:

Fig. 4.15 Contiguous File Allocation Method

Fig. 4.16 Contiguous File Allocation

The contiguous allocation method requires each file to occupy a set of contiguous addresses on the disk. Disk addresses define a linear ordering on the disk. Notice that, with this ordering, accessing block b+1 after block b normally requires no head movement; when head movement is needed (from the last sector of one cylinder to the first sector of the next cylinder), it is only one track. Thus, the number of disk seeks required for accessing contiguously allocated files is minimal, as is the seek time when a seek is finally needed. Contiguous allocation of a file is defined by the disk address of the first block and the file's length. If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1. The directory entry for each file indicates the address of the starting block and the length of the area allocated for the file. Fig. 4.15 and Fig. 4.16 show the contiguous file allocation method.

Advantages of contiguous allocation:
(1) It supports both sequential and direct access methods.
(2) Contiguous allocation is the best form of allocation for sequential files; multiple blocks can be brought in at a time to improve I/O performance for sequential processing.
(3) It is also easy to retrieve a single block from a file. For example, if a file starts at block n and the ith block of the file is wanted, its location on secondary storage is simply n + i.
(4) Quick and easy calculation of the block holding data: just an offset from the start of the file (see the sketch after this list).
(5) For sequential access, almost no seeks are required.
(6) Even direct access is fast: just one seek and one read per block access.
Disadvantages of contiguous allocation:
(1) There is no best place to put a new file.
(2) There are problems when a file gets bigger; the whole file may have to be moved.
(3) External fragmentation occurs.
(4) Compaction may be required, and it can be very expensive.
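A tiny illustrative C helper (ours) showing the address arithmetic: given the start block and length from the directory entry, logical block i of the file lives at disk block start + i.

/* Directory information for one contiguously allocated file. */
struct contiguous_file {
    long start;    /* disk address of the first block */
    long length;   /* number of blocks in the file */
};

/* Map logical block i (0-based) to its disk block, or -1 if i is
   outside the file. No table lookups are needed, just arithmetic. */
long block_of(const struct contiguous_file *f, long i)
{
    if (i < 0 || i >= f->length)
        return -1;
    return f->start + i;
}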

Linked Allocation:

Fig. 4.17 Linked File Allocation

Fig. 4.18 Linked File Allocation

In linked allocation, shown in Fig. 4.18, each file is a linked list of disk blocks. The directory contains a pointer to the first (and optionally the last) block of the file. For example, a file of five blocks that starts at block 4 might continue at block 7, then block 16, block 10, and finally block 27. Each block contains a pointer to the next block, and the last block contains a NIL pointer; the value -1 may be used for NIL to differentiate it from block 0. With linked allocation, each directory entry has a pointer to the first disk block of the file. This pointer is initialized to nil (the end-of-list pointer value) to signify an empty file. A write to a file removes the first free block and writes to that block; this new block is then linked to the end of the file. To read a file, the pointers are simply followed from block to block, as in the sketch below.
Advantages of linked allocation:
(1) It does not suffer from external fragmentation, so there is no need to compact or relocate files.
(2) It supports sequential access well; direct access is possible but slow (see the disadvantages).
(3) There are no more variable-sized file-allocation problems: everything takes place in fixed-size blocks, which makes space allocation much easier.
Disadvantages of linked allocation:
(1) Potentially terrible performance for direct-access files: the pointers must be followed from one disk block to the next.
(2) Even sequential access is less efficient than for contiguous files, because it may generate long seeks between blocks.
(3) Reliability: if one pointer is lost or damaged, the rest of the file becomes unreachable.
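An illustrative C sketch (ours) of following the chain: the disk is simulated by an array next[] holding, for each block, the number of the following block, with -1 as the NIL pointer, matching the example chain 4, 7, 16, 10, 27 in the text.

#include <stdio.h>

#define NBLOCKS 32
#define NIL (-1)

int main(void)
{
    /* Simulated per-block "next" pointers for the example file:
       block 4 -> 7 -> 16 -> 10 -> 27 -> NIL. */
    int next[NBLOCKS];
    int b;

    for (b = 0; b < NBLOCKS; b++)
        next[b] = NIL;
    next[4] = 7; next[7] = 16; next[16] = 10; next[10] = 27; next[27] = NIL;

    /* Reading the file: start at the first block (taken from the
       directory entry) and follow pointers until NIL. */
    for (b = 4; b != NIL; b = next[b])
        printf("read disk block %d\n", b);
    return 0;
}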

Linked with Memory Table / Indexed Allocation: The indexed allocation method is the solution to the problems of both contiguous and linked allocation. This is done by bringing all the pointers together into one location called the index block. Of course, the index block will occupy some space and thus could be considered an overhead of the method. In indexed allocation, each file has its own index block, which is an array of disk-block addresses.

Fig. 4.19(a) Indexed Allocation

Fig. 4.19(b) Indexed Allocation

Fig. 4.19(a) and Fig. 4.19(b) show indexed file allocation. The ith entry in the index block points to the ith block of the file. The directory contains the address of the index block of a file. To read the ith block of the file, the pointer in the ith index-block entry is read to find the desired block; a sketch follows. Indexed allocation supports direct access without suffering from external fragmentation: any free block anywhere on the disk may satisfy a request for more space.
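A minimal illustrative C sketch (ours; the index size is an assumption): the index block is an array of disk-block numbers, so resolving logical block i of the file is one array lookup, followed by one direct block read.

#define BLOCKS_PER_INDEX 128
#define NIL (-1)

/* The index block: entry i holds the disk address of the file's
   ith block, or NIL if that block has not been allocated. */
struct index_block {
    long addr[BLOCKS_PER_INDEX];
};

/* Resolve logical block i to its disk block with one lookup. */
long indexed_block_of(const struct index_block *ib, int i)
{
    if (i < 0 || i >= BLOCKS_PER_INDEX)
        return NIL;
    return ib->addr[i];
}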

I-Node Allocation: A further way of keeping track of which blocks belong to which file is to associate with each file a data structure called an i-node (index node), which lists the attributes and disk addresses of the file's blocks. A simple example is shown in Fig. 4.20. Given the i-node, it is then possible to find all the blocks of the file. The main advantage of this scheme over linked files using an in-memory table is that the i-node need be in memory only when the corresponding file is open. If each i-node occupies n bytes and a maximum of k files may be open at once, the total memory occupied by the array holding the i-nodes for the open files is only kn bytes; only this much space need be reserved in advance.

Fig. 4.20 I-Node File Allocation

This array occupies far less memory than the in-memory file table. The table holding the linked list of all disk blocks is proportional in size to the disk itself: if the disk has n blocks, the table needs n entries, and as disks grow larger, this table grows linearly with them. The i-node scheme, in contrast, requires an array in memory whose size is proportional to the maximum number of files that may be open at once; it does not matter whether the disk is 100 GB or 10,000 GB. One problem with i-nodes is that if each one has room for only a fixed number of disk addresses, what happens when a file grows beyond this limit? One solution is to reserve the last disk address not for a data block, but for the address of a block containing more disk-block addresses, as shown in Fig. 4.20. Even more advanced would be two or more such blocks containing disk addresses, or even disk blocks pointing to other disk blocks full of addresses; an illustrative declaration follows.
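A hedged C sketch of such an i-node (all sizes and field names are assumptions made for illustration): a handful of direct block addresses, with one slot reserved for a single indirect block of further addresses.

#include <stddef.h>
#include <stdint.h>

#define NDIRECT 10
#define NIL (-1)

/* Illustrative i-node: file attributes plus the disk addresses of
   the file's blocks, as described in the text. */
struct inode {
    uint32_t mode;             /* protection bits, file type */
    uint32_t size;             /* file size in bytes */
    uint32_t mtime;            /* time of last change */
    int64_t  direct[NDIRECT];  /* addresses of the first NDIRECT blocks */
    int64_t  indirect;         /* block holding further addresses, or NIL */
};

/* Resolve logical block i: use a direct slot if possible; otherwise
   the address lives in the single indirect block, whose contents
   (read from disk) are supplied by the caller. */
int64_t inode_block_of(const struct inode *ip, int i,
                       const int64_t *indirect_block)
{
    if (i < NDIRECT)
        return ip->direct[i];
    if (ip->indirect == NIL || indirect_block == NULL)
        return NIL;
    return indirect_block[i - NDIRECT];
}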

4.3.3 Implementation of Directory:
The operating system uses the path name supplied by the user to locate the directory entry on the disk. The directory entry provides the information needed to find the disk blocks, and the file can then be located on the disk using one of the allocation methods discussed above. In all cases, the main function of the directory system is to map the ASCII name of the file onto the information needed to locate the data.

There are several ways of implementing directory entries. In a simple design, the attributes of the file are stored directly in the directory entry, as shown in Fig. 4-21(a): the directory consists of a list of fixed-size entries, one per file, each containing a (fixed-length) file name, a structure holding the file attributes, and one or more disk addresses (up to some maximum) telling where the disk blocks are. For systems that use i-nodes, another possibility is to store the attributes in the i-nodes rather than in the directory entries. In that case, the directory entry can be shorter: just a file name and an i-node number. This approach is illustrated in Fig. 4-21(b); both layouts are sketched in C below.
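Purely illustrative C declarations (the field widths are assumptions) contrasting the two designs of Fig. 4-21:

#include <stdint.h>

#define NAME_LEN  14
#define MAX_ADDRS 10

/* Fig. 4-21(a) style: attributes and disk addresses kept
   directly in the directory entry. */
struct dir_entry_a {
    char     name[NAME_LEN];    /* fixed-length file name */
    uint32_t size;              /* example attribute */
    uint32_t mode;              /* example attribute: protection */
    int64_t  addr[MAX_ADDRS];   /* disk addresses of the blocks */
};

/* Fig. 4-21(b) style: the entry holds only the name and the
   i-node number; all attributes live in the i-node. */
struct dir_entry_b {
    char     name[NAME_LEN];
    uint32_t inode;             /* i-node reference number */
};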

Fixed-Length File Names: Historically, files have had short, fixed-length names. In MS-DOS, files have a 1-8 character base name and an optional extension of 1-3 characters. In UNIX Version 7, file names were 1-14 characters, including any extensions. However, all modern operating systems support longer, variable-length file names.

Variable-Length File Names: The simplest approach is to set a limit on file-name length, typically 255 characters, and then use one of the designs described above with 255 characters reserved for each file name. This approach is simple, but it wastes a great deal of directory space, since few files have such long names. An alternative is to give up the idea that all directory entries are the same size: each directory entry contains a fixed portion, typically starting with the length of the entry, followed by data in a fixed format, usually including the owner, creation time, protection information, and other attributes. A disadvantage of this method is that when a file is removed, a variable-sized gap is introduced into the directory, and the next file to be entered may not fit into it. Yet another way to handle variable-length names is to make the directory entries themselves all fixed length and keep the file names together in a heap at the end of the directory, as shown in Fig. 4-22(b). This method has the advantage that when an entry is removed, the next file entered will always fit there. Of course, the heap must be managed, and page faults can still occur while processing file names.

Searching Directories: The different ways of searching in directories are:
Linear search
Hashing
Caching

Linear search: Generally, directories are searched linearly from beginning to end when a file name has to be looked up. For extremely long directories, linear searching can be slow.

Hashing: One way to speed up the search is to use a hash table in each directory. Let the size of the table be n. To enter a file name, the name is hashed onto a value between 0 and n-1, and the table entry corresponding to the hash code is inspected. If it is unused, a pointer is placed there to the file entry (the file entries follow the hash table). If that slot is already in use, a linked list is constructed, headed at the table entry and threading through all entries with the same hash value. While searching, the file name is hashed to select a hash-table entry, and all the entries on the chain headed at that slot are checked to see if the file name is present. If the name is not on the chain, the file is not present in the directory. The disadvantage is more complex administration, but searching is much faster; a small sketch of this scheme appears below.

Caching: Yet another way to speed up searching of large directories is to cache the results of searches. Before starting a search, a check is first made to see if the file name is in the cache. If so, it can be located immediately.
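An illustrative in-memory C sketch (ours; the table size, entry fields, and sample names are assumptions) of the hashed directory just described: names hash to one of n slots, and collisions are chained through a linked list headed at the slot.

#include <stdio.h>
#include <string.h>
#include <stdlib.h>

#define TABLE_SIZE 64   /* n: assumed hash-table size */

struct entry {
    char name[32];        /* file name */
    long inode;           /* where the file's data can be found */
    struct entry *next;   /* chain of entries with the same hash */
};

static struct entry *table[TABLE_SIZE];

/* Hash a name onto a value between 0 and TABLE_SIZE - 1. */
static unsigned hash(const char *s)
{
    unsigned h = 0;
    while (*s)
        h = h * 31 + (unsigned char)*s++;
    return h % TABLE_SIZE;
}

static void insert(const char *name, long inode)
{
    struct entry *e = malloc(sizeof *e);
    strncpy(e->name, name, sizeof e->name - 1);
    e->name[sizeof e->name - 1] = '\0';
    e->inode = inode;
    e->next = table[hash(name)];   /* head the chain at the slot */
    table[hash(name)] = e;
}

/* Hash the name, then walk the chain at that slot. */
static struct entry *lookup(const char *name)
{
    struct entry *e;
    for (e = table[hash(name)]; e != NULL; e = e->next)
        if (strcmp(e->name, name) == 0)
            return e;
    return NULL;   /* not present in the directory */
}

int main(void)
{
    insert("mailbox", 17);
    insert("dictionary", 42);
    printf("dictionary -> i-node %ld\n", lookup("dictionary")->inode);
    return 0;
}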

4.3.4 Virtual File System:
Modern operating systems concurrently support multiple types of file systems. An obvious way to implement multiple types would be to write separate directory and file routines for each type. Instead, however, most operating systems, including UNIX, use object-oriented techniques to simplify, organize, and modularize the implementation. The use of these methods allows very dissimilar file-system types to be implemented within the same structure, including network file systems such as NFS; users can access files contained within multiple file systems on the local disk, or even on file systems available across the network. Data structures and procedures are used to isolate the basic system-call functionality from the implementation details. Thus, the file-system implementation consists of three major layers, as depicted schematically in Fig. 4.23(a).
(1) The first layer is the file-system interface, based on the open(), read(), write(), and close() calls and on file descriptors.
(2) The second layer is called the Virtual File System (VFS) layer. The VFS layer serves two important functions:
It separates file-system-generic operations from their implementation by defining a clean VFS interface. Several implementations of the VFS interface may coexist on the same machine, allowing transparent access to different types of file systems mounted locally.
It provides a mechanism for uniquely representing a file throughout a network. The VFS is based on a file-representation structure, called a vnode, that contains a numerical designator for a network-wide unique file. (UNIX i-nodes are unique within only a single file system.) This network-wide uniqueness is required for the support of network file systems. The kernel maintains one vnode structure for each active node (file or directory).

Fig. 4.23(a) A Virtual File System

Thus, the VFS distinguishes local files from remote ones, and local files are further distinguished according to their file-system types. The VFS activates file-system-specific operations to handle local requests according to their file-system types, and calls the NFS protocol procedures for remote requests. File handles are constructed from the relevant vnodes and are passed as arguments to these procedures. The layer implementing the file-system type or the remote-file-system protocol is the third layer of the architecture.

Fig. 4.23(b) Detailed Look at a Virtual File System

Let's briefly examine the VFS architecture in Linux. The four main object types defined by the Linux VFS are:
The inode object, which represents an individual file
The file object, which represents an open file
The superblock object, which represents an entire file system
The dentry object, which represents an individual directory entry


Some popular Operating Systems include Linux, Unix, Windows, MS-DOS, Android, etc.

Some popular Operating Systems include Linux, Unix, Windows, MS-DOS, Android, etc. 1.1 Operating System Definition An Operating System (OS) is an interface between a computer user and computer hardware. An operating system is a software which performs all the basic tasks like file management,

More information

Frequently asked questions from the previous class survey

Frequently asked questions from the previous class survey CS 370: OPERATING SYSTEMS [CPU SCHEDULING] Shrideep Pallickara Computer Science Colorado State University L14.1 Frequently asked questions from the previous class survey Turnstiles: Queue for threads blocked

More information

Processes. CS 475, Spring 2018 Concurrent & Distributed Systems

Processes. CS 475, Spring 2018 Concurrent & Distributed Systems Processes CS 475, Spring 2018 Concurrent & Distributed Systems Review: Abstractions 2 Review: Concurrency & Parallelism 4 different things: T1 T2 T3 T4 Concurrency: (1 processor) Time T1 T2 T3 T4 T1 T1

More information

Process. Program Vs. process. During execution, the process may be in one of the following states

Process. Program Vs. process. During execution, the process may be in one of the following states What is a process? What is process scheduling? What are the common operations on processes? How to conduct process-level communication? How to conduct client-server communication? Process is a program

More information

CS370 Operating Systems

CS370 Operating Systems CS370 Operating Systems Colorado State University Yashwant K Malaiya Fall 2017 Lecture 10 Slides based on Text by Silberschatz, Galvin, Gagne Various sources 1 1 Chapter 6: CPU Scheduling Basic Concepts

More information

Diagram of Process State Process Control Block (PCB)

Diagram of Process State Process Control Block (PCB) The Big Picture So Far Chapter 4: Processes HW Abstraction Processor Memory IO devices File system Distributed systems Example OS Services Process management, protection, synchronization Memory Protection,

More information

Chapter 3: Processes

Chapter 3: Processes Chapter 3: Processes Silberschatz, Galvin and Gagne 2013 Chapter 3: Processes Process Concept Process Scheduling Operations on Processes Interprocess Communication 3.2 Silberschatz, Galvin and Gagne 2013

More information

UNIT 1 JAGANNATH UNIVERSITY UNIT 2. Define Operating system and its functions. Explain different types of Operating System

UNIT 1 JAGANNATH UNIVERSITY UNIT 2. Define Operating system and its functions. Explain different types of Operating System JAGANNATH UNIVERSITY BCAII OPERATING SYSTEM MODEL TEST PAPER (SOLVED) UNIT 1 Q1 Q2 Q3 Q4 Q5 Define Operating system and its functions Explain different types of Operating System Describe different types

More information

Announcements. Reading. Project #1 due in 1 week at 5:00 pm Scheduling Chapter 6 (6 th ed) or Chapter 5 (8 th ed) CMSC 412 S14 (lect 5)

Announcements. Reading. Project #1 due in 1 week at 5:00 pm Scheduling Chapter 6 (6 th ed) or Chapter 5 (8 th ed) CMSC 412 S14 (lect 5) Announcements Reading Project #1 due in 1 week at 5:00 pm Scheduling Chapter 6 (6 th ed) or Chapter 5 (8 th ed) 1 Relationship between Kernel mod and User Mode User Process Kernel System Calls User Process

More information

PROCESSES AND THREADS THREADING MODELS. CS124 Operating Systems Winter , Lecture 8

PROCESSES AND THREADS THREADING MODELS. CS124 Operating Systems Winter , Lecture 8 PROCESSES AND THREADS THREADING MODELS CS124 Operating Systems Winter 2016-2017, Lecture 8 2 Processes and Threads As previously described, processes have one sequential thread of execution Increasingly,

More information

CS370: System Architecture & Software [Fall 2014] Dept. Of Computer Science, Colorado State University

CS370: System Architecture & Software [Fall 2014] Dept. Of Computer Science, Colorado State University Frequently asked questions from the previous class survey CS 370: SYSTEM ARCHITECTURE & SOFTWARE [CPU SCHEDULING] Shrideep Pallickara Computer Science Colorado State University OpenMP compiler directives

More information

CS370 Operating Systems

CS370 Operating Systems CS370 Operating Systems Colorado State University Yashwant K Malaiya Spring 2018 Lecture 8 Threads and Scheduling Slides based on Text by Silberschatz, Galvin, Gagne Various sources 1 1 FAQ How many threads

More information

Operating Systems Overview. Chapter 2

Operating Systems Overview. Chapter 2 Operating Systems Overview Chapter 2 Operating System A program that controls the execution of application programs An interface between the user and hardware Masks the details of the hardware Layers and

More information

Operating Systems. Computer Science & Information Technology (CS) Rank under AIR 100

Operating Systems. Computer Science & Information Technology (CS) Rank under AIR 100 GATE- 2016-17 Postal Correspondence 1 Operating Systems Computer Science & Information Technology (CS) 20 Rank under AIR 100 Postal Correspondence Examination Oriented Theory, Practice Set Key concepts,

More information

Announcements/Reminders

Announcements/Reminders Announcements/Reminders Class news group: rcfnews.cs.umass.edu::cmpsci.edlab.cs377 CMPSCI 377: Operating Systems Lecture 5, Page 1 Last Class: Processes A process is the unit of execution. Processes are

More information

Part Two - Process Management. Chapter 3: Processes

Part Two - Process Management. Chapter 3: Processes Part Two - Process Management Chapter 3: Processes Chapter 3: Processes 3.1 Process Concept 3.2 Process Scheduling 3.3 Operations on Processes 3.4 Interprocess Communication 3.5 Examples of IPC Systems

More information

CS604 - Operating System Solved Subjective Midterm Papers For Midterm Exam Preparation

CS604 - Operating System Solved Subjective Midterm Papers For Midterm Exam Preparation CS604 - Operating System Solved Subjective Midterm Papers For Midterm Exam Preparation The given code is as following; boolean flag[2]; int turn; do { flag[i]=true; turn=j; while(flag[j] && turn==j); critical

More information

Process Description and Control. Chapter 3

Process Description and Control. Chapter 3 Process Description and Control Chapter 3 Contents Process states Process description Process control Unix process management Process From processor s point of view execute instruction dictated by program

More information

Processes, PCB, Context Switch

Processes, PCB, Context Switch THE HONG KONG POLYTECHNIC UNIVERSITY Department of Electronic and Information Engineering EIE 272 CAOS Operating Systems Part II Processes, PCB, Context Switch Instructor Dr. M. Sakalli enmsaka@eie.polyu.edu.hk

More information

Processes and Threads

Processes and Threads Processes and Threads Giuseppe Anastasi g.anastasi@iet.unipi.it Pervasive Computing & Networking Lab. () Dept. of Information Engineering, University of Pisa Based on original slides by Silberschatz, Galvin

More information

Chapter 3: Process Concept

Chapter 3: Process Concept Chapter 3: Process Concept DM510-14 Chapter 3: Process Concept Process Concept Process Scheduling Operations on Processes Interprocess Communication Examples of IPC Systems Communication in Client-Server

More information

Processes and Non-Preemptive Scheduling. Otto J. Anshus

Processes and Non-Preemptive Scheduling. Otto J. Anshus Processes and Non-Preemptive Scheduling Otto J. Anshus Threads Processes Processes Kernel An aside on concurrency Timing and sequence of events are key concurrency issues We will study classical OS concurrency

More information

Chapter 5: CPU Scheduling

Chapter 5: CPU Scheduling Chapter 5: CPU Scheduling Basic Concepts Scheduling Criteria Scheduling Algorithms Thread Scheduling Multiple-Processor Scheduling Operating Systems Examples Algorithm Evaluation Chapter 5: CPU Scheduling

More information

Department of Computer applications. [Part I: Medium Answer Type Questions]

Department of Computer applications. [Part I: Medium Answer Type Questions] Department of Computer applications BBDNITM, Lucknow MCA 311: OPERATING SYSTEM [Part I: Medium Answer Type Questions] UNIT 1 Q1. What do you mean by an Operating System? What are the main functions of

More information

Unit I. Chapter 2: Process and Threads

Unit I. Chapter 2: Process and Threads Unit I Chapter 2: Process and Threads Introduction: Processes are one of the oldest and most important abstractions that operating systems provide. They support the ability to have (pseudo) simultaneous

More information

Chapter 3: Processes. Chapter 3: Processes. Process in Memory. Process Concept. Process State. Diagram of Process State

Chapter 3: Processes. Chapter 3: Processes. Process in Memory. Process Concept. Process State. Diagram of Process State Chapter 3: Processes Chapter 3: Processes Process Concept Process Scheduling Operations on Processes Cooperating Processes Interprocess Communication Communication in Client-Server Systems 3.2 Silberschatz,

More information

Lecture 5 / Chapter 6 (CPU Scheduling) Basic Concepts. Scheduling Criteria Scheduling Algorithms

Lecture 5 / Chapter 6 (CPU Scheduling) Basic Concepts. Scheduling Criteria Scheduling Algorithms Operating System Lecture 5 / Chapter 6 (CPU Scheduling) Basic Concepts Scheduling Criteria Scheduling Algorithms OS Process Review Multicore Programming Multithreading Models Thread Libraries Implicit

More information

Process Concept Process in Memory Process State new running waiting ready terminated Diagram of Process State

Process Concept Process in Memory Process State new running waiting ready terminated Diagram of Process State Process Concept An operating system executes a variety of programs: Batch system jobs Time-shared systems user programs or tasks Textbook uses the terms job and process almost interchangeably Process a

More information

Chapter 3: Processes

Chapter 3: Processes Operating Systems Chapter 3: Processes Silberschatz, Galvin and Gagne 2009 Chapter 3: Processes Process Concept Process Scheduling Operations on Processes Interprocess Communication (IPC) Examples of IPC

More information

Process Concept. Chapter 4: Processes. Diagram of Process State. Process State. Process Control Block (PCB) Process Control Block (PCB)

Process Concept. Chapter 4: Processes. Diagram of Process State. Process State. Process Control Block (PCB) Process Control Block (PCB) Chapter 4: Processes Process Concept Process Concept Process Scheduling Operations on Processes Cooperating Processes Interprocess Communication Communication in Client-Server Systems An operating system

More information

Chapter 4: Processes

Chapter 4: Processes Chapter 4: Processes Process Concept Process Scheduling Operations on Processes Cooperating Processes Interprocess Communication Communication in Client-Server Systems 4.1 Process Concept An operating

More information

CS370 Operating Systems

CS370 Operating Systems CS370 Operating Systems Colorado State University Yashwant K Malaiya Spring 2019 Lecture 8 Scheduling Slides based on Text by Silberschatz, Galvin, Gagne Various sources 1 1 FAQ POSIX: Portable Operating

More information

B. V. Patel Institute of Business Management, Computer &Information Technology, UTU

B. V. Patel Institute of Business Management, Computer &Information Technology, UTU BCA-3 rd Semester 030010304-Fundamentals Of Operating Systems Unit: 1 Introduction Short Answer Questions : 1. State two ways of process communication. 2. State any two uses of operating system according

More information

Processes. CSE 2431: Introduction to Operating Systems Reading: Chap. 3, [OSC]

Processes. CSE 2431: Introduction to Operating Systems Reading: Chap. 3, [OSC] Processes CSE 2431: Introduction to Operating Systems Reading: Chap. 3, [OSC] 1 Outline What Is A Process? Process States & PCB Process Memory Layout Process Scheduling Context Switch Process Operations

More information

The Big Picture So Far. Chapter 4: Processes

The Big Picture So Far. Chapter 4: Processes The Big Picture So Far HW Abstraction Processor Memory IO devices File system Distributed systems Example OS Services Process management, protection, synchronization Memory Protection, management, VM Interrupt

More information

Chapter 6: CPU Scheduling. Operating System Concepts 9 th Edition

Chapter 6: CPU Scheduling. Operating System Concepts 9 th Edition Chapter 6: CPU Scheduling Silberschatz, Galvin and Gagne 2013 Chapter 6: CPU Scheduling Basic Concepts Scheduling Criteria Scheduling Algorithms Thread Scheduling Multiple-Processor Scheduling Real-Time

More information

(MCQZ-CS604 Operating Systems)

(MCQZ-CS604 Operating Systems) command to resume the execution of a suspended job in the foreground fg (Page 68) bg jobs kill commands in Linux is used to copy file is cp (Page 30) mv mkdir The process id returned to the child process

More information

Announcement. Exercise #2 will be out today. Due date is next Monday

Announcement. Exercise #2 will be out today. Due date is next Monday Announcement Exercise #2 will be out today Due date is next Monday Major OS Developments 2 Evolution of Operating Systems Generations include: Serial Processing Simple Batch Systems Multiprogrammed Batch

More information

Processes and Threads

Processes and Threads OPERATING SYSTEMS CS3502 Spring 2018 Processes and Threads (Chapter 2) Processes Two important types of dynamic entities in a computer system are processes and threads. Dynamic entities only exist at execution

More information

OPERATING SYSTEMS. After A.S.Tanenbaum, Modern Operating Systems, 3rd edition. Uses content with permission from Assoc. Prof. Florin Fortis, PhD

OPERATING SYSTEMS. After A.S.Tanenbaum, Modern Operating Systems, 3rd edition. Uses content with permission from Assoc. Prof. Florin Fortis, PhD OPERATING SYSTEMS #5 After A.S.Tanenbaum, Modern Operating Systems, 3rd edition Uses content with permission from Assoc. Prof. Florin Fortis, PhD General information GENERAL INFORMATION Cooperating processes

More information

Chapter 4: Processes. Process Concept

Chapter 4: Processes. Process Concept Chapter 4: Processes Process Concept Process Scheduling Operations on Processes Cooperating Processes Interprocess Communication Communication in Client-Server Systems 4.1 Process Concept An operating

More information

Review. Preview. Three Level Scheduler. Scheduler. Process behavior. Effective CPU Scheduler is essential. Process Scheduling

Review. Preview. Three Level Scheduler. Scheduler. Process behavior. Effective CPU Scheduler is essential. Process Scheduling Review Preview Mutual Exclusion Solutions with Busy Waiting Test and Set Lock Priority Inversion problem with busy waiting Mutual Exclusion with Sleep and Wakeup The Producer-Consumer Problem Race Condition

More information

* What are the different states for a task in an OS?

* What are the different states for a task in an OS? * Kernel, Services, Libraries, Application: define the 4 terms, and their roles. The kernel is a computer program that manages input/output requests from software, and translates them into data processing

More information

CS 571 Operating Systems. Midterm Review. Angelos Stavrou, George Mason University

CS 571 Operating Systems. Midterm Review. Angelos Stavrou, George Mason University CS 571 Operating Systems Midterm Review Angelos Stavrou, George Mason University Class Midterm: Grading 2 Grading Midterm: 25% Theory Part 60% (1h 30m) Programming Part 40% (1h) Theory Part (Closed Books):

More information

The Big Picture So Far. Chapter 4: Processes

The Big Picture So Far. Chapter 4: Processes The Big Picture So Far HW Abstraction Processor Memory IO devices File system Distributed systems Example OS Services Process management, protection, synchronization Memory Protection, management, VM Interrupt

More information