PESIT Bangalore South Campus
Hosur Road, 1 km before Electronic City, Bengaluru - 560100
Department of Electronics and Communication Engineering
Faculty: Richa Sharma    Subject: Operating System    Sub. Code: 10EC65    Semester: VI-B

SCHEME & SOLUTION - INTERNAL ASSESSMENT TEST 2

1. Explain in detail the OS view of processes. (10 marks)
2. a) Write a short note on signal handling. (5 marks)
   b) Explain various states of processes. (5 marks)
3. What do you mean by threads? Explain various levels of threads with a neat diagram.
4. Explain i) static and dynamic memory allocation, ii) levels of managing the memory hierarchy. (10 marks: 5 + 5)
5. Explain various memory allocation preliminaries. (10 marks)
6. Explain in detail contiguous and non-contiguous memory allocation. (10 marks)
7. a) What is a child process? Explain various benefits of child processes. (6 marks)
   b) Explain, with the help of a diagram, the transformation and execution of programs. (4 marks)
8. Explain in detail various techniques of memory allocation. (10 marks)
Solution

Ans-1. To the OS, a process is a unit of computational work. The kernel's primary task is to control the operation of processes so as to provide effective utilization of the computer system.

Process states and state transitions
A process state is an indicator that describes the nature of the current activity of a process. A state transition for a process is a change in its state caused by the occurrence of some event, such as the start or end of an I/O operation.
Causes of fundamental state transitions for a process
Example: Suspended Processes
The kernel needs additional states to describe processes suspended due to swapping.

Process Context and Process Control Block
The kernel allocates resources to a process and schedules it for use of the CPU. The kernel's view of a process comprises the process context and the process control block (PCB).
Context Save, Scheduling, and Dispatching
- Context save function: saves the CPU state in the PCB, saves information concerning the process context, and changes the process state from running to ready.
- Scheduling function: uses process state information from PCBs to select a ready process for execution and passes its id to the dispatching function.
- Dispatching function: sets up the context of the selected process, changes its state to running, and loads the saved CPU state from its PCB into the CPU.

Event Handling
Events that occur during the operation of an OS:
1. Process creation event
2. Process termination event
3. Timer event
4. Resource request event
5. Resource release event
6. I/O initiation request event
7. I/O completion event
8. Message send event
9. Message receive event
10. Signal send event
11. Signal receive event
12. Program interrupt
13. Hardware malfunction event

When an event occurs, the kernel must find the process whose state is affected by it. OSs use various schemes to speed this up, e.g., event control blocks (ECBs).

Ans-2 (a)
A signal is used to notify an exceptional situation to a process and to enable it to attend to the situation immediately. Situations and signal names/numbers are defined in the OS:
- CPU conditions, such as arithmetic overflow
- Conditions related to child processes
- Resource utilization
- Emergency communications from a user to a process

The kernel sends a signal to a process when such an exceptional situation occurs; signal delivery can be synchronous or asynchronous.
A signal is handled either by a process-defined signal handler, registered through a system call (e.g., register_handler), or by an OS-provided default handler.

Ans-2 (b) States of Processes
A process state is an indicator that describes the nature of the current activity of a process. A state transition for a process is a change in its state caused by the occurrence of some event, such as the start or end of an I/O operation.
State Transitions
Ans-3 Threads
A thread is an execution of a program that uses the resources of a process; it is an alternative model of program execution.
- A process creates a thread through a system call.
- A thread operates within the process's context.
- Use of threads effectively splits the process state into two parts: the resource state remains with the process, while the CPU state is associated with the thread.
- Switching between threads therefore incurs less overhead than switching between processes.
Advantages of threads over processes

Coding for use of threads:
- Use thread-safe libraries to ensure correctness of data sharing.
- Signal handling: which thread should handle a signal? The choice can be made by the kernel or by the application. A synchronous signal should be handled by the thread itself; an asynchronous signal can be handled by any thread of the process, ideally the highest-priority thread.

POSIX Threads
The ANSI/IEEE Portable Operating System Interface (POSIX) standard defines the pthreads API for use by C language programs. It provides 60 routines that perform the following:
- Thread management
- Assistance for data sharing: mutual exclusion (mutexes)
- Assistance for synchronization: condition variables
A pthread is created through the call pthread_create(<data structure>, <attributes>, <start routine>, <arguments>). Parent-child synchronization is performed through pthread_join, and a thread terminates through the pthread_exit call.

Kernel-Level, User-Level and Hybrid Threads
- Kernel-level threads: threads are managed by the kernel.
- User-level threads: threads are managed by a thread library.
- Hybrid threads: a combination of kernel-level and user-level threads.

Kernel-level threads
A kernel-level thread is like a process, except that it has a smaller amount of state information. Switching between threads of the same process still incurs the overhead of event handling in the kernel.

User-level threads
Thread switching is fast because the kernel is not involved; however, blocking of one thread blocks all threads of the process.
Hybrid Thread Models

Ans-4 (i) Static and Dynamic Memory Allocation
Memory allocation is an aspect of a more general action in software operation known as binding.
- Static binding: a binding performed before the execution of a program is set in motion.
- Dynamic binding: a binding performed during the execution of a program.

Static allocation is performed by the compiler, linker, or loader, so the sizes of data structures must be known a priori. Dynamic allocation provides flexibility, but memory allocation actions then constitute an overhead during operation.

A program has to be transformed before it can be executed, and many of these transformations perform memory bindings. Accordingly, an address is called a compiled address, a linked address, etc.
(ii) Levels of memory hierarchy

Ans-5 Memory Allocation Preliminaries
The speed of the memory allocator and efficient use of memory are the important concerns. The topics involved are:
- Reuse of memory
- Maintaining a free list
- Performing fresh allocations by using a free list
- Memory fragmentation
- Merging of free memory areas
- Buddy system and power-of-2 allocators
- Comparing memory allocators
Reuse of memory: maintaining a free list
For each memory area in the free list, the kernel maintains:
- the size of the memory area
- the pointers used for forming the list
The kernel stores this information in the first few bytes of the free memory area itself.
Performing fresh allocations by using a free list
Three techniques can be used:
- First-fit technique: uses the first large-enough free area.
- Best-fit technique: uses the smallest large-enough free area.
- Next-fit technique: uses the next large-enough free area, resuming the search from where the previous search stopped.

Memory fragmentation
Fragmentation is the existence of unused areas in the memory of the computer system; it leads to poor memory utilization.

Forms of fragmentation

Merging of free memory areas
External fragmentation can be countered by merging free areas of memory. Two generic techniques are used: boundary tags and memory compaction.
Boundary tags
A tag is a status descriptor for a memory area. When an area of memory becomes free, the kernel checks the boundary tags of its neighboring areas; if a neighbor is free, it is merged with the newly freed area. A 50 percent rule holds when merging is performed in this manner.
Memory compaction
Memory compaction is achieved by packing all allocated areas toward one end of the memory. It is possible only if a relocation register is provided.

Buddy System and Power-of-2 Allocators
These allocators perform allocation of memory in blocks of a few standard sizes. This leads to internal fragmentation, but it enables the allocator to maintain separate free lists for blocks of different sizes, which avoids expensive searches in a free list and leads to fast allocation and deallocation. The buddy system allocator performs restricted merging of free blocks, whereas the power-of-2 allocator does not perform merging.

Buddy System
Power-of-2 Allocator
- Sizes of memory blocks are powers of 2, and separate free lists are maintained for blocks of different sizes.
- Each block contains a header element that holds the address of the free list to which the block should be added when it becomes free.
- An entire block is allocated to a request: no splitting of blocks takes place, and no effort is made to coalesce adjoining blocks. When released, a block is simply returned to its free list.

Comparing memory allocators
Allocators are compared on the basis of speed of allocation and efficiency of memory use. The buddy and power-of-2 allocators are faster than the first-fit, best-fit, and next-fit allocators, and the power-of-2 allocator is faster than the buddy allocator. In terms of memory usage efficiency, the first-fit, best-fit, and next-fit allocators do not incur internal fragmentation, while the buddy allocator achieved 95% efficiency in a simulation.

Ans-6 Contiguous and Non-Contiguous Memory Allocation
In contiguous memory allocation, each process is allocated a single contiguous area in memory. This approach faces the problem of memory fragmentation, countered by the techniques of memory compaction and reuse of free areas. Compaction requires a relocation register; lack of this register is also a problem for swapping.

In non-contiguous memory allocation, portions of a process address space are distributed among different memory areas. This reduces external fragmentation.
Logical addresses, physical addresses, and address translation
- Logical address: the address of an instruction or data byte as used in a process; it can be viewed as a pair (comp_i, byte_i).
- Physical address: the address in memory where the instruction or data byte actually exists.
Comparison of contiguous and non-contiguous memory allocation

Approaches to non-contiguous memory allocation
Two basic approaches exist:
- Paging: a process consists of fixed-size components called pages. Paging eliminates external fragmentation, but internal fragmentation arises instead.
- Segmentation: instead of fixed-size components, segments of different sizes are used. As the sizes differ, the kernel has to reuse memory using techniques such as first-fit or best-fit, so external fragmentation is a problem.
- Hybrid approach: segmentation with paging, which avoids external fragmentation.

Ans-7 (a) Child Process
The kernel initiates execution of a program by creating a process for it. This primary process may make system calls to create other processes.
Child processes and their parents form a process tree. Typically, a process creates one or more child processes and delegates some of its work to each, achieving multitasking within an application.

Example: A real-time data-logging application receives data samples from a satellite at the rate of 10,000 samples per second and stores them in a database on disk. The primary process of the application, which we will call the data-logger process, has to perform the following three functions:
1. Copy the sample from the special register into memory.
2. Write the sample into a database file on the disk.
3. As a housekeeping operation, create a backup of the samples in another file for analysis.

Benefits

(b) Transformation and Execution of Programs
A program has to be transformed before it can be executed, and many of these transformations perform memory bindings. Accordingly, an address is called a compiled address, a linked address, etc.
Ans-8 Memory Allocation Techniques

Stacks
A stack supports LIFO allocations and deallocations (push and pop). Memory is allocated when a function, procedure, or block is entered and deallocated when it is exited. A contiguous area of memory is reserved for the stack; a pointer called SB (stack base) points to the first entry of the stack, and another pointer called TOS (top of stack) points to the last entry allocated in the stack.

During execution of a program, the stack is used to support function calls. The group of stack entries that pertain to one function call is called a stack frame; it is pushed on the stack when the function is called. A stack frame contains the addresses or values of the function's parameters and the return address, i.e., the address of the instruction to which control should be returned after completing the function's execution. The local data of the function is also created within the stack frame. At the end of the function's execution, the entire stack frame is popped off and the return address contained in it is used to pass control back to the calling program.

The first entry in a stack frame is a pointer to the previous stack frame on the stack, which supports popping off the frame. A pointer called FB (frame base) points to the start of the topmost stack frame and helps in accessing the various entries in that frame.
Heap
A heap permits random allocation and deallocation. It is used for program-controlled dynamic data (PCD data), created through programming language features such as calloc/malloc. An allocation request by a process returns a pointer to the allocated memory area in the heap, and the process accesses the allocated memory area through this pointer. A deallocation request must present a pointer to the memory area to be deallocated.

Buddy System and Power-of-2 Allocators
These allocators perform allocation of memory in blocks of a few standard sizes. This leads to internal fragmentation, but it enables the allocator to maintain separate free lists for blocks of different sizes, which avoids expensive searches in a free list and leads to fast allocation and deallocation. The buddy system allocator performs restricted merging, whereas the power-of-2 allocator does not perform merging.

Power-of-2 Allocator
- Sizes of memory blocks are powers of 2, and separate free lists are maintained for blocks of different sizes.
- Each block contains a header element that holds the address of the free list to which the block should be added when it becomes free.
- An entire block is allocated to a request: no splitting of blocks takes place, and no effort is made to coalesce adjoining blocks. When released, a block is returned to its free list.

Comparing memory allocators
Allocators are compared on the basis of speed of allocation and efficiency of memory use.
The buddy and power-of-2 allocators are faster than the first-fit, best-fit, and next-fit allocators, and the power-of-2 allocator is faster than the buddy allocator. In terms of memory usage efficiency, the first-fit, best-fit, and next-fit allocators do not incur internal fragmentation, while the buddy allocator achieved 95% efficiency in a simulation.