PESIT Bangalore South Campus
Hosur Road, 1 km before Electronic City, Bengaluru-560100
Department of Electronics and Communication Engineering

SCHEME & SOLUTION
INTERNAL ASSESSMENT TEST 2

Faculty: Richa Sharma    Subject: Operating System
Semester: VI-B           Sub. Code: 10EC65

1. Explain in detail the OS view of processes. (10)
2. (a) Write a short note on signal handling. (5)
   (b) Explain the various states of processes. (5)
3. What do you mean by threads? Explain the various levels of threads with a neat diagram. (10)
4. Explain: (i) Static and dynamic memory allocation (5)
            (ii) Levels of managing the memory hierarchy (5)
5. Explain various memory allocation preliminaries. (10)
6. Explain in detail contiguous and non-contiguous memory allocation. (10)
7. (a) What is a child process? Explain various benefits of child processes. (6)
   (b) Explain, with the help of a diagram, the transformation and execution of programs. (4)
8. Explain in detail various techniques of memory allocation. (10)

Solution

Ans-1. To the OS, a process is a unit of computational work. The kernel's primary task is to control the operation of processes so as to provide effective utilization of the computer system.

Process states and state transitions
A process state is an indicator that describes the nature of the current activity of a process. A state transition for a process is a change in its state caused by the occurrence of some event, such as the start or end of an I/O operation.

Causes of fundamental state transitions for a process

Example: Suspended processes. A kernel needs additional states to describe processes that are suspended due to swapping.

Process context and process control block
The kernel allocates resources to a process and schedules it for use of the CPU. The kernel's view of a process comprises the process context and the process control block (PCB).
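As a rough sketch of what a PCB holds (the field names and layout below are illustrative assumptions, not any particular kernel's definition):

```c
#include <stdint.h>

/* Illustrative process states (names are assumptions for this sketch). */
enum proc_state { READY, RUNNING, BLOCKED, TERMINATED };

/* Saved CPU state: program counter, stack pointer, general registers. */
struct cpu_state {
    uint64_t pc;
    uint64_t sp;
    uint64_t regs[16];
};

/* A minimal process control block. A real kernel's PCB holds many
 * more fields (memory-management info, open files, accounting, ...). */
struct pcb {
    int              pid;        /* process id                       */
    enum proc_state  state;      /* current process state            */
    int              priority;   /* scheduling priority              */
    struct cpu_state saved_cpu;  /* CPU state saved at context save  */
    struct pcb      *parent;     /* creator of this process          */
};
```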

Context save, scheduling and dispatching
- Context save function: saves the CPU state in the PCB, saves information concerning the process context, and changes the process state from running to ready.
- Scheduling function: uses process state information from the PCBs to select a ready process for execution and passes its id to the dispatching function.
- Dispatching function: sets up the context of the selected process, changes its state to running, and loads the saved CPU state from its PCB into the CPU.

Event handling
Events that occur during the operation of an OS:
1. Process creation event
2. Process termination event
3. Timer event
4. Resource request event
5. Resource release event
6. I/O initiation request event
7. I/O completion event
8. Message send event
9. Message receive event
10. Signal send event
11. Signal receive event
12. A program interrupt
13. A hardware malfunction event
When an event occurs, the kernel must find the process whose state is affected by it. OSs use various schemes, e.g., event control blocks (ECBs), to speed this up.

Ans-2 (a). A signal is used to notify an exceptional situation to a process and enable it to attend to it immediately. The situations and the signal names/numbers are defined in the OS; they include:
- CPU conditions such as overflows
- Conditions related to child processes
- Resource utilization
- Emergency communications from a user to a process
The kernel sends a signal to a process when such an exceptional situation occurs; a signal can be synchronous or asynchronous. It is handled by a process-defined signal handler, registered through a system call (register_handler), or by an OS-provided default handler.
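A concrete sketch of registering a process-defined handler, using the POSIX sigaction call in place of the generic register_handler named above (the handler body is an illustrative assumption):

```c
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* Process-defined handler: runs when the signal is delivered. */
static void on_sigint(int signo) {
    (void)signo;
    /* Only async-signal-safe calls belong here; write() is one. */
    const char msg[] = "caught SIGINT\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_handler = on_sigint;     /* our handler replaces the default */
    sigemptyset(&sa.sa_mask);
    sigaction(SIGINT, &sa, NULL);  /* register the handler             */

    pause();                       /* wait for a signal to arrive      */
    return 0;
}
```

If no handler is registered, the OS-provided default action applies (for SIGINT, terminating the process).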

Ans-2 (b). States of processes
A process state is an indicator that describes the nature of the current activity of a process. A state transition for a process is a change in its state caused by the occurrence of some event, such as the start or end of an I/O operation.

State Transitions

Ans-3. Threads
A thread is an execution of a program that uses the resources of a process; it is an alternative model of program execution. A process creates a thread through a system call, and the thread operates within the process's context. The use of threads effectively splits the process state into two parts:
- The resource state remains with the process.
- The CPU state is associated with the thread.
Consequently, switching between threads incurs less overhead than switching between processes.

Advantages of threads over processes follow from this split: thread creation, switching, and communication are cheaper because the resource state stays with the process.

Coding for use of threads
- Use thread-safe libraries to ensure correctness of data sharing.
- Signal handling: which thread should handle a signal? The choice can be made by the kernel or by the application. A synchronous signal should be handled by the thread that caused it; an asynchronous signal can be handled by any thread of the process, ideally the highest-priority one.

POSIX threads
The ANSI/IEEE Portable Operating System Interface (POSIX) standard defines the pthreads API for use by C language programs. It provides 60 routines that perform the following:
- Thread management
- Assistance for data sharing (mutual exclusion)
- Assistance for synchronization (condition variables)

A pthread is created through the call pthread_create(<data structure>, <attributes>, <start routine>, <arguments>). Parent-child synchronization is performed through pthread_join, and a thread terminates through a pthread_exit call.
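A minimal runnable sketch of these three calls (the worker routine and its argument are illustrative assumptions):

```c
#include <pthread.h>
#include <stdio.h>

/* Start routine: executes as a separate thread in the process context. */
static void *worker(void *arg) {
    printf("worker received %d\n", *(int *)arg);
    pthread_exit(NULL);               /* terminate this thread */
}

int main(void) {
    pthread_t tid;
    int value = 42;

    /* pthread_create(<data structure>, <attributes>, <start routine>, <arguments>) */
    if (pthread_create(&tid, NULL, worker, &value) != 0) {
        perror("pthread_create");
        return 1;
    }
    pthread_join(tid, NULL);          /* parent-child synchronization */
    return 0;
}
```

The program is linked against the pthread library, e.g., cc prog.c -lpthread.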

Kernel-level, user-level and hybrid threads
- Kernel-level threads: threads are managed by the kernel. A kernel-level thread is like a process, except that it has a smaller amount of state information. Switching between threads of the same process still incurs the overhead of event handling.
- User-level threads: threads are managed by a thread library. Thread switching is fast because the kernel is not involved, but blocking of one thread blocks all threads of the process.
- Hybrid threads: a combination of kernel-level and user-level threads.

Hybrid thread models

Ans-4 (i). Static and dynamic memory allocation
Memory allocation is an aspect of a more general action in software operation known as binding.
- Static binding: a binding performed before the execution of a program is set in motion. Static allocation is performed by the compiler, linker, or loader, so the sizes of data structures must be known a priori.
- Dynamic binding: a binding performed during the execution of a program. Dynamic allocation provides flexibility, but memory allocation actions constitute an overhead during operation.
A program has to be transformed before it can be executed, and many of these transformations perform memory bindings; accordingly, an address is called a compiled address, a linked address, etc.
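A small C illustration of the two binding times (the sizes are arbitrary): the global array's size is bound statically at compile time, whereas the malloc'd block's size is bound dynamically at run time:

```c
#include <stdio.h>
#include <stdlib.h>

int table[100];              /* static allocation: size bound before execution */

int main(void) {
    int n;
    if (scanf("%d", &n) != 1) return 1;

    /* Dynamic allocation: the size becomes known only during execution. */
    int *buf = malloc(n * sizeof *buf);
    if (buf == NULL) return 1;

    buf[0] = table[0];       /* both areas are used the same way   */
    free(buf);               /* dynamic memory must be deallocated */
    return 0;
}
```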

(ii) Levels of the memory hierarchy
The memory hierarchy comprises the CPU registers and cache, the main memory, and the disk; management at each level aims to keep the most actively used data in the fastest level.

Ans-5. Memory allocation preliminaries
The speed of the memory allocator and efficient use of memory are the important concerns. The topics involved are:
- Reuse of memory
- Maintaining a free list
- Performing fresh allocations by using a free list
- Memory fragmentation
- Merging of free memory areas
- Buddy system and power-of-2 allocators
- Comparing memory allocators

Reuse of memory: maintaining a free list
For each memory area in the free list, the kernel maintains:
- the size of the memory area
- the pointers used for forming the list
The kernel stores this information in the first few bytes of the free memory area itself.

Performing fresh allocations by using a free list
Three techniques can be used:
- First-fit technique: uses the first free area that is large enough for the request.
- Best-fit technique: uses the smallest free area that is large enough.
- Next-fit technique: uses the next large-enough free area, resuming the search from where the previous one stopped.
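A minimal first-fit sketch over a singly linked free list (the node layout and names are assumptions for illustration; as noted above, a kernel keeps this header in the first bytes of the free area itself):

```c
#include <stddef.h>

/* Header kept in the first few bytes of each free memory area. */
struct free_area {
    size_t            size;   /* size of this free area         */
    struct free_area *next;   /* pointer forming the free list  */
};

/* First-fit: return the first area large enough for the request,
 * unlinking it from the free list; NULL if no area fits. */
struct free_area *first_fit(struct free_area **head, size_t request) {
    struct free_area **link = head;
    for (struct free_area *a = *head; a != NULL; a = a->next) {
        if (a->size >= request) {   /* first large-enough area wins */
            *link = a->next;        /* unlink it from the list      */
            return a;
        }
        link = &a->next;
    }
    return NULL;
}
```

Best-fit differs only in scanning the whole list for the smallest adequate area; next-fit additionally remembers where the previous search stopped.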

Memory fragmentation
Fragmentation is the existence of unused areas in the memory of the computer system; it leads to poor memory utilization. It occurs in two forms: external fragmentation (free areas too small or too scattered to satisfy requests) and internal fragmentation (unused space inside an allocated area).

Merging of free memory areas
External fragmentation can be countered by merging free areas of memory. Two generic techniques are used: boundary tags and memory compaction.
- Boundary tags: a tag is a status descriptor for a memory area. When an area of memory becomes free, the kernel checks the boundary tags of its neighboring areas; if a neighbor is free, it is merged with the newly freed area. A 50 percent rule holds when such merging is performed.

- Memory compaction: achieved by packing all allocated areas toward one end of the memory. It is possible only if a relocation register is provided.

Buddy system and power-of-2 allocators
These allocators perform allocation of memory in blocks of a few standard sizes. This leads to internal fragmentation, but it enables the allocator to maintain separate free lists for blocks of different sizes, which avoids expensive searches in a free list and leads to fast allocation and deallocation. The buddy system allocator performs restricted merging; the power-of-2 allocator does not perform merging.

Buddy system
Block sizes are powers of 2. A block is split into two equal "buddies" when a smaller request must be satisfied, and a freed block is merged with its buddy, but only if the buddy is also free (restricted merging).
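Because block sizes and offsets are powers of 2, a block's buddy can be located with a single XOR; a minimal sketch (offsets are relative to the start of the managed area, an assumption of this illustration):

```c
#include <stddef.h>
#include <stdio.h>

/* Round a request up to the next power of 2: the block size allocated. */
static size_t block_size(size_t request) {
    size_t s = 1;
    while (s < request) s <<= 1;
    return s;
}

/* The buddy of the block at 'offset' of size 'size' differs from it
 * only in the bit that selects one half of their parent block. */
static size_t buddy_of(size_t offset, size_t size) {
    return offset ^ size;
}

int main(void) {
    size_t size = block_size(24);   /* request for 24 bytes -> 32-byte block */
    printf("block size: %zu\n", size);
    printf("buddy of offset 96: %zu\n", buddy_of(96, size));   /* prints 64  */
    return 0;
}
```

The internal fragmentation mentioned above is visible here: a 24-byte request consumes a 32-byte block.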

Power-of-2 allocator
- Sizes of memory blocks are powers of 2, and separate free lists are maintained for blocks of different sizes.
- Each block contains a header element holding the address of the free list to which it should be added when it becomes free.
- An entire block is allocated to a request: no splitting of blocks takes place and no effort is made to coalesce adjoining blocks. When released, a block is simply returned to its free list.

Comparing memory allocators
Allocators are compared on the basis of speed of allocation and efficient use of memory. The buddy and power-of-2 allocators are faster than the first-fit, best-fit, and next-fit allocators, and the power-of-2 allocator is faster than the buddy allocator. As for memory usage efficiency, the first-fit, best-fit, and next-fit allocators do not incur internal fragmentation, while the buddy allocator achieved 95% efficiency in a simulation.

Ans-6. Contiguous memory allocation
In contiguous memory allocation, each process is allocated a single contiguous area in memory. This approach faces the problem of memory fragmentation, so the techniques of memory compaction and reuse are applied. Compaction requires a relocation register; the lack of this register is also a problem for swapping.

Non-contiguous memory allocation
Portions of a process address space are distributed among different memory areas. This reduces external fragmentation.

Logical address, physical address and address translation
- Logical address: the address of an instruction or data byte as used in a process; it can be viewed as a pair (comp_i, byte_i), where comp_i identifies a component of the address space and byte_i is the byte number within that component.
- Physical address: the address in memory where an instruction or data byte actually exists.
- Address translation: converting the logical addresses used by a process into the corresponding physical addresses during execution.

Comparison of contiguous and non-contiguous memory allocation

Approaches to non-contiguous memory allocation
Two basic approaches exist:
- Paging: a process consists of fixed-size components called pages. Paging eliminates external fragmentation, but internal fragmentation arises in this case.
- Segmentation: instead of fixed-size components, segments of different sizes are used. As the sizes differ, the kernel has to reuse memory using techniques such as first-fit or best-fit, so external fragmentation is a problem.
A hybrid approach, segmentation with paging, avoids external fragmentation.
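Under paging, a logical address splits into a page number and a byte offset, and a page table maps pages to memory frames; a minimal sketch with an assumed 4 KB page size and a toy page table:

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u   /* assumed page size for this illustration */

/* Toy page table: page_table[p] is the frame that holds page p. */
static const uint32_t page_table[] = { 7, 2, 9, 4 };

/* Translate a logical address into the corresponding physical address. */
static uint32_t translate(uint32_t logical) {
    uint32_t page   = logical / PAGE_SIZE;   /* which page           */
    uint32_t offset = logical % PAGE_SIZE;   /* byte within the page */
    return page_table[page] * PAGE_SIZE + offset;
}

int main(void) {
    /* Logical address 8195 = page 2, offset 3 -> frame 9, offset 3. */
    printf("physical = %u\n", translate(8195));   /* 9*4096 + 3 = 36867 */
    return 0;
}
```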

Ans-7 (a). Child process
The kernel initiates the execution of a program by creating a process for it. This primary process may make system calls to create other processes; child processes and their parents thus form a process tree. Typically, a process creates one or more child processes and delegates some of its work to each, which provides multitasking within an application.

Example: A real-time data logging application receives data samples from a satellite at the rate of 10,000 samples per second and stores them in a database on the disk. The primary process of the application, which we will call the data-logger process, has to perform the following three functions:
1. Copy the sample from the special register into memory.
2. Write the sample into a database file on the disk.
3. As a housekeeping operation, create a backup of the samples in another file for analysis.
The data-logger process can create a child process for each of these functions so that they proceed concurrently.

Benefits: the delegated functions execute concurrently with the parent, so the application as a whole can keep up with the arrival rate of the samples.
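A minimal sketch of this delegation pattern using the POSIX fork() call (the function name below is a hypothetical placeholder, not part of the application described above):

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical stand-in for delegated work, e.g. writing a sample
 * into the database file. */
static void write_sample_to_db(void) {
    printf("child %d: writing sample\n", (int)getpid());
}

int main(void) {
    pid_t pid = fork();             /* create a child process        */
    if (pid == 0) {                 /* child: perform delegated work */
        write_sample_to_db();
        _exit(0);
    }
    /* Parent continues with its own function concurrently. */
    printf("parent %d: copying next sample\n", (int)getpid());
    waitpid(pid, NULL, 0);          /* reap the child when it ends   */
    return 0;
}
```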

(b) Transformation and execution of programs
A program has to be transformed before it can be executed, and many of these transformations perform memory bindings. Accordingly, an address is called a compiled address, a linked address, etc., depending on the transformation that bound it.

Ans-8. Memory allocation techniques

Stacks
A stack supports LIFO allocations and deallocations (push and pop): memory is allocated when a function, procedure, or block is entered and is deallocated when it is exited. A contiguous area of memory is reserved for the stack. A pointer called SB (stack base) points to the first entry of the stack, and another pointer called TOS (top of stack) points to the last entry allocated in the stack.

During the execution of a program, the stack is used to support function calls. The group of stack entries that pertain to one function call is called a stack frame; a stack frame is pushed on the stack when a function is called. The stack frame contains the addresses or values of the function's parameters and the return address, i.e., the address of the instruction to which control should be returned after completing the function's execution; the local data of the function is also created within the stack frame. At the end of the function's execution, the entire stack frame is popped off, and the return address contained in it is used to pass control back to the calling program.

The first entry in a stack frame is a pointer to the previous stack frame on the stack; this link is what makes it possible to pop a frame off. A pointer called FB (frame base) points to the start of the topmost stack frame and helps in accessing the various entries in that frame.
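A toy sketch of these mechanics; the names SB, TOS, and FB follow the text, while the word-array representation and frame layout are simplifying assumptions:

```c
#include <stdio.h>

#define STACK_WORDS 64

static long stack[STACK_WORDS];  /* contiguous area reserved for the stack  */
static int  TOS = -1;            /* index of the last allocated entry       */
static int  FB  = -1;            /* index of the topmost frame's first entry */

/* Push a frame: its first entry stores the previous FB (the frame link);
 * 'words' further entries stand for parameters, return address, locals. */
static void push_frame(int words) {
    stack[++TOS] = FB;           /* link to the previous stack frame */
    FB = TOS;                    /* the new frame starts here        */
    TOS += words;                /* reserve the rest of the frame    */
}

/* Pop the topmost frame: all of its entries are deallocated at once
 * and the previous frame becomes the topmost one again. */
static void pop_frame(void) {
    int prev = (int)stack[FB];   /* saved frame link */
    TOS = FB - 1;
    FB  = prev;
}

int main(void) {
    push_frame(3);               /* caller's frame   */
    push_frame(2);               /* callee's frame   */
    printf("TOS=%d FB=%d\n", TOS, FB);   /* TOS=6 FB=4 */
    pop_frame();                 /* return to caller */
    printf("TOS=%d FB=%d\n", TOS, FB);   /* TOS=3 FB=0 */
    return 0;
}
```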

Heap
A heap permits allocations and deallocations in random order. It is used for program-controlled dynamic data (PCD data), created through programming language features such as calloc/malloc. An allocation request by a process returns a pointer to the allocated memory area in the heap, and the process accesses the allocated memory area through this pointer; a deallocation request must present a pointer to the memory area to be deallocated.
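A minimal illustration of heap allocation and deallocation through malloc/free:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    /* Allocation request: returns a pointer into the heap. */
    char *area = malloc(64);
    if (area == NULL) return 1;

    strcpy(area, "PCD data lives in the heap");
    printf("%s\n", area);

    /* Deallocation request: must present the same pointer. */
    free(area);
    return 0;
}
```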

The buddy system and power-of-2 allocators described in Ans-5 are also memory allocation techniques; their operation and the comparison of all these allocators are as given there.