Department of CSIT ( G G University, Bilaspur ) Model Answer 2013 (Even Semester) - AR-7307


Class: MCA    Semester: II    Year: 2013
Paper Title: Principles of Operating Systems    Max Marks: 60

Section A: (All 10 questions are compulsory) 10 X 2 = 20
(Key points of the answers are given here; wherever mentioned, figures are expected in the answers.)

Very Short Answer Questions: Write very short answers to the following questions.

1. Why is an operating system called the resource manager of a system?
A computer system contains various hardware and software components (also called resources), including the keyboard, mouse, printers, monitors, internal buses, CPU, memory (primary and secondary), drivers, networking devices and so on. For the computer to run efficiently and conveniently, all these components must cooperate with one another. The operating system is the system program that manages these resources; therefore it is also called the resource manager of the system.

2. Write any two facts about multiprogrammed systems.
(a) A multiprogrammed system keeps the different resources of the system highly utilized across the various jobs. When a job leaves one resource (say the CPU) to acquire another (say the disk), the former is given to another waiting job rather than being kept idle.
(b) The operating system also has to decide about the context switch, and about the priority of the job leaving one resource and queuing for another.

3. Write any four states of a process.
(a) New (b) Running (c) Waiting (d) Ready (e) Terminated. A figure or a brief description of each state may also be given.

4. What is a CPU scheduler?
The jobs entering a system are kept in a queue or pool; the long-term scheduler maintains this information. Which job is to be given the CPU is decided by the OS on the basis of a CPU scheduling algorithm, e.g. FCFS, SJF, priority, round robin, etc. After this decision is made, the CPU scheduler hands the CPU to the selected job.

5. What is a critical section in process synchronization?
To maximize the utilization of the resources of a system, or for other reasons, different processes work in a cooperative manner. For common variables, data, temporary files, etc. to be shared among these processes, there has to be proper synchronization among them. In particular, while a shared variable is being updated by one process, all other processes must refrain from interfering with that update, and hence with the variable, at that time. To ensure this, the section of each process's code that accesses the shared data carries the restriction that while it is being executed, the other processes must wait until it has completely finished. This section is called the critical section of the process, owing to the sensitivity of its code.
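The idea can be illustrated with a minimal C sketch using POSIX threads, in which two threads increment a shared counter; the increment is the critical section and a mutex plays the role of the entry/exit protocol (the counter, loop count and thread count are only illustrative):

#include <stdio.h>
#include <pthread.h>

static long counter = 0;                      /* shared variable */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);            /* entry section */
        counter++;                            /* critical section */
        pthread_mutex_unlock(&lock);          /* exit section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);       /* 200000 with the mutex */
    return 0;
}

Without the lock and unlock calls the two increments could interleave and the final value would usually be less than 200000; with them, each thread waits while the other is inside its critical section.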

6. Define the wait and signal operations of a semaphore.
If S is a semaphore, then

wait(S) {
    while (S <= 0)
        ;      // no operation (busy wait)
    S--;
}

and

signal(S) {
    S++;
}

7. What is meant by the physical address of memory?
When a program is compiled, object code is generated by the compiler, and the CPU generates logical addresses for that code. But for the program to run, the code must reside in main memory during execution. The location where the code is placed in main memory is its physical address. Thus the logical address must be mapped to the physical address using a relocation register, usually by adding a base address to the logical address. A figure (like the one expected in the answer to Q. 6 of Section B in this paper) showing the connection among the logical address, the relocation register and the physical address would add to the understanding and is expected.

8. What is the external fragmentation problem in memory?
In contiguous allocation of memory, the sequences of contiguous free blocks are maintained in the form of holes in memory. Sometimes a job requires n blocks but cannot find a single hole of size n, even though the total memory available in all the holes together is m > n. This problem is called the external fragmentation problem in memory management. A figure can be used to represent it.

9. Write any four types of files.
.doc, .exe, .c, .jpg, .pdf

10. Define the term seek time.
On a disk, data is ultimately read from or written to the sectors of the disk, and the disk surface is divided into cylinders. Whenever a request to read or write data from or to a particular sector is issued, the disk arm carrying the disk head must move to the cylinder containing that sector. The seek time is the time taken by the disk arm to move to the cylinder containing the desired sector.

Section B: (Attempt any 4 questions out of 7 questions) 4 X 10 = 40
Descriptive Questions:

2. What are batch systems? Discuss their advantages and disadvantages.

Batch processing is the execution of a series of jobs on a system without manual intervention: jobs are set up so that they can run to completion on their own. All input data are supplied in advance through punched cards, card readers or other such media. A program takes a set of data files as input, processes the data, and produces a set of output data files. This operating environment is termed "batch processing" because the input data are collected into groups, or batches, and are processed in batches by the program. Batch processing has been associated with early mainframe computers since the first days of electronic computing in the 1950s, mostly for accounting, business and similar computing. Even modern systems usually contain one or more batch applications for updating information at the end of the day, generating reports, printing documents, and other non-interactive tasks that must complete reliably within certain business deadlines. From the operating system's point of view, the jobs, once entered into the system, are batched and processed turn by turn without interfering with any job in any manner.

Advantages:
i. No interference while a process is running.
ii. No starvation, as every job gets its resources in a non-preemptive mode.
iii. The job entering earlier is processed earlier (FIFO).
iv. No waiting, since all resources are available to the running process.
v. Much less management of resources is required.

Disadvantages:
i. Under-utilization of resources, which sit idle whenever the running process does not currently need them.
ii. Errors are reported only at the end of the batch, causing delay.
iii. No user/programmer interaction; even for a small input or edit, one has to wait till the end of the batch.
iv. High-priority or urgent jobs have to wait behind low-priority or less urgent jobs.
v. Jobs requiring only a fraction of time on a resource wait behind jobs consuming a huge amount of time.

3. Using a table, compare real time systems with time sharing systems.

Property | Real-time systems | Time-sharing systems
Utilization of resources | Good (e.g. the CPU is never left idle). | Good (e.g. the CPU is never left idle); it is time-shared among the jobs.
Time limits | Strictly expected; a hard real-time system fails if a time limit is exceeded. | No such urgency; a program gets only a quantum of time, along with the other running programs.
Context switch | Usually not required, as the resource is not preempted. | Mostly preempted if the job is not completed within the fixed time slot.
Application areas | Where a process must complete within a limited time: missile launching, spacecraft launching, etc. | Common data entry, reservation systems, computation where time is not a constraint.
User interaction | Depends on the requirement. | Usually yes.
User impression | The user does not notice any delay. | Due to the small time quantum (milliseconds or less), the user does not notice any delay.

4. Using an example, explain the shortest job first CPU scheduling algorithm. Compare its turnaround time with that of a first come first serve algorithm for the same example.

The CPU is allocated to the various jobs running in a system by the CPU scheduler. The main algorithms for handing the CPU to a process include FCFS, shortest job first (SJF), round robin, multilevel feedback queues and priority algorithms. Each algorithm has its own merits and limitations, and depending on the requirement an algorithm can be chosen for a particular case. The SJF algorithm is a better alternative to the FCFS algorithm: the job with the shortest CPU burst time is processed first. There can be two cases: (i) all jobs arrive together, and (ii) jobs arrive at different times (where SJF is applied preemptively, i.e. shortest remaining time first).

Case (i): take the following example.

Process   Arrival   Service
P1        0         8
P2        0         4
P3        0         9
P4        0         5

The jobs run in the sequence P2, P4, P1, P3. The turnaround (TA) time for P2 is 4, for P4 it is 9, for P1 it is 17 and for P3 it is 26.
Total TA time = 4 + 9 + 17 + 26 = 56; average TA time = 56/4 = 14.

Case (ii): take the same burst times with different arrival times.

Process   Arrival   Service
P1        0         8
P2        1         4
P3        2         9
P4        3         5

Gantt chart (preemptive SJF): P1 [0-1], P2 [1-5], P4 [5-10], P1 [10-17], P3 [17-26].
TA time for P1 = 17, P2 = 4, P3 = 24, P4 = 7; total TA time = 52; average TA time = 52/4 = 13.

For FCFS (all jobs arriving at time 0):

Process   Arrival   Service
P1        0         8
P2        0         4
P3        0         9
P4        0         5

The job sequence is P1, P2, P3, P4, with TA times P1 = 8, P2 = 12, P3 = 21, P4 = 26; total TA time = 67 and average TA time = 67/4 = 16.75. (The answer may be elaborated with other examples and figures.)
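The case (i) arithmetic can be verified with a short C sketch that computes the average turnaround time for the non-preemptive SJF and FCFS orders when all four jobs arrive at time 0 (burst times taken from the example above):

#include <stdio.h>

/* Sum of turnaround times when jobs run back-to-back from time 0
   in the given order (all arrivals at time 0). */
static double avg_turnaround(const int burst[], int n)
{
    int finish = 0, total = 0;
    for (int i = 0; i < n; i++) {
        finish += burst[i];      /* completion time of this job */
        total  += finish;        /* turnaround = completion - arrival(0) */
    }
    return (double)total / n;
}

int main(void)
{
    int sjf[]  = {4, 5, 8, 9};   /* P2, P4, P1, P3: shortest burst first */
    int fcfs[] = {8, 4, 9, 5};   /* P1, P2, P3, P4: arrival order */

    printf("SJF  average turnaround = %.2f\n", avg_turnaround(sjf, 4));   /* 14.00 */
    printf("FCFS average turnaround = %.2f\n", avg_turnaround(fcfs, 4));  /* 16.75 */
    return 0;
}

It prints 14.00 for SJF and 16.75 for FCFS, matching the hand computation.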

5. What are the three classical problems of synchronization? Write and explain the code for the bounded buffer problem for a producer and a consumer.

Synchronization, or process synchronization, refers to synchronizing the various cooperating processes running in a system. These processes may compete for common variables, data, files and resources. The three classical problems of synchronization can be taken as follows: (A) the producer-consumer problem, (B) the reader-writer problem and (C) the dining philosophers problem.

(A) Producer-consumer problem: In this problem, two processes, one called the producer and the other called the consumer, run concurrently and share a common buffer. The producer produces items that it must pass to the consumer, who consumes them. The producer passes items to the consumer through the buffer. However, the producer must be certain that it does not deposit an item into the buffer when the buffer is full, and the consumer must not extract an item from an empty buffer. The two processes also must not access the buffer at the same time, for if the consumer tries to extract an item from the slot into which the producer is depositing an item, the consumer might get only part of the item. Any solution to this problem must ensure that none of these three events occurs.

(B) Reader-writer problem: In this problem, a number of concurrent processes require access to some object (such as a file). Some processes extract information from the object and are called readers; others change or insert information in the object and are called writers. The Bernstein conditions state that many readers may access the object concurrently, but if a writer is accessing the object, no other process (reader or writer) may access it. There are two possible policies for doing this:
First readers-writers problem: readers have priority over writers; that is, unless a writer already has permission to access the object, any reader requesting access to the object will get it. Note that this may result in a writer waiting indefinitely to access the object.
Second readers-writers problem: writers have priority over readers; that is, when a writer wishes to access the object, only readers that have already obtained permission to access the object are allowed to complete their access; any readers that request access after the writer has done so must wait until the writer is done. Note that this may result in readers waiting indefinitely to access the object.

(C) Dining philosophers problem: In this problem, five philosophers sit around a circular table, alternately eating noodles and thinking. In front of each philosopher is a plate, and to the left of each plate is a fork (so there are five forks, one to the right and one to the left of each philosopher's plate). When a philosopher wishes to eat, he picks up the forks to the right and to the left of his plate. When done, he puts both forks back on the table. The problem is to ensure that no philosopher starves because he can never pick up both forks.
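A minimal sketch of one deadlock-free solution to the dining philosophers problem, assuming POSIX threads and one semaphore per fork, with the last philosopher picking up the forks in the opposite order so that a circular wait cannot form (the philosopher count and the number of eating rounds are only illustrative):

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define N 5

static sem_t fork_sem[N];                 /* one binary semaphore per fork */

static void *philosopher(void *arg)
{
    int i = *(int *)arg;
    int first = i, second = (i + 1) % N;  /* left and right forks */

    if (i == N - 1) {                     /* break the symmetry for the last one */
        first = (i + 1) % N;
        second = i;
    }

    for (int round = 0; round < 3; round++) {
        /* think */
        sem_wait(&fork_sem[first]);       /* pick up first fork */
        sem_wait(&fork_sem[second]);      /* pick up second fork */
        printf("philosopher %d eats\n", i);
        sem_post(&fork_sem[second]);      /* put down the forks */
        sem_post(&fork_sem[first]);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    int id[N];

    for (int i = 0; i < N; i++)
        sem_init(&fork_sem[i], 0, 1);
    for (int i = 0; i < N; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, philosopher, &id[i]);
    }
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    return 0;
}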

Code for the producer-consumer (bounded buffer) problem: This problem can be solved using three semaphores, empty, full and mutex. The role of empty is to count the number of empty buffer slots (initialized to n), the role of full is to count the number of full slots (initialized to 0), and the role of mutex (initialized to 1) is to ensure that while one process (producer or consumer) is using the buffer, the other process does not enter it.

Code for the producer:

do {
    // produce an item in nextp
    wait(empty);
    wait(mutex);
    // add nextp to the buffer
    signal(mutex);
    signal(full);
} while (1);

Code for the consumer:

do {
    wait(full);
    wait(mutex);
    // remove an item from the buffer into nextc
    signal(mutex);
    signal(empty);
    // consume the item in nextc
} while (1);
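A compilable sketch of the same bounded-buffer logic, assuming POSIX threads and unnamed POSIX semaphores (the buffer size, item values and iteration count are only illustrative):

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define N      5                 /* number of buffer slots */
#define ITEMS 10                 /* items to produce and consume */

static int buffer[N];
static int in = 0, out = 0;      /* next slot to fill / to empty */

static sem_t empty_slots;        /* counts empty slots, starts at N */
static sem_t full_slots;         /* counts full slots, starts at 0  */
static sem_t mutex;              /* binary semaphore guarding the buffer */

static void *producer(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        int item = i;                    /* produce an item */
        sem_wait(&empty_slots);          /* wait(empty) */
        sem_wait(&mutex);                /* wait(mutex) */
        buffer[in] = item;               /* add the item to the buffer */
        in = (in + 1) % N;
        sem_post(&mutex);                /* signal(mutex) */
        sem_post(&full_slots);           /* signal(full) */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full_slots);           /* wait(full) */
        sem_wait(&mutex);                /* wait(mutex) */
        int item = buffer[out];          /* remove an item from the buffer */
        out = (out + 1) % N;
        sem_post(&mutex);                /* signal(mutex) */
        sem_post(&empty_slots);          /* signal(empty) */
        printf("consumed %d\n", item);   /* consume the item */
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}

The sem_wait/sem_post calls correspond directly to the wait/signal operations in the pseudocode above; compile with cc -pthread.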

6. What is a relocation register in memory? Show dynamic relocation using a relocation register. Explain dynamic loading.

A relocation register is a hardware element that holds a constant to be added to the address of each memory location of a program running in a multiprogramming system, as determined by the location of the area of memory assigned to that program. This gives the physical memory address for a variable or page. The logical address generated by the CPU is added to the contents of the relocation register to produce the physical address in memory (a figure showing the logical address, relocation register and physical address is expected here).

Dynamic loading: This means loading pages into memory only when they are actually required. It utilizes memory better than statically loading all pages, and it is particularly useful when large amounts of code/data need to be accommodated in memory. Also, no special support from the OS is required.
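The relocation-register mapping can be sketched in C as a base-plus-limit translation; the base and limit values here are only illustrative:

#include <stdio.h>
#include <stdlib.h>

/* Illustrative values: BASE = contents of the relocation register,
   LIMIT = size of the memory area assigned to the program. */
#define BASE  14000
#define LIMIT  3000

/* Translate a CPU-generated logical address to a physical address. */
unsigned translate(unsigned logical)
{
    if (logical >= LIMIT) {              /* protection check */
        fprintf(stderr, "trap: address %u out of range\n", logical);
        exit(EXIT_FAILURE);
    }
    return BASE + logical;               /* dynamic relocation */
}

int main(void)
{
    printf("logical 0   -> physical %u\n", translate(0));    /* 14000 */
    printf("logical 346 -> physical %u\n", translate(346));  /* 14346 */
    return 0;
}

Each logical address within the limit is simply offset by the relocation (base) value; an address at or beyond the limit would be trapped by the hardware in a real system.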

7. What is page replacement? Explain page replacement algorithms with examples.

When a process executes, it must reside in primary memory; this is usually achieved by the paging technique. The pages containing the required code/data should be available in main memory when the process executes. In virtual memory with demand paging, pages are not loaded from secondary into primary memory until they are demanded, i.e. until a page fault occurs. When free frames are available in primary memory, they can be given to the demanding process. But when no free frame is available, a frame must be taken from the set of frames already allotted to the running processes. In a global page replacement algorithm (PRA), the victim frame can be chosen from the set of all frames, whether allotted to the demanding process or to other processes; in a local PRA, only pages allotted to the demanding process are replaced. Which existing page should be replaced by the new one is decided by the popular page replacement algorithms. The three PRAs are (i) FIFO, (ii) OPT and (iii) LRU.

(i) FIFO (First In First Out): The page which arrived first (earliest) in memory is removed to make room for the new page. A worked example with a reference string (the sequence of pages requested) and a figure is expected here.

(ii) OPT (Optimal Page Replacement): Replace the page that is not going to be used for the longest period of time among the resident pages. This minimizes the frequency of page faults.

(iii) LRU (Least Recently Used): Replace the page that was least recently used (i.e. used longest ago).
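A minimal C sketch of a FIFO page-replacement simulation, using three frames and an illustrative reference string (both chosen here only for demonstration):

#include <stdio.h>

#define FRAMES 3

int main(void)
{
    /* Illustrative reference string (sequence of requested pages). */
    int ref[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3};
    int n = sizeof ref / sizeof ref[0];

    int frame[FRAMES];
    int next = 0;        /* index of the oldest frame (FIFO victim) */
    int faults = 0;

    for (int i = 0; i < FRAMES; i++)
        frame[i] = -1;   /* -1 marks an empty frame */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < FRAMES; j++)
            if (frame[j] == ref[i]) { hit = 1; break; }

        if (!hit) {                       /* page fault */
            frame[next] = ref[i];         /* replace the oldest page */
            next = (next + 1) % FRAMES;
            faults++;
        }
    }
    printf("FIFO page faults: %d\n", faults);
    return 0;
}

For this reference string the sketch reports 9 page faults; replacing the FIFO victim choice with the least recently used page, or with the page whose next use lies farthest in the future, gives the LRU and OPT algorithms described above.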

8. What do you mean by disk scheduling? Illustrate any two disk scheduling algorithms with figures and examples.

A disk is secondary storage for holding huge amounts of data. From time to time, several requests for data at different locations on the disk are issued, and the disk arm has to move to these locations to read/write the data and fulfil the requests. Seek time, the time to reach the specified cylinder, is the dominant component of the service time, and the movement of the arm across cylinders accounts for most of it. Therefore seek time is taken as the basis for deciding which request the disk head should satisfy first; this is the need for disk scheduling algorithms. The following algorithms can be used to satisfy requests from different processes.

(i) FCFS (First Come First Serve): The requested cylinders are reached in the order in which the requests arrived. Wherever the disk arm currently is, it first moves to the cylinder requested earliest. Suppose the work queue is 23, 89, 132, 42, 189 on a disk with 200 cylinders, and the current location of the disk head is cylinder 100.
Total head movement = |100-23| + |89-23| + |132-89| + |132-42| + |189-42| = 77 + 66 + 43 + 90 + 147 = 423 cylinders. (A figure showing the head movement is expected.)
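A small C sketch that computes the FCFS total head movement for the request queue and starting head position used above:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int queue[] = {23, 89, 132, 42, 189};   /* requests in arrival order */
    int n = sizeof queue / sizeof queue[0];
    int head = 100;                         /* current head position */
    int total = 0;

    for (int i = 0; i < n; i++) {
        total += abs(queue[i] - head);      /* seek distance for this request */
        head = queue[i];
    }
    printf("FCFS total head movement: %d cylinders\n", total);  /* 423 */
    return 0;
}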

(ii) SSTF (Shortest Seek Time First): The disk arm always moves next to the pending request with the shortest seek time, i.e. the smallest cylinder movement from the current position. For the same example, from the starting position 100 the nearest request is 89, so 89 is served first, followed by 132, 189, 42 and 23.
Total head movement = |100-89| + |132-89| + |189-132| + |189-42| + |42-23| = 11 + 43 + 57 + 147 + 19 = 277 cylinders, a substantial reduction compared with FCFS. However, SSTF may lead to starvation of processes whose requested cylinders lie far from the current head position. (A figure showing the head movement is expected.)
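A minimal C sketch of the SSTF selection loop for the same request queue, serving at each step the pending request closest to the current head position:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int queue[]  = {23, 89, 132, 42, 189};  /* pending requests */
    int served[] = {0, 0, 0, 0, 0};         /* 1 once a request is serviced */
    int n = sizeof queue / sizeof queue[0];
    int head = 100;                         /* current head position */
    int total = 0;

    for (int count = 0; count < n; count++) {
        int best = -1;
        for (int i = 0; i < n; i++) {       /* pick the closest pending request */
            if (served[i])
                continue;
            if (best < 0 || abs(queue[i] - head) < abs(queue[best] - head))
                best = i;
        }
        total += abs(queue[best] - head);
        printf("serve cylinder %d\n", queue[best]);
        head = queue[best];
        served[best] = 1;
    }
    printf("SSTF total head movement: %d cylinders\n", total);  /* 277 */
    return 0;
}

It prints the service order 89, 132, 189, 42, 23 and a total head movement of 277 cylinders, matching the hand computation.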