CSc33200: Operating Systems, CS-CCNY, Fall 2003
Jinzhong Niu
December 10, 2003

Review

1 Overview

1.1 The definition, objectives and evolution of operating systems

An operating system exploits and manages all kinds of computer hardware to provide a set of services, directly or indirectly, to the users. The definition gives two objectives of an operating system:

- managing all kinds of resources efficiently
- providing a friendly interface to the end users

To better understand the requirements for an operating system, it is useful to consider how operating systems have evolved over the years, from serial processing systems involving many manual operations to automatic multiprogramming batch processing systems and time-sharing systems.

1.2 Computer hardware

The hardware components in a computer system fall into several categories: microprocessor, main memory, I/O modules, and the system bus. To exchange data with main memory and the various I/O modules and to perform computation, the microprocessor contains several kinds of registers, including data registers, address registers, and control and status registers. Different address registers may use different addressing schemes, including segmented addressing and stack addressing based on push and pop operations. The program counter (PC) contains the address of the next instruction to be fetched from main memory, and the instruction register (IR) contains the instruction most recently fetched. A register called the program status word (PSW) records condition codes produced as part of computation results, as well as other status information.

The dynamic side of the microprocessor is how it executes instructions. The execution of each instruction involves two cycles: the fetch cycle and the execute cycle, which together make up an instruction cycle. The microprocessor repeats instruction cycles until it halts.
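As a purely illustrative sketch, the instruction cycle can be pictured as the loop below; the opcodes and the tiny memory layout are invented for this example and do not describe any real processor.

    /* Toy fetch-execute loop illustrating the instruction cycle.
     * The opcodes and 256-word memory are invented for this sketch. */
    #include <stdio.h>

    enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2 };  /* hypothetical opcodes */

    int main(void)
    {
        int memory[256] = { /* program: load word 40, add word 41, halt */
            (OP_LOAD << 8) | 40, (OP_ADD << 8) | 41, (OP_HALT << 8) };
        memory[40] = 3;
        memory[41] = 4;

        int pc = 0, ir = 0, acc = 0, halted = 0;

        while (!halted) {
            ir = memory[pc++];                 /* fetch cycle: PC -> IR, advance PC */
            int opcode  = ir >> 8;             /* execute cycle: decode and act     */
            int operand = ir & 0xFF;
            switch (opcode) {
            case OP_LOAD: acc  = memory[operand]; break;
            case OP_ADD:  acc += memory[operand]; break;
            case OP_HALT: halted = 1;             break;
            }
        }
        printf("accumulator = %d\n", acc);     /* prints 7 */
        return 0;
    }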

1.3 I/O communication techniques

There are three common I/O device access techniques: programmed I/O, interrupt-driven I/O, and DMA; the latter two were developed to overcome problems with the earlier approaches. An interrupt is a mechanism by which computer components may interrupt the normal processing of the processor and request that it perform a specific action. Whenever the execution of an instruction finishes, the processor checks whether any interrupt signal is pending.

1.4 Memory hierarchy

Every storage component has its own advantages and disadvantages. It is possible to combine them hierarchically so as to enjoy high speed, large capacity, and low price at the same time. Typically a cache is placed between the processor and main memory to provide a speedup based on the principle of locality.

2 Process management

2.1 The definition of a process

A process is an execution of a program. The term was coined to refer to an instance of a program in a multiprogramming environment. Each process contains data, code, and its own stack. The operating system also allocates a process control block (PCB) to record the various attributes of each process. All of the above together make up the process image.
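As a rough illustration, a process control block might look like the structure below; the fields and names are hypothetical simplifications, not those of any particular operating system.

    /* A minimal, hypothetical process control block (PCB) sketch.
     * Real operating systems keep many more fields (open files,
     * accounting data, signal state, and so on). */
    enum proc_state { PROC_NEW, PROC_READY, PROC_RUNNING, PROC_BLOCKED, PROC_EXIT };

    struct pcb {
        int              pid;           /* process identifier                */
        enum proc_state  state;         /* current state in the state model  */
        unsigned long    pc;            /* saved program counter             */
        unsigned long    registers[16]; /* saved general-purpose registers   */
        void            *page_table;    /* memory-management information     */
        int              priority;      /* scheduling information            */
        struct pcb      *next;          /* link for a ready/blocked queue    */
    };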

2.2 State transition

The state transition model is used to describe the dynamic behavior of a single process, while the process queue model depicts the structure of the process management subsystem in the operating system.

3 Threads

3.1 The introduction of threads

The characteristics of processes actually fall into two categories:

- Resource ownership: processes may be allocated control or ownership of various resources.
- Scheduling/execution: processes may execute and move from one state to another.

The two categories are in fact independent and thus may be treated separately. Multiple flows of control may coexist within a single process, each called a thread. Similar to a process, each thread is associated with an execution state and various data structures, including a thread control block containing the thread context, a stack, and space for local variables.

3.2 Benefits of multithreading

Multithreading has the following advantages over multiprogramming:

- It takes far less time to create and terminate a thread, and to switch control between threads.
- It is easier to accomplish communication among the execution traces.
- It helps to reflect the real world in a straightforward way.

Two examples, a web server and an RPC service, clearly show these benefits of multithreading.
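A brief sketch of the web-server pattern mentioned above, using POSIX threads; accept_request() and handle_request() are hypothetical placeholders standing in for real networking code.

    /* Sketch: a server loop that spawns one thread per incoming request,
     * in the spirit of the web-server example above. */
    #include <pthread.h>
    #include <stddef.h>

    extern void *accept_request(void);       /* blocks until a request arrives (placeholder) */
    extern void  handle_request(void *req);  /* services one request (placeholder)           */

    static void *worker(void *arg)
    {
        handle_request(arg);                 /* each request is served concurrently */
        return NULL;
    }

    int serve_forever(void)
    {
        for (;;) {
            void *req = accept_request();
            pthread_t tid;
            if (pthread_create(&tid, NULL, worker, req) != 0)
                return -1;                   /* thread creation failed                */
            pthread_detach(tid);             /* no join needed; thread cleans itself up */
        }
    }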

3.3 Implementation of threads

Threads may be supported in two different ways:

- At the user level: all thread management work is done by the user application, and the operating system is not aware of the existence of threads. Typically a thread library is available, providing routines for manipulating threads as well as for communication between threads.
- At the kernel level: the operating system kernel provides direct support for thread management.

Both approaches have their own advantages and disadvantages.

4 Concurrency

4.1 Mutual exclusion and synchronization

4.1.1 Introduction

Besides offering benefits, multiprogramming and multithreading also require more effort to deal with the interaction among processes, including mutual exclusion and synchronization, which are needed by processes that either compete for shared resources or cooperate with each other through shared resources or communication. Mutual exclusion has four requirements:

- Only one process at a time is allowed into its critical section, among the processes that have critical sections for the same resource or shared object.
- When no process is in a critical section, any process that requests entry to its critical section must be permitted to enter without delay.
- A process remains inside its critical section for a finite time only.
- No assumptions should be made about relative process speeds or the number of processors.

4.1.2 Software approaches

Based on the assumption that only one access to a memory location can be made at a time, Dekker's algorithm and Peterson's algorithm were developed so that a user program can implement mutual exclusion and synchronization by itself, without any support from the operating system or programming language. A sketch of Peterson's algorithm for two threads follows.
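The following is a minimal sketch of Peterson's algorithm for two threads; C11 atomics are used here so that the loads and stores are sequentially consistent on modern hardware, an assumption the textbook version makes implicitly. The busy-wait loop is inherent to the software approach.

    /* Peterson's algorithm for two threads (ids 0 and 1). */
    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_bool flag[2];   /* flag[i]: thread i wants to enter         */
    static atomic_int  turn;      /* whose turn it is to defer to the other   */

    void enter_critical(int i)    /* i is 0 or 1 */
    {
        int other = 1 - i;
        atomic_store(&flag[i], true);      /* announce interest            */
        atomic_store(&turn, other);        /* give priority to the other   */
        while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
            ;                              /* busy wait                    */
    }

    void leave_critical(int i)
    {
        atomic_store(&flag[i], false);     /* no longer interested         */
    }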

4.1.3 Hardware approaches

Since it is interrupts that lead to the interleaved execution of multiple processes, mutual exclusion can be enforced by temporarily disabling interrupts. Some special machine instructions, which combine two memory accesses into one atomic operation, may also meet the requirement, such as the test-and-set instruction and the exchange instruction.

4.1.4 Programming language or operating system support

Since both the software and the hardware approaches have various disadvantages, e.g. high complexity for the former and busy waiting for the latter, some more advanced facilities supporting mutual exclusion and synchronization have been developed:

Semaphores. A semaphore can be viewed as an integer variable whose initial value indicates the number of resources of concern that are available. Processes may invoke two atomic primitives on it: wait and signal. Each semaphore is associated with a queue, which is used to hold processes waiting on the semaphore. The implementation of semaphores still has to be based on either the software or the hardware approaches, but semaphores provide an easier interface to programmers.

Monitors. The monitor concept was proposed based on the development of structured programming and object orientation. A monitor is actually a software module consisting of:

- Local data: the shared resources, or the access points leading to those resources
- Procedures: the only interface to the local data, which is invisible to any external procedure
- Initialization: code to initialize the local data

At any time, at most one process is allowed to own the monitor and invoke one of its procedures; thus mutual exclusion is enforced. Condition variables and two primitives, cwait() and csignal(), may be used inside monitor procedures to provide synchronization support.
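As a rough approximation of a monitor, the sketch below guards a one-slot buffer with a POSIX mutex and two condition variables, which stand in for the monitor lock and for cwait()/csignal(); the buffer itself is an invented example.

    /* Monitor-style one-slot buffer, approximated with POSIX primitives. */
    #include <pthread.h>

    static pthread_mutex_t lock      = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;
    static int item, full = 0;

    void monitor_put(int x)                        /* "monitor procedure" */
    {
        pthread_mutex_lock(&lock);                 /* enter the monitor   */
        while (full)
            pthread_cond_wait(&not_full, &lock);   /* cwait(not_full)     */
        item = x;
        full = 1;
        pthread_cond_signal(&not_empty);           /* csignal(not_empty)  */
        pthread_mutex_unlock(&lock);               /* leave the monitor   */
    }

    int monitor_get(void)
    {
        pthread_mutex_lock(&lock);
        while (!full)
            pthread_cond_wait(&not_empty, &lock);  /* cwait(not_empty)    */
        full = 0;
        pthread_cond_signal(&not_full);            /* csignal(not_full)   */
        int x = item;
        pthread_mutex_unlock(&lock);
        return x;
    }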

4.1.5 Message passing

Message passing, common for processes that communicate with each other, may also be used to support mutual exclusion and synchronization. Message passing is actually similar to semaphore signaling, except that a mailbox is used for relaying messages.

4.2 Deadlock

In a multiprogramming environment, a deadlock may happen if and only if all of the following conditions hold:

- Mutual exclusion: only one process may use the shared resource at a time.
- Hold and wait: more than one resource is involved and they are obtained one by one; that is, processes may hold allocated resources while awaiting the assignment of others.
- No preemption: once a resource is held by a process, it cannot be forcibly removed from that process.
- Circular wait: a closed chain of processes exists such that each process holds at least one resource needed by the next process in the chain. In terms of graph theory, there is a directed circuit in which processes and resources alternate exactly one by one.

Deadlock problems may be handled in various ways:

- Prevention: the occurrence of one of the above four conditions is ruled out in advance.
- Avoidance: the progress of resource allocation in the operating system is monitored dynamically, and whenever a deadlock is about to happen, some measure is taken to avoid it. The banker's algorithm may be used to determine whether granting a request could lead to a deadlock (a sketch of its safety check follows this list).
- Detection and recovery: instead of avoiding a deadlock, the operating system may periodically run an algorithm to detect the existence of a circular wait. If a deadlock has indeed occurred, an effort is made to recover from it.
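A minimal sketch of the safety check at the heart of the banker's algorithm, assuming small fixed limits on the numbers of processes and resource types; the data layout is illustrative only.

    /* Safety check of the banker's algorithm: returns true if the current
     * allocation state is safe (some order lets every process finish). */
    #include <stdbool.h>
    #define NPROC 5
    #define NRES  3

    bool is_safe(int available[NRES],
                 int allocation[NPROC][NRES],
                 int need[NPROC][NRES])        /* need = maximum claim - allocation */
    {
        int  work[NRES];
        bool finished[NPROC] = { false };

        for (int r = 0; r < NRES; r++)
            work[r] = available[r];

        for (;;) {
            bool progressed = false;
            for (int p = 0; p < NPROC; p++) {
                if (finished[p])
                    continue;
                bool can_run = true;
                for (int r = 0; r < NRES; r++)
                    if (need[p][r] > work[r]) { can_run = false; break; }
                if (can_run) {                 /* p can finish and release its resources */
                    for (int r = 0; r < NRES; r++)
                        work[r] += allocation[p][r];
                    finished[p] = true;
                    progressed  = true;
                }
            }
            if (!progressed)
                break;                         /* no further process can finish */
        }
        for (int p = 0; p < NPROC; p++)
            if (!finished[p])
                return false;                  /* some process can never finish: unsafe */
        return true;
    }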

5 Memory management

Each process is associated with two address spaces: a logical address space and a physical address space. The former can be translated into the latter in different ways, at compile time, load time, or execution time. Besides performing this address mapping, the memory management subsystem also needs to provide protection and sharing support, and to facilitate the organization of applications. Memory may be allocated to processes in the form of partitions, pages, and/or segments:

- Partitioning may be fixed or dynamic, causing internal and external fragmentation respectively.
- Process address spaces and main memory may be divided into fixed-size blocks, called pages and frames, so that a process image may reside in non-contiguous sections of memory. A page table tracks the location of each section and supports address translation at run time.
- A process may be organized in the form of segments as well, in which case a segment table is used instead.

6 Virtual memory

Virtual memory is needed to support the execution of a program whose size exceeds the capacity of main memory, so that only part of a process, rather than the whole image, needs to physically reside in main memory for execution. To support virtual memory, additional information is needed in either the paging or the segmentation mechanism. The two methods may also be combined to gain the advantages of both. With virtual memory based on paging, various policies should be considered: the fetch policy, placement policy, replacement policy, and cleaning policy. Various replacement policies, including optimal, LRU, FIFO, and clock, have been discussed as ways to reduce the occurrence of page faults.

7 Uniprocessor scheduling

Three types of scheduling are involved in an operating system: long-term scheduling, medium-term scheduling, and short-term scheduling. The last is the most common use of the term scheduling. For short-term scheduling, various policies, FCFS, round robin, shortest process next, shortest remaining time, highest response ratio next, and feedback, may be used to meet criteria including turnaround time, response time, throughput, processor utilization, and fairness.
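To make the trade-offs concrete, the following sketch simulates round-robin scheduling for a few jobs that all arrive at time 0 and reports each job's turnaround time; the job lengths and quantum are made-up illustrative values.

    /* Round-robin scheduling sketch: jobs arrive at time 0, quantum = 2.
     * Prints the finish time and turnaround time for each job. */
    #include <stdio.h>

    int main(void)
    {
        int remaining[] = { 5, 3, 8 };           /* service time left per job */
        const int njobs = 3, quantum = 2;
        int time = 0, done = 0;

        while (done < njobs) {
            for (int j = 0; j < njobs; j++) {
                if (remaining[j] == 0)
                    continue;                    /* job already finished      */
                int slice = remaining[j] < quantum ? remaining[j] : quantum;
                time         += slice;           /* job j runs for one slice  */
                remaining[j] -= slice;
                if (remaining[j] == 0) {
                    done++;
                    /* all jobs arrived at time 0, so turnaround = finish time */
                    printf("job %d finished at %d (turnaround %d)\n", j, time, time);
                }
            }
        }
        return 0;
    }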

8 I/O management

The diversity of I/O devices makes I/O management the messiest part of an operating system; however, it is still possible to deal with the various I/O devices in a uniform way by organizing the I/O routines hierarchically. To improve the efficiency of the I/O subsystem in general, buffering may be used, so that the level of parallelism is increased and swapping is no longer interfered with. Disks, as the most frequently used peripheral devices, may be handled elegantly by applying various scheduling algorithms to disk I/O requests, including FIFO, shortest service time first (SSTF), SCAN, and C-SCAN.
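A small sketch of shortest-service-time-first disk scheduling: given the current head position and a set of pending track requests, it repeatedly services the nearest request; the request list and head position are invented example values.

    /* Shortest-service-time-first (SSTF) disk scheduling sketch. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <stdbool.h>

    int main(void)
    {
        int  requests[] = { 98, 183, 37, 122, 14, 124, 65, 67 };
        bool served[8]  = { false };
        const int n = 8;
        int head = 53, total_movement = 0;

        for (int step = 0; step < n; step++) {
            int best = -1, best_dist = 0;
            for (int i = 0; i < n; i++) {        /* find nearest unserved request */
                if (served[i])
                    continue;
                int dist = abs(requests[i] - head);
                if (best < 0 || dist < best_dist) {
                    best = i;
                    best_dist = dist;
                }
            }
            served[best]    = true;
            total_movement += best_dist;
            head            = requests[best];
            printf("service track %d (moved %d)\n", head, best_dist);
        }
        printf("total head movement: %d\n", total_movement);
        return 0;
    }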