Chapter 6. SUBJECT: Operating System. TOPIC: I/O Management. Created by: Sanjay Patel


Disk Scheduling Algorithms
1) First-In-First-Out (FIFO)
2) Shortest Service Time First (SSTF)
3) SCAN
4) Circular SCAN (C-SCAN)
5) LOOK

Disk Scheduling Algorithms
In the diagrams that follow, the vertical axis corresponds to the tracks on the disk, and the horizontal axis corresponds to time or, equivalently, to the number of tracks traversed.

FIFO
Process requests in the order they arrive. Fair (no starvation). Good for a few processes with clustered requests; deteriorates to essentially random scheduling if there are many competing processes. For the examples that follow, we assume a disk with 200 tracks and that the requested tracks, in the order received by the disk scheduler, are 55, 58, 39, 18, 90, 160, 150, 38, 184. We also assume that the disk head is initially located at track 100.

(Figure: FIFO scheduling of the request sequence)

FIFO (starting at track 100)

Next track accessed | Number of tracks traversed
55  | 45
58  | 3
39  | 19
18  | 21
90  | 72
160 | 70
150 | 10
38  | 112
184 | 146

Average seek length: 55.3
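The figures in this table can be checked with a short script. This is a minimal sketch (Python, not part of the original slides); the request list and starting track are those of the example above.

```python
def fifo_seek(start, requests):
    """Serve requests in arrival order; return the per-step distances and the average."""
    pos, steps = start, []
    for track in requests:
        steps.append(abs(track - pos))  # tracks traversed to reach this request
        pos = track
    return steps, sum(steps) / len(steps)

requests = [55, 58, 39, 18, 90, 160, 150, 38, 184]   # order received by the scheduler
steps, avg = fifo_seek(100, requests)                # head initially at track 100
print(steps)          # [45, 3, 19, 21, 72, 70, 10, 112, 146]
print(round(avg, 1))  # 55.3
```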

SSTF
The SSTF policy is to select the disk I/O request that requires the least movement of the disk arm from its current position; thus, we always choose to incur the minimum seek time. Of course, always choosing the minimum seek time does not guarantee that the average seek time over a number of arm movements will be minimal. However, this should provide better performance than FIFO. Because the arm can move in two directions, a random tie-breaking algorithm may be used to resolve cases of equal distances.

(Figure: SSTF scheduling of the request sequence)

SSTF (starting at track 100)

Next track accessed | Number of tracks traversed
90  | 10
58  | 32
55  | 3
39  | 16
38  | 1
18  | 20
150 | 132
160 | 10
184 | 24

Average seek length: 27.5
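The SSTF figures can be reproduced the same way. A minimal sketch (not from the slides); ties are broken here by list order, whereas the slide suggests a random tie-break. Note that the exact average, 248/9 ≈ 27.56, is reported as 27.5 in the table above.

```python
def sstf_seek(start, requests):
    """Always serve the pending request closest to the current head position."""
    pending, pos, steps = list(requests), start, []
    while pending:
        nearest = min(pending, key=lambda t: abs(t - pos))  # least arm movement next
        steps.append(abs(nearest - pos))
        pos = nearest
        pending.remove(nearest)
    return steps, sum(steps) / len(steps)

steps, avg = sstf_seek(100, [55, 58, 39, 18, 90, 160, 150, 38, 184])
print(steps)         # [10, 32, 3, 16, 1, 20, 132, 10, 24]
print(f"{avg:.2f}")  # 27.56
```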

SCAN (Elevator Algorithm)
With the exception of FIFO, all of the policies described so far can leave some request unfulfilled until the entire queue is emptied; that is, there may always be new requests arriving that will be chosen before an existing request. A simple alternative that prevents this sort of starvation is the SCAN algorithm, also known as the elevator algorithm because it operates much the way an elevator does.

(Figure: SCAN scheduling of the request sequence)

SCAN (starting at track 100, in the direction of increasing track number)

Next track accessed | Number of tracks traversed
150 | 50
160 | 10
184 | 24
90  | 94
58  | 32
55  | 3
39  | 16
38  | 1
18  | 20

Average seek length: 27.8
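A sketch matching the table above (not from the slides): the head sweeps toward increasing track numbers first, then reverses. As in the table, the arm turns around at the last pending request rather than travelling to the physical end of the disk; textbook formulations of SCAN differ on this detail.

```python
def scan_seek(start, requests, direction="up"):
    """Sweep in one direction, then reverse; the arm turns around at the
    last pending request (as in the table above)."""
    higher = sorted(t for t in requests if t >= start)                # served on the upward sweep
    lower = sorted((t for t in requests if t < start), reverse=True)  # served on the way back down
    order = higher + lower if direction == "up" else lower + higher
    pos, steps = start, []
    for track in order:
        steps.append(abs(track - pos))
        pos = track
    return order, steps, sum(steps) / len(steps)

order, steps, avg = scan_seek(100, [55, 58, 39, 18, 90, 160, 150, 38, 184], "up")
print(order)          # [150, 160, 184, 90, 58, 55, 39, 38, 18]
print(steps)          # [50, 10, 24, 94, 32, 3, 16, 1, 20]
print(round(avg, 1))  # 27.8
```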

Circular SCAN (C-SCAN)
The C-SCAN policy restricts scanning to one direction only. When the last track has been visited in one direction, the arm is returned to the opposite end of the disk and the scan begins again. This reduces the maximum delay experienced by new requests. C-SCAN is like the elevator algorithm, but it reads sectors in only one direction; on reaching the last track, it goes back to the first track non-stop. This gives better locality on sequential reads, makes better use of the read-ahead cache on the controller, and reduces the maximum delay to read a particular sector.

(Figure: C-SCAN scheduling of the request sequence)

C-SCAN (starting at track 100, in the direction of increasing track number)

Next track accessed | Number of tracks traversed
150 | 50
160 | 10
184 | 24
18  | 166
38  | 20
39  | 1
55  | 16
58  | 3
90  | 32

Average seek length: 35.8
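A matching sketch for C-SCAN (not from the slides): requests are served only while the head moves toward higher track numbers, and the arm then resumes from the lowest pending request. As in the table above, the return from 184 to 18 is counted as one direct arm movement of 166 tracks; formulations that run the arm to the end of the disk and back to track 0 count more.

```python
def cscan_seek(start, requests):
    """Serve requests in one direction only (toward higher tracks); then jump
    back and resume from the lowest pending request."""
    higher = sorted(t for t in requests if t >= start)   # current sweep
    lower = sorted(t for t in requests if t < start)     # served after the jump back
    order = higher + lower
    pos, steps = start, []
    for track in order:
        steps.append(abs(track - pos))   # the jump back is counted as one movement
        pos = track
    return order, steps, sum(steps) / len(steps)

order, steps, avg = cscan_seek(100, [55, 58, 39, 18, 90, 160, 150, 38, 184])
print(order)          # [150, 160, 184, 18, 38, 39, 55, 58, 90]
print(steps)          # [50, 10, 24, 166, 20, 1, 16, 3, 32]
print(round(avg, 1))  # 35.8
```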

LOOK Scheduling Algorithm
Both SCAN and C-SCAN, as described, move the disk arm across the full width of the disk. In practice, neither algorithm is implemented in this way. More commonly, the arm goes only as far as the final request in each direction; then it reverses direction immediately, without going all the way to the end of the disk. These versions of SCAN and C-SCAN are called LOOK and C-LOOK scheduling.
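To make the distinction concrete, here is a sketch (not from the slides) of a SCAN variant that does travel to the physical end of the disk before reversing; the scan_seek sketch above already stops at the final request and therefore behaves like LOOK. The disk size num_tracks is an assumed parameter.

```python
def scan_full_sweep(start, requests, num_tracks=200, direction="up"):
    """SCAN that travels all the way to the edge of the disk before reversing.
    LOOK differs only in skipping the travel beyond the last request."""
    higher = sorted(t for t in requests if t >= start)
    lower = sorted((t for t in requests if t < start), reverse=True)
    if direction == "up":
        order, edge = higher + lower, num_tracks - 1   # reverse at the highest track
    else:
        order, edge = lower + higher, 0                # reverse at track 0
    distance, pos = 0, start
    for track in order:
        going_up = direction == "up"
        if (going_up and track < pos) or (not going_up and track > pos):
            distance += abs(edge - pos)                # detour to the disk edge
            pos = edge
        distance += abs(track - pos)
        pos = track
    return distance

total = scan_full_sweep(100, [55, 58, 39, 18, 90, 160, 150, 38, 184], direction="up")
print(total)   # 280 tracks, versus 250 for the LOOK-like scan_seek sketch above
```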

Disk Scheduling Algorithms

Name   | Description                     | Remarks
SSTF   | Shortest service time first (#) | High utilization, small queues
SCAN   | Back and forth over disk (#)    | Better service distribution
C-SCAN | One way with fast return (#)    | Lower service variability
FIFO   | First in, first out (*)         | Fairest of them all

* : selection according to requestor
# : selection according to requested item

Example
The disk request queue contains a set of references for blocks on tracks 98, 183, 37, 122, 14, 124, 65, 67, with the head pointer at 53. Draw the head movement and compute the average seek length for FIFO, SSTF, SCAN, and C-SCAN.

Example
The disk request queue contains a set of references for blocks on tracks 76, 124, 17, 269, 201, 29, 137, 12, with the head pointer at 76. Draw the head movement and compute the average seek length for FIFO, SSTF, SCAN, and C-SCAN.

I/O SYSTEM OVERVIEW
A computer has two main jobs: I/O and processing. For example, to compute the sum of two numbers, a process must first read the two numbers and, after processing, display the result on the screen; both the reading and the displaying involve I/O.

Principles of Input/Output Hardware
I/O devices can be roughly divided into two categories:
Block devices: a block device stores information in fixed-size blocks; commonly the block size is 512 bytes to 32,768 bytes. The disk is the most common block device.
Character devices: a character device reads or writes a stream of characters. Network interfaces, mice, keyboards, etc. are character devices.

Device Controller
Input/output units typically consist of a mechanical component and an electronic component. The electronic component is called the device controller or adapter; the mechanical component is the device itself. The interface between the controller and the device is often a very low-level interface. The controller's job is to convert the serial bit stream into a block of bytes and to perform any error correction necessary.


Memory-Mapped I/O
An input/output device is managed by having software read and write information from and to the controller's registers. The computer designers must decide what instructions will be included in the machine repertoire to manipulate each controller's registers; traditionally, the machine instruction set includes special input/output instructions for this task. Each I/O controller has a few registers that are used for communicating with the CPU. By writing into these registers, the operating system can send commands or data to the device; similarly, by reading from these registers, the operating system can read status or data from the device.

How can the CPU communicate with these control registers? One scheme assigns each control register an I/O port number; the CPU uses the port number when reading from or writing to a control register. In this scheme, however, the addresses used by the CPU for memory and the addresses of the I/O control registers are separate, so the control registers cannot be reached with ordinary memory references. The memory-mapped I/O technique addresses this by mapping the control registers into the memory address space.

Memory-Mapped I/O
In this technique, each control register is assigned a unique memory address to which no physical memory is assigned. For this purpose, some upper region of the address space is reserved; on Pentium systems, for example, the region from approximately 640 KB to 1 MB is reserved for device addresses. This scheme is called memory-mapped input/output.
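As a rough, illustrative-only sketch of what "control registers addressed like memory" looks like from software: on Linux, a sufficiently privileged program can map a physical address region through /dev/mem and then read a register with an ordinary memory access. The base address, register offset, and register layout below are hypothetical, not taken from the slides, and many systems restrict access to /dev/mem.

```python
# Illustrative sketch only: map a (hypothetical) device register region from
# physical memory into this process and read a 32-bit register like a variable.
# Requires root and a platform where /dev/mem exposes the region; offsets are made up.
import mmap
import os
import struct

REG_BASE = 0x000A0000    # hypothetical physical base address (start of the reserved region)
REG_SIZE = 4096          # map one page
STATUS_REG_OFFSET = 0x0  # hypothetical offset of a status register within the region

fd = os.open("/dev/mem", os.O_RDWR | os.O_SYNC)
try:
    regs = mmap.mmap(fd, REG_SIZE, mmap.MAP_SHARED,
                     mmap.PROT_READ | mmap.PROT_WRITE, offset=REG_BASE)
    # Because the registers are memory-mapped, a read is just a memory access:
    (status,) = struct.unpack_from("<I", regs, STATUS_REG_OFFSET)
    print(f"status register = {status:#010x}")
    regs.close()
finally:
    os.close(fd)
```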

Advantages of memory-mapped I/O
No special protection mechanism is needed, because each control register is assigned a fixed memory location that can be protected like any other memory. Every instruction that can reference memory can also reference control registers. Device control registers are just variables in memory and can be addressed in the normal way.

Disadvantages of memory-mapped I/O
If there is only one address space, then all memory modules and all input/output devices must examine all memory references. Caching a device control register would be disastrous, so caching must be disabled for these addresses.

Direct Memory Access (DMA)
A special control unit may be provided to allow the transfer of a block of data directly between an external device and main memory, without continuous intervention by the processor. This approach is called Direct Memory Access (DMA). DMA can be used with either polled or interrupt-driven software. The figure shows a typical DMA block diagram. DMA is particularly useful for devices such as disks, where many bytes of information can be transferred in a single I/O operation.

(Figure: block diagram of DMA)

DMA
The DMA mechanism can be configured in a variety of ways:
1) Single bus, detached DMA
2) I/O bus
When used in conjunction with an interrupt, the CPU is notified only after the entire block of data has been transferred. For each byte or word transferred, the DMA module must provide the memory address and all the bus signals that control the data transfer.

Single bus, detached DMA
All the modules use the same system bus. This configuration is inefficient but inexpensive. It uses programmed I/O to exchange data between memory and an I/O module through the DMA module.

I/O bus
An I/O bus provides an easily expandable configuration and reduces the number of I/O interfaces in the DMA module. The exchange of data between the DMA and I/O modules takes place off the system bus.

DMA data transfer operation (Program = P, Device = D)
1) The program makes a DMA setup request.
2) The program deposits the address value A and the data count d.
3) The program also indicates the virtual memory address of the data on disk.
4) The DMA controller records the receipt of the relevant information and acknowledges that the DMA setup is complete.
5) The device communicates the data to the controller buffer.

DMA data transfer operation (continued)
6) The controller grabs the address bus and data bus to store the data, one word at a time.
7) The data count is decremented.
8) The above cycle is repeated until the desired data transfer is accomplished (a toy simulation of this loop is sketched below).
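A toy simulation of the transfer loop in steps 6 to 8 (illustrative only, not real driver code): the "controller" copies one word per cycle into a list standing in for main memory and decrements its count until it reaches zero.

```python
class ToyDMAController:
    """Toy model of the DMA transfer loop (steps 6-8 above): one word per bus
    cycle is copied into 'memory' and the data count is decremented to zero."""

    def setup(self, memory, address, count, device_buffer):
        self.memory = memory                 # stands in for main memory
        self.address = address               # starting memory address (step 2)
        self.count = count                   # data count (step 2)
        self.device_buffer = device_buffer   # data already placed here by the device (step 5)

    def run(self):
        word = 0
        while self.count > 0:                # step 8: repeat until the transfer is done
            self.memory[self.address] = self.device_buffer[word]  # step 6: store one word
            self.address += 1
            word += 1
            self.count -= 1                  # step 7: decrement the data count
        print("transfer complete; the CPU would now be interrupted")

memory = [0] * 16
dma = ToyDMAController()
dma.setup(memory, address=4, count=3, device_buffer=[111, 222, 333])
dma.run()
print(memory)   # [0, 0, 0, 0, 111, 222, 333, 0, ...]
```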

Terms
Interrupt: a suspension of a process, such as the execution of a computer program, caused by an event external to that process and performed in such a way that the process can be resumed.
Device driver: an operating system module (usually in the kernel) that deals directly with a device or I/O module.
Interrupt handler: a routine, generally part of the OS. When an interrupt occurs, control is transferred to the corresponding interrupt handler, which takes some action in response to the condition that caused the interrupt.

Principles of I/O Software
A key concept in the design of I/O software is device independence: it should be possible to write programs that can access any I/O device without having to specify the device in advance.
Uniform naming: the name of a file or a device should simply be a string or an integer and should not depend on the device in any way.

Error handling: errors should be handled as close to the hardware as possible.
Synchronous (blocking) versus asynchronous (interrupt-driven) transfers: most physical I/O is asynchronous; the CPU starts the transfer and goes off to do something else until the interrupt arrives.
Buffering: often data coming off a device cannot be stored directly in its final destination, and buffering involves considerable copying. For example, when a packet comes in off the network, the operating system does not know where to put it until it has stored and examined the packet.

Interrupt-Driven I/O
Whenever a data transfer to or from the managed hardware might be delayed for any reason, the driver writer should implement buffering. Data buffers help to decouple data transmission and reception from the write and read system calls, and overall system performance benefits. A good buffering mechanism leads to interrupt-driven I/O, in which an input buffer is filled at interrupt time and is emptied by processes that read the device, and an output buffer is filled by processes that write to the device and is emptied at interrupt time.

Interrupt-Driven I/O
For interrupt-driven data transfer to happen successfully, the hardware should be able to generate interrupts with the following semantics:
For input, the device interrupts the processor when new data has arrived and is ready to be retrieved by the system processor. The actual actions to perform depend on whether the device uses I/O ports, memory mapping, or DMA.
For output, the device delivers an interrupt either when it is ready to accept new data or to acknowledge a successful data transfer. Memory-mapped and DMA-capable devices usually generate interrupts to tell the system they are done with the buffer.

Interrupt Handlers
The addresses of the interrupt handlers are stored as indirect addresses in memory when the machine is started. The interrupt handler is the part of the OS that is executed when a device completes its operation, so the application software need not continuously poll the device to detect when it has completed. When the interrupt handler begins execution, the CPU registers still hold values being used by the interrupted process. The interrupt handler must therefore immediately perform a context switch: it saves all the general and status registers of the interrupted process and installs its own values in every CPU register so that it can handle the completion of the input/output operation.

The following steps are performed in software after the hardware interrupt has completed:
1) Save any registers that have not already been saved by the interrupt hardware.
2) Set up a context for the interrupt service procedure.
3) Set up a stack for the interrupt service procedure.
4) Acknowledge the interrupt controller; if there is no centralized interrupt controller, re-enable interrupts.
5) Copy the registers from where they were saved to the process table.
6) Run the interrupt service procedure.
7) Choose which process to run next.
8) Set up the MMU context for the process to run next.
9) Load the new process's registers.
10) Start running the new process.

Device Drivers
Each input/output device attached to a computer needs some device-specific code to control it. This code, called the device driver, is generally written by the device's manufacturer and delivered along with the device. Each device driver normally handles one device type or, at most, one class of closely related devices. Device drivers are normally positioned below the rest of the operating system. Drivers are not allowed to make system calls, but they often need to interact with the rest of the kernel.


Buffering
Buffering is a technique by which the device manager can keep slower I/O devices busy during times when a process is not requiring I/O operations. Types of I/O buffering schemes:
1) Single buffering
2) Double buffering
3) Circular buffering
4) No buffering

Single Buffer
The operating system assigns a buffer in the system portion of main memory. For a block-oriented device, input transfers are made to the system buffer; after the transfer, the process moves the block into user space and requests another block. The user process can be processing one block of data while the next block is being read in, and the OS is able to swap the process out. The OS must keep track of the assignment of system buffers to user processes.

Double Buffer
There are two buffers in the system. One buffer is for the driver or controller to store data while waiting for it to be retrieved by a higher level of the hierarchy; the other buffer stores data from the lower-level module. Double buffering is also called buffer swapping. The improvement of double buffering comes at the cost of increased complexity, and double buffering may be inadequate if the process performs rapid bursts of I/O.

Circular Buffer
When more than two buffers are used, the collection of buffers is itself referred to as a circular buffer. In this scheme the producer cannot pass the consumer, because that would overwrite buffers before they had been consumed.
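A minimal sketch of the idea (not from the slides): a fixed set of buffer slots used round-robin, where put() refuses to overwrite a slot the consumer has not yet emptied, so the producer can never pass the consumer. In real code Python's queue.Queue gives the same guarantee; the version below shows the wrap-around index arithmetic explicitly.

```python
class CircularBuffer:
    """Fixed-size circular buffer: put() refuses to overwrite unconsumed data,
    so the producer can never pass the consumer."""

    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.head = 0    # next slot the consumer will read
        self.tail = 0    # next slot the producer will fill
        self.count = 0   # number of filled slots

    def put(self, item):
        if self.count == len(self.slots):
            return False                                   # full: producer must wait
        self.slots[self.tail] = item
        self.tail = (self.tail + 1) % len(self.slots)      # wrap around
        self.count += 1
        return True

    def get(self):
        if self.count == 0:
            return None                                    # empty: consumer must wait
        item = self.slots[self.head]
        self.head = (self.head + 1) % len(self.slots)
        self.count -= 1
        return item

buf = CircularBuffer(4)
for block in ["b0", "b1", "b2", "b3", "b4"]:
    print(block, "stored" if buf.put(block) else "rejected (buffer full)")
print(buf.get(), buf.get())   # b0 b1, freeing two slots for the producer
```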

RAID
A redundant array of independent disks (RAID) may be used to increase disk reliability. RAID is a storage technology that combines multiple disk drive components into a logical unit. Data is distributed across the drives in one of several ways, called RAID levels, depending on the level of redundancy and performance required. In a RAID system, a single large file is stored on several separate disk units by breaking the file up into a number of smaller pieces and storing these pieces on different disks. When a file is accessed for a read, all disks deliver their data in parallel. RAID may be implemented in hardware or in the operating system.

RAID level 0
RAID level 0 creates one large virtual disk from a number of smaller disks. Storage is grouped into logical units called strips, and the strips are mapped round-robin to consecutive array members, so the virtual storage is a sequence of strips interleaved among the disks in the array. The RAID level 0 architecture achieves parallelism, but it does not include redundancy to improve reliability.

RAID level 0
Benefit: creates one large virtual disk.
Limitation: files tend to get scattered over a number of disks, so after a disk failure only some of a file's data may be retrievable.
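The round-robin mapping of strips to array members can be written down directly. A minimal sketch, assuming N equal-sized disks and fixed-size strips (not from the slides):

```python
def locate_strip(logical_strip, num_disks):
    """RAID 0: strips are mapped round-robin across the array members.
    Returns (disk index, strip index on that disk)."""
    disk = logical_strip % num_disks
    strip_on_disk = logical_strip // num_disks
    return disk, strip_on_disk

# With 4 disks, consecutive logical strips land on consecutive disks,
# so a large sequential read can be serviced by all disks in parallel.
for strip in range(8):
    print(strip, locate_strip(strip, num_disks=4))
```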

RAID level 1 (mirrored)
Redundancy is achieved by simply duplicating all the data. Data striping is used, the same as in RAID level 0, but RAID level 1 stores duplicate copies of each strip, with each copy on a different disk.

RAID level 2 (error-correcting code)
Single copies of each strip are maintained. An error-correcting code such as a Hamming code is calculated over the corresponding bits on each data disk, and the bits of the code are stored in the corresponding bit positions on multiple parity disks. The strips are very small, so when a block is read, all disks are accessed in parallel.

RAID level 3 (bit parity)
In RAID level 3, a single parity bit is used instead of an error-correcting code. A parity bit is a bit added to ensure that the number of one bits in a set of bits is even or odd; parity bits are the simplest form of error-detecting code. RAID level 3 requires just one extra disk. Data striping is used, similar to the other RAID levels. If any disk in the array fails, its data can be determined from the data on the remaining disks.
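The "determined from the remaining disks" step is simple XOR parity, and the same calculation underlies RAID levels 4 and 5 below. A minimal sketch (not from the slides), with strips shown as short byte strings:

```python
def parity(strips):
    """Bit-by-bit XOR parity across the corresponding strips of each data disk."""
    out = bytearray(len(strips[0]))
    for strip in strips:
        for i, b in enumerate(strip):
            out[i] ^= b
    return bytes(out)

data = [b"\x11\x22", b"\x0f\xf0", b"\xaa\x55"]   # strips on three data disks
p = parity(data)                                  # stored on the parity disk

# If disk 1 fails, its strip is the XOR of the parity strip and the survivors:
recovered = parity([p, data[0], data[2]])
print(recovered == data[1])   # True
```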

RAID level 4 (block-level parity)
RAID level 4 is similar to RAID level 3, except that the strips are larger, so an operation to read a block involves only a single disk. A bit-by-bit parity strip is calculated across the corresponding strips on each data disk, and the parity bits are stored in the corresponding strip on the parity disk.

RAID level 5 (block-level distributed parity)
RAID level 5 eliminates the potential bottleneck found in RAID 4: instead of a dedicated parity disk, it distributes the parity strips across all disks.

Disk Formatting
Computers must be able to access needed information on command; however, even the smallest hard disk can store millions and millions of bits. How does the computer know where to look for the information it needs? The most basic form of disk organization, which solves this problem, is called formatting. Formatting prepares the hard disk so that files can be written to the platters and then quickly retrieved when needed. Hard disks must be formatted in two ways:
Physical disk formatting
Logical disk formatting

(Figure: a disk platter divided into tracks, sectors, and cylinders)

Physical Formatting
A hard disk must be physically formatted before it can be logically formatted. A hard disk's physical formatting (also called low-level formatting) is usually performed by the manufacturer; the disk must be formatted before data can be stored on it. Physical formatting (as in the picture) divides the hard disk's platters into their basic physical elements: tracks, sectors, and cylinders. These elements define the way in which data is physically recorded on and read from the disk. The disk must be divided into sectors that the disk controller can read and write.

Logical Formatting
After a hard disk has been physically formatted, it must also be logically formatted. Logical formatting places a file system on the disk, allowing an operating system (such as DOS, Windows, or Linux) to use the available disk space to store and retrieve files. Logical formatting is performed after the disk is partitioned.

Spooling
A spool is a buffer that holds output for a device, such as a printer, that cannot accept interleaved data streams. Although a printer can serve only one job at a time, several applications may wish to print their output concurrently, without having their output mixed together. The operating system solves this problem by intercepting all output to the printer: each application's output is spooled to a separate disk file, and when an application finishes printing, the spooling system queues the corresponding spool file to the printer, one file at a time. In some operating systems, spooling is managed by a system daemon process; in others, it is managed by an in-kernel thread.
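A toy sketch of the idea (illustrative only; a real spooler writes each job to a separate disk file and runs as a daemon or kernel thread): applications submit whole jobs to a queue, and a single worker feeds them to the "printer" one at a time, so output is never interleaved.

```python
import queue
import threading
import time

spool = queue.Queue()   # stands in for the directory of spool files

def printer_daemon():
    """Single consumer: sends one complete spooled job at a time to the printer."""
    while True:
        job_name, pages = spool.get()
        for page in pages:
            print(f"[printer] {job_name}: {page}")
            time.sleep(0.01)            # pretend the printer is slow
        spool.task_done()

def application(name):
    """Each application spools its whole output instead of writing to the printer."""
    spool.put((name, [f"page {i}" for i in range(3)]))

threading.Thread(target=printer_daemon, daemon=True).start()
for name in ("app-A", "app-B", "app-C"):
    application(name)
spool.join()   # wait until every spooled job has been printed
```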

Disk Scheduling
What is disk scheduling? Servicing the disk I/O requests.
Why disk scheduling? To use the hardware efficiently, which means fast access time (seek time + rotational latency) and large disk bandwidth.


Example
Consider a disk queue with requests for I/O to blocks on cylinders 23, 89, 132, 42, 187, with the disk head initially at cylinder 100.

FIFO (23, 89, 132, 42, 187): total distance traversed = 77 + 66 + 43 + 90 + 145 = 421

SSTF (23, 89, 132, 42, 187): total distance traversed = 11 + 43 + 55 + 145 + 19 = 273

SCAN (head moving toward decreasing cylinder numbers, i.e., toward 0; 23, 89, 132, 42, 187): total distance traversed = 11 + 47 + 19 + 23 + 132 + 55 = 287

C-SCAN (head moving toward decreasing cylinder numbers): total distance traversed = 11 + 47 + 19 + 23 + 199 + 12 + 55 = 366. Head movement could be reduced if the request for cylinder 187 were serviced directly after the request at 23, without going to cylinder 0 and the end of the disk.

LOOK (23, 89, 132, 42, 187): total distance traversed = 11 + 47 + 19 + 109 + 55 = 241. Compared to SCAN, LOOK saves going from 23 down to 0 and back, and it is the most efficient of these algorithms for this sequence of requests.
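The totals above can be checked with a short script (not from the slides). It assumes a disk with cylinders 0 through 199 and the head at 100 moving toward 0 first; SCAN runs to cylinder 0 before reversing, and C-SCAN additionally jumps across the full range, which is how the figures 287 and 366 arise. The SSTF order is the one implied by the distances listed above.

```python
REQUESTS = [23, 89, 132, 42, 187]
START, LAST = 100, 199           # initial head position and highest cylinder number

def total(path):
    """Total head movement along a path of cylinder positions."""
    return sum(abs(b - a) for a, b in zip(path, path[1:]))

lower = sorted((c for c in REQUESTS if c < START), reverse=True)   # [89, 42, 23]
higher = sorted(c for c in REQUESTS if c > START)                  # [132, 187]

fifo  = [START] + REQUESTS
sstf  = [START, 89, 132, 187, 42, 23]               # nearest-request order, as above
scan  = [START] + lower + [0] + higher              # sweep down to 0, then back up
cscan = [START] + lower + [0, LAST] + higher[::-1]  # sweep to 0, jump to 199, sweep down
look  = [START] + lower + higher                    # reverse at the last request (23)

for name, path in [("FIFO", fifo), ("SSTF", sstf), ("SCAN", scan),
                   ("C-SCAN", cscan), ("LOOK", look)]:
    print(name, total(path))    # 421, 273, 287, 366, 241
```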

Thank You! Created by: Sanjay Patel, Assistant Professor (I.T.), Shankersinh Vaghela Bapu Institute of Technology, Gandhinagar