I/O Management and Disk Scheduling. Chapter 11

I/O Management and Disk Scheduling Chapter 11

Categories of I/O Devices: Human readable devices are used to communicate with the user: video display terminals, keyboard, mouse, printer.

Categories of I/O Devices: Machine readable devices are used to communicate with electronic equipment: disk drives, tape drives, controllers, actuators.

Categories of I/O Devices: Communication devices are used to communicate with remote devices: digital line drivers, modems.

Differences in I/O Devices: Data transfer rate. Application: a disk used to store files needs file-management software; a disk used to store virtual memory pages depends on the virtual memory hardware, and its I/O may be scheduled differently than for disks used for file storage; a terminal used by the system administrator may have a higher priority.

Differences in I/O Devices: Complexity of control. Unit of transfer: data may be transferred as a stream of bytes for a terminal or in larger blocks for a disk. Data representation: encoding schemes (character encoding, parity) may differ. Error conditions: devices respond to errors differently.

Techniques for Performing I/O: Programmed I/O: the process busy-waits for the operation to complete. Interrupt-driven I/O: an I/O command is issued, the processor continues executing other instructions, and the I/O module sends an interrupt when the operation is done.
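To make the contrast concrete, here is a minimal sketch of programmed I/O in C, assuming a hypothetical device with memory-mapped STATUS_REG and DATA_REG registers and a READY status bit (the addresses and names are invented for illustration); with interrupt-driven I/O the busy-wait loop would instead be replaced by an interrupt handler that runs when the device signals completion.

```c
/* Programmed I/O sketch: the CPU polls a (hypothetical) status register until
 * the device is ready, then moves the data itself. */
#include <stdint.h>

#define STATUS_READY 0x01u

static volatile uint8_t * const STATUS_REG = (volatile uint8_t *)0x40001000u;  /* hypothetical */
static volatile uint8_t * const DATA_REG   = (volatile uint8_t *)0x40001004u;  /* hypothetical */

/* Read one byte with programmed I/O. */
uint8_t pio_read_byte(void)
{
    while ((*STATUS_REG & STATUS_READY) == 0)
        ;                       /* busy-wait: the CPU does no useful work here */
    return *DATA_REG;           /* the transfer itself goes through the processor */
}
```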

Techniques for Performing I/O: Direct Memory Access (DMA): a DMA module controls the exchange of data between main memory and the I/O device; the processor is interrupted only after the entire block has been transferred.

Evolution of the I/O Function: the processor directly controls a peripheral device. A controller or I/O module is added: the processor uses programmed I/O without interrupts and does not need to handle the details of the external devices.

Evolution of the I/O Function: a controller or I/O module with interrupts: the processor does not spend time waiting for an I/O operation to be performed. Direct Memory Access: blocks of data are moved into memory without involving the processor, which is involved only at the beginning and end of the transfer.

Evolution of the I/O Function: I/O channel: the I/O module is a separate processor that uses the computer's main memory. I/O processor: the I/O module is a processor with its own local memory; it is a computer in its own right.

Direct Memory Access: takes control of the system from the CPU to transfer data to and from memory over the system bus. Cycle stealing is used to transfer data on the system bus: the instruction cycle is suspended so data can be transferred, and the CPU pauses for one bus cycle. No interrupt occurs, so no context needs to be saved.

Typical DMA Block Diagram: [figure] a DMA module containing a data count register, a data register, an address register, and control logic, connected to the main system and to the device by data lines, address lines, and DMA request, DMA acknowledge, interrupt, read, and write signals.

Direct Memory Access: cycle stealing causes the CPU to execute more slowly. The number of required bus cycles can be cut by integrating the DMA and I/O functions and by using a path between the DMA module and the I/O module that does not include the system bus.
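As an illustration of the register-level view above, here is a minimal sketch of how a driver might program a DMA transfer; the controller, its register addresses, the control bits, and the dma_wait_for_interrupt() helper are all hypothetical, invented for this example.

```c
/* DMA setup sketch: the CPU only programs the controller and is interrupted
 * once when the whole block has been transferred. */
#include <stdint.h>

#define DMA_CTRL_READ   0x1u    /* device -> memory */
#define DMA_CTRL_START  0x2u

static volatile uint32_t * const DMA_ADDR  = (volatile uint32_t *)0x40002000u;  /* hypothetical */
static volatile uint32_t * const DMA_COUNT = (volatile uint32_t *)0x40002004u;  /* hypothetical */
static volatile uint32_t * const DMA_CTRL  = (volatile uint32_t *)0x40002008u;  /* hypothetical */

extern void dma_wait_for_interrupt(void);   /* assumed to be provided elsewhere */

/* Read 'count' bytes from the device into 'buf' without moving the data
 * through the processor. */
void dma_read_block(void *buf, uint32_t count)
{
    *DMA_ADDR  = (uint32_t)(uintptr_t)buf;       /* where the block should land  */
    *DMA_COUNT = count;                          /* how many bytes to move       */
    *DMA_CTRL  = DMA_CTRL_READ | DMA_CTRL_START; /* kick off the transfer        */
    dma_wait_for_interrupt();                    /* CPU is free until completion */
}
```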

DMA and Interrupt Breakpoints: [figure] within each instruction cycle (fetch instruction, decode instruction, fetch operand, execute instruction, store result, process interrupt) there are DMA breakpoints before and after each processor cycle, where the CPU can be suspended to let the DMA module use the bus; the interrupt breakpoint occurs only at the end of the instruction cycle.

Single-bus, Detached DMA: [figure] the processor, the DMA module, the I/O modules, and memory all share a single system bus.

Single-bus, Integrated DMA-I/O: [figure] each DMA module is integrated with one or more I/O modules, so data moves between them without crossing the system bus.

I/O Bus: [figure] the DMA module reaches the I/O modules through a separate I/O bus, keeping DMA-to-device traffic off the system bus that connects the processor, the DMA module, and memory.

Operating System Design Objectives: I/O is extremely slow compared to main memory. Multiprogramming allows some processes to wait on I/O while another process executes, but I/O still cannot keep up with processor speed, and swapping, which is used to bring in additional Ready processes, is itself an I/O operation. Efficiency of I/O is therefore an important issue, since I/O is a bottleneck.

Operating System Design Objectives: it is desirable to handle all I/O devices in a uniform manner. Hide most of the details of device I/O in lower-level routines so that processes and upper levels see devices in general terms such as Read, Write, Open, and Close. Generality is an important issue.
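As a sketch of what "general terms such as Read, Write, Open, and Close" can look like in code, here is a hypothetical table of driver operations; the struct, the function names, and the imaginary mydisk driver are invented for illustration and do not correspond to a real operating system's interface.

```c
/* Uniform device interface sketch: upper layers call through a table of
 * function pointers and never care which device is behind it. */
#include <stddef.h>
#include <stdio.h>

struct device_ops {
    int  (*open)(int minor);
    long (*read)(int minor, void *buf, size_t len);
    long (*write)(int minor, const void *buf, size_t len);
    int  (*close)(int minor);
};

/* An imaginary block-device driver supplies its own implementations. */
static int  mydisk_open(int minor)  { printf("open disk %d\n", minor);  return 0; }
static int  mydisk_close(int minor) { printf("close disk %d\n", minor); return 0; }
static long mydisk_read(int minor, void *buf, size_t len)        { (void)minor; (void)buf; return (long)len; }
static long mydisk_write(int minor, const void *buf, size_t len) { (void)minor; (void)buf; return (long)len; }

static const struct device_ops mydisk_ops = {
    .open = mydisk_open, .read = mydisk_read,
    .write = mydisk_write, .close = mydisk_close,
};

int main(void)
{
    char buf[512];
    const struct device_ops *dev = &mydisk_ops;   /* could be any device */
    dev->open(0);
    printf("read %ld bytes\n", dev->read(0, buf, sizeof buf));
    dev->close(0);
    return 0;
}
```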

A Model of I/O Organization: [figure] user processes reach devices through layered software. For a local peripheral device: logical I/O, device I/O, scheduling and control, hardware. For a communications port: communication architecture, device I/O, scheduling and control, hardware. For a device carrying a file system: directory management, file system, physical organization, device I/O, scheduling and control, hardware.

I/O Buffering: buffering addresses two problems: processes must wait for I/O to complete before proceeding, and the pages involved in an I/O operation must remain in main memory while it is in progress, which interferes with page replacement.

I/O Buffering: Block-oriented: information is stored in fixed-size blocks and transfers are made one block at a time; used for disks and tapes. Stream-oriented: information is transferred as a stream of bytes; used for terminals, printers, communication ports, the mouse, and most other devices that are not secondary storage.

No Buffering: [figure] the I/O device transfers data directly into the user process's address space.

Single Buffer: the operating system assigns a buffer in main memory for an I/O request. For block-oriented input, transfers are made to the buffer; the block is moved to user space when needed, and another block is then read into the buffer (read ahead). [figure: the device transfers into the system buffer, which is then moved to the user process]

Single Buffer: with block-oriented I/O, the user process can work on one block of data while the next block is being read in. A process waiting for I/O can be swapped out, since the input is taking place in system memory rather than user memory. The operating system keeps track of the assignment of system buffers to user processes. Output is accomplished by the user process writing a block to the buffer; the block is actually written out later.

Single Buffer: stream-oriented I/O uses the buffer one line at a time. User input from a terminal is one line at a time, with a carriage return signaling the end of the line, and output to the terminal is likewise one line at a time.

Double Buffer: use two system buffers instead of one. A process can transfer data to or from one buffer while the operating system empties or fills the other buffer. [figure: the device fills one system buffer while the other is moved to the user process]

Circular Buffer: more than two buffers are used, and each individual buffer is one unit in a circular buffer. Used when the I/O operation must keep up with the process. [figure: the device fills successive buffers around the ring while the process drains earlier ones]
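Here is a minimal sketch of a circular buffer of fixed-size units in C; the slot count, unit size, and function names are illustrative, and the synchronization a real kernel would need (for example, semaphores counting free and filled slots) is left out to keep the sketch short. With only two slots the same structure reduces to double buffering.

```c
/* Circular buffer sketch: the device deposits units at 'in' while the process
 * drains them from 'out'; both indices wrap around the ring. */
#include <string.h>

#define NSLOTS   8
#define UNITSIZE 512

static char slots[NSLOTS][UNITSIZE];
static int  in    = 0;   /* next slot the device fills    */
static int  out   = 0;   /* next slot the process drains  */
static int  count = 0;   /* number of filled slots        */

/* Device side: deposit one unit if there is room; returns 0 when full. */
int buffer_put(const char *unit)
{
    if (count == NSLOTS) return 0;
    memcpy(slots[in], unit, UNITSIZE);
    in = (in + 1) % NSLOTS;
    count++;
    return 1;
}

/* Process side: remove one unit if one is available; returns 0 when empty. */
int buffer_get(char *unit)
{
    if (count == 0) return 0;
    memcpy(unit, slots[out], UNITSIZE);
    out = (out + 1) % NSLOTS;
    count--;
    return 1;
}

int main(void)
{
    char in_unit[UNITSIZE] = "example data", out_unit[UNITSIZE];
    buffer_put(in_unit);     /* the "device" deposits a unit */
    buffer_get(out_unit);    /* the process drains it later  */
    return 0;
}
```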

Disk Data Layout: [figure] each platter surface is divided into concentric tracks, and each track into sectors, with inter-sector gaps within a track and inter-track gaps between tracks.

Disk Layout Using Constant Angular Velocity: [figure: concentric tracks divided into pie-shaped sectors, e.g. track 0, sector 0 and track 2, sector 7]

Disk Performance Parameters: to read or write, the disk head must be positioned at the desired track and at the beginning of the desired sector. Seek time: the time it takes to position the head at the desired track. Rotational delay (rotational latency): the time it takes until the desired sector rotates to line up with the head.

Disk Performance Parameters: access time is the sum of seek time and rotational delay, i.e. the time it takes to get into position to read or write. Data transfer occurs as the sector moves under the head. Data transfer for an entire file is faster when the file is stored on the same cylinder and in adjacent sectors.
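The usual back-of-the-envelope model is access time = seek time + rotational delay + transfer time, with the average rotational delay taken as half a revolution. The short C program below works this out for one sector; the numbers (4 ms average seek, 7200 rpm, 500 sectors per track) are illustrative, not taken from a specific drive.

```c
/* Average access time for one sector under the simple additive model. */
#include <stdio.h>

int main(void)
{
    double seek_ms       = 4.0;                         /* average seek time       */
    double rpm           = 7200.0;
    double rotation_ms   = 60000.0 / rpm;               /* one full revolution     */
    double rot_delay_ms  = rotation_ms / 2.0;           /* on average, half a turn */
    double sectors_track = 500.0;
    double transfer_ms   = rotation_ms / sectors_track; /* time to pass one sector */

    printf("average access time for one sector: %.2f ms\n",
           seek_ms + rot_delay_ms + transfer_ms);       /* about 8.2 ms here       */
    return 0;
}
```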

Disk Scheduling Policies: seek time is the main reason for differences in performance. For a single disk there can be a number of outstanding I/O requests; if requests are selected randomly, we get the worst possible performance. The goal of disk scheduling is to order these requests so as to lower the total seek time.

Disk Scheduling Policies: First-in, first-out (FIFO): process requests sequentially. Fair to all processes, but approaches random scheduling in performance if there are many processes.

Disk Scheduling Policies: Priority: the goal is not to optimize disk use but to meet other objectives; short batch jobs may be given higher priority to provide good interactive response time.

Disk Scheduling Policies: Last-in, first-out: good for transaction processing systems, since the device is given to the most recent user and there should be little arm movement. Possibility of starvation, since a job may never regain the head of the line.

Disk Scheduling Policies: Shortest Service Time First (SSTF): select the disk I/O request that requires the least movement of the disk arm from its current position, i.e. always choose the minimum seek time.

Disk Scheduling Policies: SCAN: the arm moves in one direction only, satisfying all outstanding requests, until it reaches the last track in that direction; then the direction is reversed.

Disk Scheduling Policies: C-SCAN: restricts scanning to one direction only; when the last track has been visited in one direction, the arm is returned to the opposite end of the disk and the scan begins again.

Disk Scheduling Policies: N-step-SCAN: segments the disk request queue into subqueues of length N; subqueues are processed one at a time using SCAN, and new requests are added to other queues while the current queue is being processed. FSCAN: two queues; one queue is used for new requests while the other is being serviced.
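The following C sketch compares the total head movement of FIFO, SSTF, and SCAN on one example request queue; the track numbers and starting position are made up, and the SCAN here reverses at the last pending request rather than running to the physical end of the disk, a common simplification (often called LOOK).

```c
/* Total head movement (in tracks) under three disk scheduling policies. */
#include <stdio.h>
#include <stdlib.h>

#define NREQ 8

static int dist(int a, int b) { return a > b ? a - b : b - a; }

/* FIFO: serve requests in arrival order. */
static int fifo(const int *req, int n, int head)
{
    int total = 0;
    for (int i = 0; i < n; i++) { total += dist(head, req[i]); head = req[i]; }
    return total;
}

/* SSTF: always serve the pending request closest to the current head position. */
static int sstf(const int *req, int n, int head)
{
    int done[NREQ] = {0}, total = 0;
    for (int served = 0; served < n; served++) {
        int best = -1;
        for (int i = 0; i < n; i++)
            if (!done[i] && (best < 0 || dist(head, req[i]) < dist(head, req[best])))
                best = i;
        total += dist(head, req[best]);
        head = req[best];
        done[best] = 1;
    }
    return total;
}

static int cmp(const void *a, const void *b) { return *(const int *)a - *(const int *)b; }

/* SCAN (simplified): sweep upward through the pending requests, then downward. */
static int scan(const int *req, int n, int head)
{
    int sorted[NREQ], total = 0, pos = head;
    for (int i = 0; i < n; i++) sorted[i] = req[i];
    qsort(sorted, n, sizeof(int), cmp);
    for (int i = 0; i < n; i++)                 /* upward sweep   */
        if (sorted[i] >= head) { total += dist(pos, sorted[i]); pos = sorted[i]; }
    for (int i = n - 1; i >= 0; i--)            /* downward sweep */
        if (sorted[i] < head)  { total += dist(pos, sorted[i]); pos = sorted[i]; }
    return total;
}

int main(void)
{
    int req[NREQ] = {55, 58, 39, 18, 90, 160, 150, 38};  /* example queue  */
    int head = 100;                                      /* starting track */
    printf("FIFO: %d tracks\n", fifo(req, NREQ, head));
    printf("SSTF: %d tracks\n", sstf(req, NREQ, head));
    printf("SCAN: %d tracks\n", scan(req, NREQ, head));
    return 0;
}
```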

RAID 0 (non-redundant): [figure: strips 0-15 laid out round-robin across four disks]. Performance: I/O request rate excellent, data transfer rate excellent; several strips can be transferred in one I/O request.

RAID 1 (mirrored): [figure: strips 0-15 across four disks, with a second set of four disks holding identical copies]. Mirroring improves fault tolerance. Performance: I/O request rate good, data transfer rate good.

RAID 2 (redundancy through Hamming code): [figure: data bits b0-b3 on the data disks, check bits f0(b), f1(b), f2(b) on the redundant disks]. The Hamming code corrects 1-bit errors and detects 2-bit errors. Disk heads are synchronized so that they are at the same place on each disk; all disks work on the same I/O request, so only one I/O can be in progress at a time.

RAID 3 (bit-interleaved parity): [figure: data bits b0-b3 plus a single parity disk P(b)]. Similar to RAID 2, but a single parity bit is used; can correct 1-bit errors. RAID 2 and 3: only one I/O request can be performed at a time, hence a poor I/O request rate but a very good data transfer rate.

RAID 4 (block-level parity): [figure: blocks 0-15 striped across four data disks, with a fifth disk holding the parity blocks P(0-3), P(4-7), P(8-11), P(12-15)]. Parity is used as in RAID 3, but each disk works independently, so multiple I/O requests can be processed at the same time.

RAID 5 (block-level distributed parity): [figure: blocks 0-19 striped across five disks, with the parity blocks P(0-3), P(4-7), P(8-11), P(12-15), P(16-19) rotated across the disks]. Like RAID 4, but the parity information is distributed across all disks, avoiding the bottleneck presented by a single parity disk.
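The parity used by RAID 4 and RAID 5 is simply the bytewise XOR of the data blocks in a stripe, so any single lost block can be rebuilt by XOR-ing the surviving blocks with the parity. The sketch below demonstrates this; the block size, disk count, and contents are made-up illustration values.

```c
/* Block parity sketch: compute a parity block, then rebuild a "failed" block. */
#include <stdio.h>
#include <string.h>

#define NDATA 4        /* data disks in the stripe         */
#define BLOCK 16       /* bytes per block (tiny, for show) */

static void xor_into(unsigned char *dst, const unsigned char *src)
{
    for (int i = 0; i < BLOCK; i++) dst[i] ^= src[i];
}

int main(void)
{
    unsigned char data[NDATA][BLOCK], parity[BLOCK] = {0}, rebuilt[BLOCK] = {0};

    for (int d = 0; d < NDATA; d++)                      /* fill with example data */
        for (int i = 0; i < BLOCK; i++) data[d][i] = (unsigned char)(d * 31 + i);

    for (int d = 0; d < NDATA; d++) xor_into(parity, data[d]);   /* parity block */

    /* Pretend disk 2 failed: rebuild its block from the others plus parity. */
    xor_into(rebuilt, parity);
    for (int d = 0; d < NDATA; d++)
        if (d != 2) xor_into(rebuilt, data[d]);

    printf("disk 2 recovered correctly: %s\n",
           memcmp(rebuilt, data[2], BLOCK) == 0 ? "yes" : "no");
    return 0;
}
```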

Data Mapping for RAID Level 0: [figure] the array management software maps logical strips 0-15 round-robin onto four physical disks: disk 0 holds strips 0, 4, 8, 12; disk 1 holds strips 1, 5, 9, 13; disk 2 holds strips 2, 6, 10, 14; disk 3 holds strips 3, 7, 11, 15.
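The round-robin mapping amounts to a modulus and a division: with n disks, logical strip k goes to disk k mod n at row k / n. A minimal sketch, with the disk count chosen to match the figure:

```c
/* RAID 0 data mapping sketch: print where each logical strip lands. */
#include <stdio.h>

int main(void)
{
    const int ndisks = 4;
    for (int strip = 0; strip < 16; strip++)
        printf("logical strip %2d -> disk %d, row %d\n",
               strip, strip % ndisks, strip / ndisks);
    return 0;
}
```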

Disk Cache: a buffer in main memory for disk sectors, containing a copy of some of the sectors on the disk.

Least Recently Used: the block that has been in the cache the longest with no reference to it is replaced. The cache is treated as a stack of blocks: the most recently referenced block is on the top of the stack, and when a block is referenced or brought into the cache it is placed on the top of the stack.

Least Recently Used: the block on the bottom of the stack is removed when a new block is brought in. Blocks don't actually move around in main memory; a stack of pointers is used.
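A minimal sketch of the LRU stack in C, assuming a toy cache that tracks block numbers only; a real disk cache would keep a stack of pointers (or a hash table plus a linked list) rather than shifting entries, but the replacement decision is the same.

```c
/* LRU stack sketch: index 0 is the most recently used entry, the last index
 * the least recently used; on a miss the bottom entry is evicted. */
#include <stdio.h>

#define CACHE_SLOTS 4

static int cache[CACHE_SLOTS];
static int used = 0;

static void reference(int block)
{
    int pos = -1;
    for (int i = 0; i < used; i++)
        if (cache[i] == block) { pos = i; break; }

    if (pos < 0) {                       /* miss */
        if (used < CACHE_SLOTS) used++;
        pos = used - 1;                  /* bottom slot: the LRU victim */
        printf("miss on block %d\n", block);
    }
    for (int i = pos; i > 0; i--)        /* shift others down ...         */
        cache[i] = cache[i - 1];
    cache[0] = block;                    /* ... and place this one on top */
}

int main(void)
{
    int trace[] = {1, 2, 3, 4, 1, 5, 2}; /* example reference string */
    for (int i = 0; i < 7; i++) reference(trace[i]);
    return 0;
}
```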

Least Frequently Used: the block that has experienced the fewest references is replaced. A counter is associated with each block and incremented each time the block is accessed. Problem: some blocks may be referenced many times in a short period and then not be needed any more, yet their high counts keep them in the cache.
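A minimal sketch of LFU in C, with a made-up cache size and reference trace chosen to show the weakness just described: block 1 is referenced heavily at first, is never used again, and still cannot be evicted because its count stays high.

```c
/* LFU sketch: each cached block carries a reference counter; on a miss with a
 * full cache, the block with the smallest count is evicted. */
#include <stdio.h>

#define SLOTS 3

static int blocks[SLOTS];
static int counts[SLOTS];
static int used = 0;

static void reference(int block)
{
    for (int i = 0; i < used; i++)
        if (blocks[i] == block) { counts[i]++; return; }       /* hit */

    int victim = 0;
    if (used < SLOTS) {
        victim = used++;                                        /* free slot      */
    } else {
        for (int i = 1; i < SLOTS; i++)                         /* smallest count */
            if (counts[i] < counts[victim]) victim = i;
        printf("evict block %d (count %d)\n", blocks[victim], counts[victim]);
    }
    blocks[victim] = block;
    counts[victim] = 1;
}

int main(void)
{
    int trace[] = {1, 1, 1, 2, 3, 4, 2};  /* block 1 is "hot" early, then unused */
    for (int i = 0; i < 7; i++) reference(trace[i]);
    return 0;
}
```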

Frequency-based Replacement: [figure] the LRU stack is divided into a new section at the MRU end and an old section at the LRU end. A re-reference to a block in the new section leaves its count unchanged; a re-reference to a block in the old section sets count := count + 1; on a miss the new block is brought in with count := 1. Refinement: use three sections (new, middle, and old).

UNIX I/O Structure: [figure] the file subsystem sits on top; block devices are reached through the buffer cache and character devices directly, and both paths end at the device drivers.

Windows NT 4.0 I/O Manager: [figure] the I/O Manager coordinates the cache manager, the file system drivers, the network drivers, and the hardware device drivers.