COT 4600 Operating Systems, Fall 2009. Dan C. Marinescu. Office: HEC 439 B. Office hours: Tu-Th 3:00-4:00 PM

Lecture 23

Attention: project phase 4 is due Tuesday November 24; the final exam is Thursday December 10, 4:00-6:50 PM.

Last time: Performance metrics (Chapter 5)
- Random variables
- Elements of queuing theory

Today:
- Methods to diminish the effect of bottlenecks: batching, dallying, speculation
- I/O systems; the I/O bottleneck
- Multi-level memories
- Memory characterization
- Multi-level memory management using virtual memory
- Adding multi-level memory management to virtual memory

Next time: Scheduling

Methods to diminish the effect of bottlenecks

- Batching: perform several requests as a group to avoid the setup overhead of doing them one at a time. With f the fixed (setup) delay, v the variable per-item delay, and n the number of items, the cost is n(f + v) one at a time versus f + nv when batched.
- Dallying: delay a request on the chance that it will not be needed, or that it can be batched with later requests.
- Speculation: perform an operation in advance of receiving the request, e.g., speculative execution of branch instructions, or speculative page fetching rather than fetching on demand.

All three increase complexity due to concurrency.
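The batching trade-off can be checked with a few lines of arithmetic; the delays below are illustrative numbers chosen to echo the disk example, not measurements:

```python
# Cost model for batching: f = fixed setup delay per request,
# v = variable per-item delay, n = number of items.
def one_at_a_time(n, f, v):
    return n * (f + v)   # every item pays the setup cost

def batched(n, f, v):
    return f + n * v     # the setup cost is paid once for the group

# Illustrative values: f = 8.0 ms setup, v = 0.02 ms per item, n = 384 items
print(round(one_at_a_time(384, 8.0, 0.02), 2))  # 3079.68 ms
print(round(batched(384, 8.0, 0.02), 2))        # 15.68 ms
```

The fixed cost dominates the unbatched total, which is why grouping requests pays off so dramatically.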

The I/O bottleneck

An illustration of the principle of incommensurate scaling: CPU and memory speeds increase at a faster rate than those of mechanical I/O devices, which are limited by the laws of physics.

Example: hard drives
- Average seek time (AST): AST = 8 msec
- Average rotational latency (ARL): rotation speed 7200 rotations/minute = 120 rotations/second (8.33 msec/rotation), so ARL = 4.17 msec
- A typical 400 GByte disk: 16,383 cylinders, 24 MByte/cylinder; 8 two-sided platters, 16 tracks/cylinder; 24/16 MByte/track = 1.5 MByte/track
- The maximum transfer rate of the disk drive: 120 rotations/sec x 1.5 MByte/track = 180 MByte/sec
- Bus transfer rates: ATA-3 bus 3 Gbit/sec; IDE bus 66 MByte/sec - this is the bottleneck!
- The average time to read a 4 KByte block: AST + ARL + 4/180 = 8 + 4.17 + 0.02 = 12.19 msec
- The throughput: 328 KByte/sec
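The access-time arithmetic can be reproduced directly; a short sketch using the slide's figures (note that 180 MByte/sec equals 180 KByte/msec, which keeps the units simple):

```python
AST = 8.0     # average seek time, msec
ARL = 4.17    # average rotational latency, msec (7200 RPM)
RATE = 180.0  # maximum transfer rate, MByte/sec == KByte/msec

transfer = 4.0 / RATE              # msec to transfer a 4 KByte block
total = AST + ARL + transfer       # average time to read the block
throughput = 4.0 / total * 1000.0  # KByte/sec

print(round(total, 2), round(throughput))  # 12.19 328
```

The transfer itself costs only 0.02 msec; essentially all of the 12.19 msec is mechanical positioning, which is the heart of the bottleneck.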


I/O bottleneck (continued)

If the application consists of a loop - read a block of data, compute for 1 msec, write the block back - and if
- the blocks are stored sequentially on the disk, so we can read a full track at once (speculative execution of the I/O), and
- we have a write-through buffer, so we can write a full track at once (batching),
then the execution time can be considerably reduced.

The time per iteration is: read time + compute time + write time.
- Initially: 12.19 + 1 + 12.19 = 25.38 msec.
- With speculative reading of an entire track and overlap of reading and writing: reading an entire 1.5 MByte track fetches the data for 384 = 1,536/4 iterations. The time for 384 iterations:
  - Fixed delay: average seek time + 1 rotational delay = 8 + 8.33 = 16.33 msec
  - Variable delay: 384 x (compute time + data transfer time) = 384 x (1 + 12.19) = 5,065 msec
  - Total time: 16.33 + 5,065 = 5,081 msec
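The comparison can be re-derived in a short sketch, using the slide's per-block figure of 12.19 msec:

```python
PER_BLOCK = 12.19   # avg time for one random 4 KByte access, msec
COMPUTE = 1.0       # compute time per iteration, msec
N = 1536 // 4       # 1.5 MByte track / 4 KByte blocks = 384 iterations

naive = N * (PER_BLOCK + COMPUTE + PER_BLOCK)  # read + compute + write
fixed = 8.0 + 8.33                             # one seek + one rotation
improved = fixed + N * (COMPUTE + PER_BLOCK)   # reads and writes overlapped

print(round(naive), round(improved))  # 9746 5081
```

Paying the seek and rotational delay once per track, rather than twice per block, roughly halves the total time for the loop.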


Disk writing strategies

Keep in mind that buffering data before writing it to the disk has implications: if the system fails, the buffered data is lost.

Strategies:
- Write-through: write to the disk before the write system call returns to the user application.
- User-controlled write-through, via a force call.
- Write at the time the file is closed.
- Write after a predefined number of write calls, or after a predefined time.
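On a POSIX-style system, the user-controlled "force" strategy is typically a flush followed by fsync; a minimal sketch using Python's os module:

```python
import os
import tempfile

def forced_write(path, data):
    """Write data and do not return until the OS has pushed it to the device."""
    with open(path, "wb") as f:
        f.write(data)
        f.flush()             # drain the user-space buffer
        os.fsync(f.fileno())  # force the kernel's buffers out to the disk

path = os.path.join(tempfile.mkdtemp(), "record.bin")
forced_write(path, b"critical record")
with open(path, "rb") as f:
    print(f.read())  # b'critical record'
```

Without the fsync, the data could still sit in kernel buffers when the call returns, and a crash would lose it.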

Communication among asynchronous sub-systems: polling versus interrupts

- Polling: periodically checking the status of an I/O device.
- Interrupt: the device delivers data or status information immediately, as soon as it becomes available.

(Figure: Intel Pentium processor event-vector table)

Interrupts: used for I/O and for exceptions

- CPU interrupt-request line: triggered by an I/O device.
- Interrupt handler: receives the interrupts.
- Masking an interrupt: ignore or delay some interrupts.
- Interrupt vector: dispatches each interrupt to the correct handler, based on priority; some interrupts are non-maskable.
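A toy software model of vectored dispatch with masking (illustrative only; real dispatch happens in hardware, and the vector numbers below are made up):

```python
handled = []   # interrupts that reached their handler
pending = []   # interrupts delayed because they were masked
masked = {14}  # pretend vector 14 is currently masked

# The vector table maps interrupt numbers to their handlers.
vector_table = {
    0:  lambda: handled.append("divide-error"),
    14: lambda: handled.append("page-fault"),
    32: lambda: handled.append("timer"),
}

def raise_interrupt(n):
    if n in masked:
        pending.append(n)    # delayed until the vector is unmasked
    else:
        vector_table[n]()    # dispatch through the vector table

raise_interrupt(32)
raise_interrupt(14)
print(handled, pending)  # ['timer'] [14]
```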

DMA

Direct memory access bypasses the CPU to transfer data directly between an I/O device and memory; it allows subsystems within the computer - disk controllers, graphics cards, network cards, sound cards, GPUs - to access system memory for reading and/or writing independently of the CPU. DMA is also used for intra-chip data transfer in multi-core processors.
- Avoids programmed I/O for large data movements
- Requires a DMA controller


Device drivers and I/O system calls

There is a multitude of I/O devices, differing in:
- Character-stream or block transfer
- Sequential or random access
- Sharable or dedicated use
- Speed of operation
- Read-write, read-only, or write-only access

The device-driver layer hides these differences among I/O controllers from the kernel: I/O system calls encapsulate device behaviors in generic classes.

Block and character devices

- Block devices (e.g., disk drives, tapes): commands such as read, write, seek; raw I/O or file-system access; memory-mapped file access possible.
- Character devices (e.g., keyboards, mice, serial ports): commands such as get, put; libraries allow line editing.

Network devices and timers

Network devices:
- Have their own interface, different from block or character devices.
- Unix and Windows NT/9x/2000 include a socket interface, which separates the network protocol from network operations and includes select functionality.
- Approaches vary widely: pipes, FIFOs, streams, queues, mailboxes.

Timers:
- Provide the current time, elapsed time, and timers.
- A programmable interval timer is used for timings and periodic interrupts.
- ioctl (on UNIX) covers odd aspects of I/O such as clocks and timers.
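The select functionality can be exercised from Python's select module; a minimal sketch with a connected socket pair standing in for a network connection:

```python
import select
import socket

a, b = socket.socketpair()   # a connected pair of sockets
a.sendall(b"ping")

# select() reports which descriptors are ready, instead of blocking on one
readable, _, _ = select.select([b], [], [], 1.0)
data = b.recv(4) if b in readable else None
print(data)  # b'ping'

a.close(); b.close()
```

With many descriptors in the first argument, select lets one thread multiplex over all of them.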

Blocking and non-blocking I/O

- Blocking: the process is suspended until the I/O completes. Easy to use and understand, but insufficient for some needs.
- Non-blocking: the I/O call returns control to the process immediately, with a count of bytes read or written. Used for user interfaces and buffered data copies; often implemented via multi-threading.
- Asynchronous: the process runs while the I/O executes; the I/O subsystem signals the process when the I/O completes.
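A minimal sketch of the non-blocking case, using a pipe (Python exposes the underlying POSIX non-blocking flag through os.set_blocking):

```python
import os

r, w = os.pipe()
os.set_blocking(r, False)   # reads on r now return immediately

try:
    os.read(r, 512)         # nothing has been written yet
    got_control_back = False
except BlockingIOError:     # control returns instead of suspending
    got_control_back = True

os.write(w, b"hello")
data = os.read(r, 512)      # data is available, so the read succeeds
print(got_control_back, data)  # True b'hello'

os.close(r); os.close(w)
```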

(Figure: synchronous versus asynchronous I/O)

Kernel I/O subsystem

- Scheduling: some I/O request ordering via per-device queues; some OSs try to ensure fairness.
- Buffering: store data in memory while transferring it to or from an I/O device, to cope with device speed mismatches or transfer size mismatches, and to maintain copy semantics.


Kernel I/O subsystem and error handling

- Caching: fast memory holding a copy of the data. Always just a copy; key to performance.
- Spooling: holds output for a device that can serve only one request at a time (e.g., a printer).
- Device reservation: provides exclusive access to a device via system calls for allocation and de-allocation; introduces the possibility of deadlock.
- Error handling: the OS can recover from a failed disk read, an unavailable device, or transient write failures. When an I/O request fails, an error code is returned; system error logs hold problem reports.
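The error code mentioned above surfaces in user programs as errno; a minimal sketch (the path below is deliberately nonexistent):

```python
import errno

try:
    open("/no/such/path/at-all")
    code = None
except OSError as e:
    code = e.errno          # the kernel's error code for the failed request

print(code == errno.ENOENT)  # True
```

Higher-level languages wrap the raw error code in an exception, but the code itself still comes from the kernel's reply to the failed I/O request.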

I/O protection

- I/O instructions are privileged.
- Users perform I/O through system calls.

Kernel data structures for I/O handling

- The kernel keeps state information for I/O components, including open-file tables, network connections, and device control blocks.
- Complex data structures track buffers, memory allocation, and dirty blocks.
- Some OSs use object-oriented methods and message passing to implement I/O.


Hardware operations

The operations for reading a file:
- Determine the device holding the file.
- Translate the name to the device representation.
- Physically read the data from the disk into a buffer.
- Make the data available to the process.
- Return control to the process.

STREAMS in Unix

A STREAM is a full-duplex communication channel between a user-level process and a device, in Unix System V and beyond. A STREAM consists of:
- a STREAM head that interfaces with the user process,
- a driver end that interfaces with the device,
- zero or more STREAM modules between them.

Each module contains a read queue and a write queue; message passing is used to communicate between queues.
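A toy model of the module pipeline (plain Python, not the System V STREAMS API): each module owns a write queue and transforms messages on their way from the stream head to the driver end.

```python
from collections import deque

class Module:
    """One STREAM module: a transform function plus its write queue."""
    def __init__(self, transform):
        self.transform = transform
        self.write_q = deque()

    def put(self, msg):
        self.write_q.append(self.transform(msg))

def stream_write(modules, msg):
    """Pass a message downstream, queue by queue, to the driver end."""
    for m in modules:
        m.put(msg)                 # message passing into the module
        msg = m.write_q.popleft()  # the module hands it on downstream
    return msg

# e.g. an upcasing module followed by a newline-appending module
pipeline = [Module(str.upper), Module(lambda s: s + "\n")]
print(repr(stream_write(pipeline, "hello")))  # 'HELLO\n'
```

The real mechanism is richer (flow control, both read and write queues per module), but the composition-by-queues idea is the same.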


I/O is a major factor in system performance:
- Executing device-driver and kernel I/O code
- Context switches
- Data copying
- Network traffic is especially stressful

Improving performance

- Reduce the number of context switches.
- Reduce data copying.
- Reduce interrupts by using large transfers, smart controllers, and polling.
- Use DMA.
- Balance CPU, memory, bus, and I/O performance for the highest throughput.

Memory characterization

- Capacity.
- Latency of random-access memory:
  - Access time: the time until the data is available.
  - Cycle time (> access time): the time until the next READ operation can be carried out (a READ can be destructive).
- Cost: cents/MByte for random-access memory; dollars/GByte for disk storage.
- Cell size: the number of bits transferred in a single READ or WRITE operation.
- Throughput: Gbit/sec.


Multi-level memories

In the following hierarchy, both the amount of storage and the access time increase as we move down:
- CPU registers
- L1 cache
- L2 cache
- Main memory
- Magnetic disk
- Mass storage systems
- Remote storage

Memory management schemes place the data throughout this hierarchy:
- Manual: left to the user.
- Automatic, based on memory virtualization: more effective and easier to use.

Other forms of memory virtualization

- Memory-mapped files: in UNIX, mmap.
- Copy-on-write: when several threads use the same data, map the page holding the data and store the data only once in memory. This works as long as all the threads only READ the data. If one of the threads carries out a WRITE, the virtual memory handler generates an exception and the data pages are remapped so that each thread gets its own copy of the page.
- On-demand zero-filled pages: instead of allocating zero-filled pages in RAM or on the disk, the VM manager maps these pages without READ or WRITE permissions. When a thread attempts to actually READ or WRITE such a page, an exception is generated and the VM manager allocates the page dynamically.
- Virtual shared memory: several threads on multiple systems share the same address space. When a thread references a page that is not in its local memory, the local VM manager fetches the page over the network and the remote VM manager unmaps it.
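A minimal sketch of the first item, memory-mapped files, using Python's mmap wrapper over the underlying system call:

```python
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    f.write(b"hello world")

# Map the file into the address space: loads and stores go through the
# paging machinery instead of explicit read()/write() calls.
with open(path, "r+b") as f, mmap.mmap(f.fileno(), 0) as m:
    first = bytes(m[:5])  # a load from the mapped region
    m[:5] = b"HELLO"      # a store updates the mapped page
    m.flush()             # push the dirty page back to the file

with open(path, "rb") as f:
    final = f.read()
print(first, final)  # b'hello' b'HELLO world'
```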

Multi-level memory management and virtual memory

Two-level memory system: RAM + disk.
- READ and WRITE from RAM: controlled by the virtual memory manager (VMM).
- GET and PUT from disk: controlled by a multi-level memory manager (MLMM).

Old design philosophy: integrate the two to reduce the instruction count. The new approach is a modular organization:
- Implement the VMM in hardware.
- Implement the MLMM in software, in the kernel; it transfers pages back and forth between RAM and the disk.

How it works:
- The VMM attempts to translate the virtual memory address to a physical memory address.
- If the page is not in main memory, the VMM generates a page-fault exception.
- The exception handler uses a SEND to send the page number to an MLMM port.
- The SEND invokes ADVANCE, which wakes up a thread of the MLMM.
- The MLMM invokes AWAIT on behalf of the thread interrupted by the page fault.
- The AWAIT releases the processor to the SCHEDULER thread.

The new approach leads to implicit I/O.
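The message flow above can be simulated with ordinary threads and a queue standing in for the MLMM port; a toy sketch (SEND, ADVANCE, and AWAIT here are just comments marking where each primitive would act):

```python
import queue
import threading

port = queue.Queue()   # the MLMM port
resident = set()       # pages currently in RAM (simulated)

def mlmm():
    while True:
        page, done = port.get()   # ADVANCE: a message has arrived
        if page is None:
            return                # shutdown message
        resident.add(page)        # GET the page from disk (simulated)
        done.set()                # wake the thread that page-faulted

threading.Thread(target=mlmm, daemon=True).start()

def read_address(page):
    if page not in resident:      # translation fails: page-fault exception
        done = threading.Event()
        port.put((page, done))    # SEND the page number to the MLMM port
        done.wait()               # AWAIT: release the processor meanwhile
    return page in resident

print(read_address(7))  # True
port.put((None, None))  # shut the MLMM thread down
```

The faulting thread never issues an explicit disk request; the fault handler's message to the MLMM triggers the I/O, which is what "implicit I/O" means.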
