Answers to exercises, chapters 13 and 12

Answer to exercises chap 13.2

The advantage of supporting memory-mapped I/O to device-control registers is that it eliminates the need for special I/O instructions in the instruction set, and with them the need for protection rules that prevent user programs from executing those instructions. The disadvantage is that the resulting flexibility must be handled with care: the memory-translation unit must ensure that the memory addresses associated with the device-control registers are not accessible to user programs, in order to preserve protection.

Answer to exercises chap 13.3

The purpose of this strategy is to ensure that the most critical aspect of the interrupt-handling code is performed first, while the less critical portions are deferred. For instance, when a device finishes an I/O operation, the device-control operations that mark the device as no longer busy are the most important, since they allow future operations to be issued. The task of copying the data provided by the device into the appropriate user or kernel memory regions, however, can be delayed to a point when the CPU is idle. In such a scenario, a lower-priority interrupt handler performs the copy operation.
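The split described above can be sketched as a toy simulation. The names `top_half` and `bottom_half` echo common Linux terminology and are not from the text; the log records the order in which the urgent and deferred work happen.

```python
from collections import deque

# Toy sketch of split interrupt handling: the "top half" does only the urgent
# work (free the device) and queues the data copy for a lower-priority
# "bottom half" that runs when the CPU is idle.
deferred_work = deque()
device_busy = True
log = []

def top_half(data):
    """Urgent part: free the device so new operations can be issued."""
    global device_busy
    device_busy = False            # critical: device can accept new I/O now
    log.append("device freed")
    deferred_work.append(data)     # defer the copy to user/kernel memory

def bottom_half(user_buffer):
    """Less critical part: copy the data when the CPU is otherwise idle."""
    while deferred_work:
        user_buffer.extend(deferred_work.popleft())
        log.append("data copied")

buf = []
top_half([1, 2, 3])    # interrupt arrives: device freed immediately
bottom_half(buf)       # later, at low priority: the copy completes
```

The essential property is visible in the log: the device is declared free before any data movement takes place.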

Answer to exercises chap 13.6

Three pros of the UNIX method:
- Very efficient: low overhead and little data movement.
- Fast to implement: no coordination with other kernel components is needed.
- Simple, so there is less chance of data loss.

Three cons:
- No data protection, and more possible side effects from changes, so it is more difficult to debug.
- Difficult to implement new I/O methods: new data structures are needed rather than just new objects.
- A complicated kernel I/O subsystem, full of data structures, access routines, and locking mechanisms.

Answer to exercises chap 13.8

A hybrid approach could switch between polling and interrupts depending on the length of the I/O wait. For example, we could poll in a loop N times, and if the device is still busy after the Nth poll, set up an interrupt and sleep. This approach avoids long busy-waiting cycles. The method works best for very long or very short busy times; it is inefficient if the I/O completes at N + T (where T is a small number of cycles), because it then pays the overhead of polling plus setting up and catching the interrupt. Pure polling is best with very short wait times; interrupts are best with known long wait times.
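A minimal sketch of the hybrid scheme, with hypothetical helpers (`device_ready`, `arm_interrupt_and_sleep`) standing in for real status reads and interrupt machinery:

```python
def hybrid_wait(device_ready, n, arm_interrupt_and_sleep):
    """Poll up to n times; if the device is still busy, fall back to interrupts."""
    for attempt in range(n):
        if device_ready():
            return ("polled", attempt)      # short wait: polling caught it
    arm_interrupt_and_sleep()               # long wait: block until interrupt
    return ("interrupted", n)

def make_device(ready_after_polls):
    """Toy device that reports ready after a given number of status reads."""
    state = {"reads": 0}
    def ready():
        state["reads"] += 1
        return state["reads"] > ready_after_polls
    return ready

fast = hybrid_wait(make_device(2), 5, lambda: None)   # completes while polling
slow = hybrid_wait(make_device(50), 5, lambda: None)  # falls back to interrupt
```

The fast device is caught within the polling budget; the slow one triggers the interrupt path, exactly the switch-over the answer describes.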

Answer to exercises chap 13.9

a. A mouse used with a graphical user interface. Buffering may be needed to record mouse movement while higher-priority operations are taking place. Spooling and caching are inappropriate. Interrupt-driven I/O is most appropriate.

b. A tape drive on a multitasking operating system (assume no device preallocation is available). Buffering may be needed to manage the throughput difference between the tape drive and the source or destination of the I/O. Caching can be used to hold copies of data that reside on the tape, for faster access. Spooling could be used to stage data to the device when multiple users wish to read from or write to it. Interrupt-driven I/O is likely to allow the best performance.

Answer to exercises chap 13.9 (cont.)

c. A disk drive containing user files. Buffering can be used to hold data in transit from user space to the disk, and vice versa. Caching can be used to hold disk-resident data for improved performance. Spooling is not necessary because disks are shared-access devices. Interrupt-driven I/O is best for devices such as disks that transfer data at slow rates.

d. A graphics card with direct bus connection, accessible through memory-mapped I/O. Buffering may be needed to control multiple access and for performance (double buffering can hold the next screen image while the current one is displayed). Caching and spooling are not necessary because the device is fast and shared-access. Polling and interrupts are useful only for input and for I/O-completion detection, neither of which is needed for a memory-mapped device.

Answer to exercises chap 13.10

It is possible, using the following algorithm. Let's assume we simply use the busy bit (or the command-ready bit; the answer is the same either way). When the bit is off, the controller is idle. The host writes to data-out and sets the bit to signal that an operation is ready (the equivalent of setting the command-ready bit). When the controller is finished, it clears the busy bit. The host then initiates the next operation. This solution requires that both the host and the controller have read and write access to the same bit, which can complicate circuitry and increase the cost of the controller.
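The handshake can be illustrated with a toy simulation; the class and method names are invented for illustration, and real hardware would of course use registers rather than Python objects:

```python
# Toy model of the one-bit handshake: host and controller coordinate through a
# single shared "busy" bit plus the data-out register.
class OneBitHandshake:
    def __init__(self):
        self.busy = False        # the single shared bit
        self.data_out = None     # data-out register
        self.received = []       # what the controller has serviced

    def host_write(self, value):
        assert not self.busy     # host waits until the controller is idle
        self.data_out = value
        self.busy = True         # setting the bit doubles as command-ready

    def controller_service(self):
        if self.busy:
            self.received.append(self.data_out)
            self.busy = False    # clearing the bit signals completion

hs = OneBitHandshake()
for cmd in ["cmd1", "cmd2"]:
    hs.host_write(cmd)           # host may only write while the bit is clear
    hs.controller_service()      # controller clears the bit when done
```

Both sides read and write the same bit, which is exactly the extra circuitry cost the answer points out.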

Answer to exercises chap 13.12

Direct virtual memory access allows a device to perform a transfer between two memory-mapped devices without the intervention of the CPU or the use of main memory as a staging ground; the device simply issues memory operations to the memory-mapped addresses of the target device, and the ensuing virtual-address translation guarantees that the data is transferred to the appropriate device. This functionality, however, comes at the cost of having to support virtual-address translation on addresses accessed by the DMA controller, which requires adding an address-translation unit to the controller. The address translation incurs both hardware and software costs and might also cause coherence problems between the translation structures maintained by the CPU and the corresponding structures used by the DMA controller. Dealing with these coherence issues further increases system complexity.

Answer to exercises chap 13.15

Generally, blocking I/O is appropriate when the process will be waiting for only one specific event. Examples include a disk, tape, or keyboard read by an application program. Non-blocking I/O is useful when I/O may come from more than one source and the order of arrival is not predetermined. Examples include network daemons listening to more than one network socket, window managers that accept mouse movement as well as keyboard input, and I/O-management programs such as a copy command that moves data between I/O devices. In the last case, the program could improve its performance by buffering the input and output and using non-blocking I/O to keep both devices fully occupied. Non-blocking I/O is more complicated for programmers because of the asynchronous rendezvous needed when an I/O completes. Also, busy waiting is less efficient than interrupt-driven I/O, so overall system performance would decrease.
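For the multiple-source case, here is a small sketch using the standard select() interface on two local socket pairs; the pairs are stand-ins for the several sockets a network daemon would watch, and only the source with pending data is serviced:

```python
import select
import socket

# Two independent I/O sources, as a daemon watching several sockets would have.
a_send, a_recv = socket.socketpair()
b_send, b_recv = socket.socketpair()

a_send.sendall(b"ping")                          # data pending on source A only

# Wait (up to 1 s) for whichever sources are ready, then service only those.
ready, _, _ = select.select([a_recv, b_recv], [], [], 1.0)
messages = [s.recv(16) for s in ready]

for s in (a_send, a_recv, b_send, b_recv):
    s.close()
```

The call returns as soon as any source is ready, so the process never commits to blocking on the wrong one.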

Answer to exercises chap 12.8

a. The FCFS schedule is 143, 86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130. The total seek distance is 7081.
b. The SSTF schedule is 143, 130, 86, 913, 948, 1022, 1470, 1509, 1750, 1774. The total seek distance is 1745.
c. The SCAN schedule is 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 4999, 130, 86. The total seek distance is 9769.
d. The LOOK schedule is 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 130, 86. The total seek distance is 3319.
e. The C-SCAN schedule is 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 4999, 86, 130. The total seek distance is 9813.
f. The C-LOOK schedule is 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 86, 130. The total seek distance is 3363.
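The totals above can be checked mechanically: the helper below sums head movements over each visit order, starting from track 143. Note that for C-SCAN and C-LOOK the wrap-around seek is counted in full, which matches the figures given.

```python
def total_seek(start, schedule):
    """Sum of head movements when servicing tracks in the given order."""
    total, pos = 0, start
    for track in schedule:
        total += abs(track - pos)
        pos = track
    return total

START = 143  # initial head position
fcfs   = total_seek(START, [86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130])
sstf   = total_seek(START, [130, 86, 913, 948, 1022, 1470, 1509, 1750, 1774])
scan   = total_seek(START, [913, 948, 1022, 1470, 1509, 1750, 1774, 4999, 130, 86])
look   = total_seek(START, [913, 948, 1022, 1470, 1509, 1750, 1774, 130, 86])
c_scan = total_seek(START, [913, 948, 1022, 1470, 1509, 1750, 1774, 4999, 86, 130])
c_look = total_seek(START, [913, 948, 1022, 1470, 1509, 1750, 1774, 86, 130])
```

Running this reproduces 7081, 1745, 9769, 3319, 9813, and 3363 respectively.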

Answer to the question on the website

We make the following simplifying assumption: we ignore the fact that some files will cross a track boundary and thus require an additional seek. Given the long track size relative to the file size, such a seek is infrequent; furthermore, seeking to an adjacent track is much faster than the average seek. With this assumption we estimate the total compaction time as follows. Moving a file requires time 2(s + r + 5t), where s is the seek time, r is the rotational delay, and t is the transfer time for one block. s is given as 50 ms. r is derived as follows: 5000 rpm corresponds to 12 ms per rotation, and r is half that time, or 6 ms. t is derived as follows: one rotation takes 12 ms and there are 100 blocks per track, so the transfer time for one block is 0.12 ms. Substituting these values into the formula yields 2(50 + 6 + 5 × 0.12) = 113.2 ms, or roughly 113 ms per file. To move 1000 files we need about 113 seconds, or roughly 2 minutes.
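The arithmetic can be reproduced directly (the per-file cost comes out as 113.2 ms, which the text rounds to 113 ms):

```python
# Numbers from the answer above: 50 ms seek, 5000 rpm, 100 blocks per track,
# 5-block files, 1000 files to move; each move is one read plus one write.
seek_ms      = 50.0
rotation_ms  = 60_000 / 5000        # one rotation at 5000 rpm: 12 ms
rot_delay_ms = rotation_ms / 2      # average rotational delay: 6 ms
block_ms     = rotation_ms / 100    # transfer time for one block: 0.12 ms

per_file_ms = 2 * (seek_ms + rot_delay_ms + 5 * block_ms)   # 2(s + r + 5t)
total_s     = 1000 * per_file_ms / 1000                      # 1000 files, ms -> s
```

This gives about 113 seconds in total, i.e. roughly 2 minutes, as stated.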