STORAGE SYSTEMS. Operating Systems 2015 Spring by Euiseong Seo


Today's Topics
- HDDs (Hard Disk Drives)
- Disk scheduling policies
- Linux I/O schedulers

Secondary Storage
Anything that is outside of primary memory; it does not permit direct execution of instructions or data retrieval via load/store instructions.
Characteristics:
- It's large: 100 GB and more
- It's cheap: a 2 TB SATA2 disk costs 80,000
- It's persistent: data survives power loss
- It's slow: milliseconds to access

HDD Physical Structure
Electromechanical parts:
- Rotating disks
- Arm assembly
Electronics:
- Disk controller
- Buffer
- Host interface

HDD Parameters Example
Seagate Barracuda ST31000528AS (1 TB)
- 4 heads, 2 discs
- Max. recording density: 1413K BPI (bits/inch)
- Avg. track density: 236K TPI (tracks/inch)
- Avg. areal density: 329 Gbits/sq. inch
- Spindle speed: 7200 rpm (8.3 ms/rotation)
- Average seek time: < 8.5 ms (read), < 9.5 ms (write)
- Max. internal data transfer rate: 1695 Mbits/sec
- Max. I/O data transfer rate: 300 MB/sec (SATA-2)
- Max. sustained data transfer rate: 125 MB/sec
- Internal cache buffer: 32 MB
- Max. power-on to ready: < 10.0 sec
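The 8.3 ms/rotation figure on the slide follows directly from the spindle speed; a quick sanity check of the arithmetic (values taken from the slide):

```python
# Derive the per-rotation time and the average rotational latency
# from the slide's 7200 rpm spindle speed.
rpm = 7200
ms_per_rotation = 60.0 / rpm * 1000          # one full revolution, in ms
avg_rotational_latency = ms_per_rotation / 2 # on average, half a revolution

print(round(ms_per_rotation, 1))             # 8.3
print(round(avg_rotational_latency, 2))      # 4.17
```

Average rotational latency is half a revolution because a requested sector is, on average, half a track away when the seek completes.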

HDD Internal Dynamics
Our Boeing 747 will fly at an altitude of only a few mm, at a speed of approximately 65 mph, periodically landing and taking off. And still the surface of the runway, which consists of a few mm-thick layers, will stay intact for years.

Managing Disks: Disks and the OS
Disks are messy physical devices: errors, bad blocks, missed seeks, etc.
The job of the OS is to hide this mess from higher-level software:
- Low-level device drivers (initiate a disk read, etc.)
- Higher-level abstractions (files, databases, etc.)
The OS may provide different levels of disk access to different clients:
- Physical disk block (surface, cylinder, sector)
- Disk logical block (disk block #)
- Logical file (filename, block or record or byte #)

Managing Disks: Interacting with Disks
Specifying disk requests requires a lot of information: cylinder #, surface #, track #, sector #, transfer size, etc.
Older disks required the OS to specify all of this, so the OS needed to know all disk parameters.
Modern disks are more complicated: not all sectors are the same size, sectors are remapped, etc.
Current disks provide a higher-level interface (e.g., SCSI):
- The disk exports its data as a logical array of blocks [0..N-1]
- The disk maps logical blocks to cylinder/surface/track/sector
- Only the logical block # is needed to read or write
- As a result, physical parameters are hidden from the OS

Disk Performance
Performance depends on a number of steps:
- Seek: moving the disk arm to the correct cylinder. Depends on how fast the disk arm can move (improving very slowly).
- Rotation: waiting for the sector to rotate under the head. Depends on the rotation rate of the disk (increasing, but slowly).
- Transfer: transferring data from the surface into the disk controller and sending it back to the host. Depends on the density of bytes on the disk (increasing, and very quickly).
Disk scheduling: because seeks are so expensive, the OS tries to schedule the disk requests that are queued waiting for the disk.
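The three steps above add up to the service time of a single request. A rough model, using the seek time, spindle speed, and sustained transfer rate from the Barracuda slide (the 4 KB request size is an illustrative assumption, not from the slides):

```python
# Rough per-request service-time model: seek + rotational latency + transfer.
def access_time_ms(seek_ms, rpm, transfer_mb_s, request_kb):
    rotation_ms = 60.0 / rpm * 1000 / 2                   # avg: half a revolution
    transfer_ms = request_kb / 1024.0 / transfer_mb_s * 1000
    return seek_ms + rotation_ms + transfer_ms

# Slide values: 8.5 ms avg seek, 7200 rpm, 125 MB/s sustained; 4 KB is assumed.
t = access_time_ms(seek_ms=8.5, rpm=7200, transfer_mb_s=125, request_kb=4)
print(round(t, 2))   # ~12.7 ms
```

For a small request the transfer term is around 0.03 ms; the total is dominated by seek and rotation, which is exactly why scheduling to reduce seeks pays off.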

FCFS (= do nothing)
- Reasonable when load is low
- Long waiting times for long request queues

SSTF (Shortest Seek Time First)
- Minimizes arm movement (seek time)
- Maximizes request rate
- Unfairly favors middle blocks
- May cause starvation of some requests
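SSTF greedily picks whichever pending request is closest to the current head position. A minimal sketch, using an illustrative head position and request queue (not values from the slides):

```python
# Minimal SSTF sketch: repeatedly service the pending request whose cylinder
# is closest to the current head position, accumulating total seek distance.
def sstf(head, requests):
    pending, order, total_seek = list(requests), [], 0
    while pending:
        nxt = min(pending, key=lambda r: abs(r - head))  # nearest request wins
        total_seek += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
        order.append(nxt)
    return order, total_seek

order, seek = sstf(53, [98, 183, 37, 122, 14, 124, 65, 67])
print(order)  # [65, 67, 37, 14, 98, 122, 124, 183]
print(seek)   # 236
```

The greedy choice is what causes starvation: a request far from the head keeps losing to newly arriving nearby requests.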

SCAN (elevator algorithm)
- Service requests in one direction until done, then reverse
- Skews wait times non-uniformly

C-SCAN (Circular SCAN)
- Like SCAN, but only goes in one direction (like a typewriter carriage return)
- Uniform wait times

LOOK / C-LOOK
- Similar to SCAN/C-SCAN, but the arm goes only as far as the final request in each direction
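The difference between LOOK and C-LOOK is only what happens after the last request in the sweep direction: LOOK reverses, C-LOOK jumps back to the lowest pending request. A sketch with an illustrative head position and queue (assumed values, not from the slides):

```python
# LOOK: sweep upward through pending requests, then reverse direction.
def look(head, requests):
    up   = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down

# C-LOOK: sweep upward, then jump to the lowest request and sweep upward again.
def c_look(head, requests):
    up   = sorted(r for r in requests if r >= head)
    down = sorted(r for r in requests if r < head)
    return up + down

reqs = [98, 183, 37, 122, 14, 124, 65, 67]
print(look(53, reqs))    # [65, 67, 98, 122, 124, 183, 37, 14]
print(c_look(53, reqs))  # [65, 67, 98, 122, 124, 183, 14, 37]
```

Note the arm stops at 183 in both cases rather than travelling to the end of the disk, which is exactly what distinguishes LOOK/C-LOOK from SCAN/C-SCAN.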

Disk Scheduling Algorithm Selection
- SSTF is common and has a natural appeal
- SCAN and C-SCAN perform better for systems that place a heavy load on the disk
- Either SSTF or LOOK is a reasonable choice for the default algorithm
- Performance depends on the number and types of requests
- Requests for disk service can be influenced by the file allocation method
- In general, unless request queues build up, disk scheduling does not have much impact: important for servers, less so for PCs
- Modern disks often do the disk scheduling themselves: they know their layout better than the OS and can optimize better, but this ignores or undoes any scheduling done by the OS

Modern Disks: Intelligent Controllers
A small CPU plus many kilobytes of memory; they run a program written by the controller manufacturer to process and satisfy I/O requests from the host CPU.
Intelligent features:
- Read-ahead: the current track
- Caching: frequently used blocks
- Command queueing
- Request reordering: for seek and/or rotational optimality
- Request retry on hardware failure
- Bad block/track identification
- Bad block/track remapping: onto spare blocks and/or tracks

I/O Schedulers
The I/O scheduler's job:
- Improve overall disk throughput: merge requests to reduce the number of requests; reorder and sort requests to reduce disk seek time
- Prevent starvation: submit requests before their deadline; avoid reads being starved by writes
- Provide fairness among different processes
- Guarantee quality-of-service (QoS) requirements
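Merging is the simplest of these jobs: two requests for contiguous block ranges can be coalesced into one larger request before reaching the disk. A minimal sketch of the idea, assuming requests are (start_block, length) pairs already sorted by start block:

```python
# Sketch of request merging: coalesce block requests that are contiguous
# with their predecessor, reducing the number of operations sent to the disk.
def merge_requests(requests):
    merged = [list(requests[0])]
    for start, length in requests[1:]:
        last = merged[-1]
        if start == last[0] + last[1]:   # contiguous with the previous request
            last[1] += length            # grow it instead of queueing a new one
        else:
            merged.append([start, length])
    return [tuple(m) for m in merged]

print(merge_requests([(10, 4), (14, 2), (30, 8), (38, 1)]))
# [(10, 6), (30, 9)]
```

Real schedulers also merge a new request into the front of an existing one and must respect hardware limits on request size; this sketch omits both.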

Linux I/O Schedulers: Linus Elevator Scheduler
- Merges adjacent I/O requests
- Sorts I/O requests in ascending block order
- Writes starving reads: stops insertion-sorting if there is a sufficiently old request in the queue
- Trades fairness for improved global throughput
- Not really an elevator: puts requests with a low sector number at the top of the queue regardless of the current head position
- No real-time guarantees
- Was the default I/O scheduler in Linux 2.4

Linux I/O Schedulers: Deadline Scheduler
- Two standard read/write queues sorted by LBA, plus two read/write FIFO queues ordered by submission time
- Each FIFO queue is assigned an expiration value: 500 msec for reads, 5 sec for writes
- Normally, sends I/O requests from the head of the standard sorted queue
- If the request at the head of one of the FIFO queues expires, services that FIFO queue
- Emphasizes average read request response time
- No strict guarantees over request latency
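The dispatch decision above can be sketched in a few lines. This is a simplified illustration, not the kernel's implementation; the queue representations and request names are assumptions, while the expiration values follow the slide:

```python
# Sketch of the deadline scheduler's dispatch decision: normally serve the
# LBA-sorted queue, but if the oldest request in a FIFO queue has waited past
# its expiration, service that FIFO first.
READ_EXPIRE_MS, WRITE_EXPIRE_MS = 500, 5000  # values from the slide

def pick_next(sorted_queue, read_fifo, write_fifo, now_ms):
    # FIFO entries are (submit_time_ms, request); sorted_queue holds requests by LBA.
    if read_fifo and now_ms - read_fifo[0][0] > READ_EXPIRE_MS:
        return read_fifo[0][1]               # expired read preempts LBA order
    if write_fifo and now_ms - write_fifo[0][0] > WRITE_EXPIRE_MS:
        return write_fifo[0][1]              # expired write preempts LBA order
    return sorted_queue[0] if sorted_queue else None

# A read submitted at t=0 has expired by t=600 ms, so it preempts LBA order.
print(pick_next(["lba-120"], [(0, "read-lba-900")], [], now_ms=600))
# read-lba-900
```

The much shorter read expiration (500 ms vs. 5 s) is what biases the scheduler toward read response time, since a blocked reader usually has a process waiting on it while writes can complete asynchronously.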

Linux I/O Schedulers: Anticipatory Scheduler
- Addresses deceptive idleness: process A is about to issue its next request, but the scheduler hastily assumes that process A has no further requests
- Adds an anticipation heuristic: sometimes waits for the process whose request was last serviced
- Was the default I/O scheduler in the 2.6 kernel
- Dropped in Linux kernel 2.6.33 in favor of the CFQ scheduler

Linux I/O Schedulers: Noop Scheduler
- Minimal-overhead I/O scheduler; only performs merging
- For random-access devices such as RAM disks and solid state drives (SSDs)
- For storage with an intelligent HBA or an externally attached controller (RAID, TCQ drives)

Linux I/O Schedulers: CFQ (Complete Fair Queuing) Scheduler
- Assigns incoming I/O requests to specific queues based on the process originating the request
- Within each queue, requests are coalesced with adjacent requests and insertion-sorted
- Services the queues round-robin, plucking a configurable number of requests (by default, four) from each queue before continuing to the next
- Fairness at a per-process level
- Initially designed for multimedia applications, but works well across many workloads
- Subsumes anticipatory scheduling
- The default scheduler for Linux 2.6 from 2.6.18