Chapter 12: Mass-Storage Systems


Chapter 12: Mass-Storage Systems
Revised 2010. Tao Yang
Slides based on Silberschatz, Galvin and Gagne, Operating System Concepts, 2009.

Outline
- Overview of Mass Storage Structure
- Disk Structure
- Disk Attachment
- Disk Scheduling
- Disk Management
- Swap-Space Management
- RAID Structure
- Stable-Storage Implementation
- Tertiary Storage Devices
- Operating System Support
- Performance Issues

Objectives
- Describe the physical structure of secondary and tertiary storage devices and the resulting effects on the uses of the devices
- Explain the performance characteristics of mass-storage devices
- Discuss operating-system services provided for mass storage, including RAID and HSM

Mass Storage: Magnetic Disks
- Magnetic disks provide the bulk of secondary storage for modern computers
- Drives rotate at 60 to 200 times per second
- Transfer rate is the rate at which data flows between the drive and the computer
- Positioning time (random-access time) is the time to move the disk arm to the desired cylinder (seek time) plus the time for the desired sector to rotate under the disk head (rotational latency)
- A head crash results from the disk head making contact with the disk surface. That's bad

Moving-Head Disk Mechanism (figure)

Estimating the Transfer Rate
Suppose a disk drive spins at 7200 RPM, has a sector size of 512 bytes, and holds 160 sectors per track. What is the sustained transfer rate of this drive in megabytes per second?
- The disk spins 120 times per second
- Each spin transfers a track of 80 KB (160 x 0.5 KB)
- Sustained transfer rate is 120 x 80 KB = 9.6 MB/s
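The transfer-rate arithmetic above can be checked with a short Python sketch (variable names are illustrative):

```python
# Sustained transfer rate for the example drive: 7200 RPM,
# 512-byte sectors, 160 sectors per track.
rpm = 7200
sector_bytes = 512
sectors_per_track = 160

spins_per_sec = rpm / 60                            # 120 revolutions per second
track_kb = sectors_per_track * sector_bytes / 1024  # 80 KB per track
rate_kb_per_sec = spins_per_sec * track_kb          # 9600 KB/s
print(rate_kb_per_sec / 1000)                       # 9.6 MB/s (decimal MB, as on the slide)
```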

Considering Seek Time and Rotational Overhead
Same drive: 7200 RPM, 512-byte sectors, 160 sectors per track. Average seek time for the drive is 8 milliseconds. Estimate the number of random sector I/Os per second that can be done, and the effective transfer rate for random access of a sector.
- Average rotational cost is the time to travel half a track: (1/120) x 50% = 4.167 ms
- Time per sector access: 8 ms seek + 4.167 ms rotational latency + 0.052 ms transfer (one 0.5 KB sector at 9.6 MB/s) = 12.219 ms
- Number of random sector accesses per second: 1 / 0.012219 s = 81.8
- Effective transfer rate: 0.5 KB / 0.012219 s = 0.0409 MB/s

A Finer-Grained Example
Same drive: 7200 RPM, 512-byte sectors, 160 sectors per track. The drive has 7000 cylinders, 20 tracks per cylinder, a head switch time (from one platter to another) of 0.5 ms, and an adjacent-cylinder seek time of 2 ms. What is the total time and sustained transfer rate for a huge transfer of 100 cylinders of data?
- Bytes transferred: 100 cyl x 20 trk/cyl x 80 KB/trk = 160,000 KB
- Transfer time = rotation time + track switch time + cylinder switch time
- Rotation time: 2000 trks / 120 trks per second = 16.667 s
- Track switch time: 19 switches per cylinder x 100 cyl x 0.5 ms = 950 ms
- Cylinder switch time: 99 x 2 ms = 198 ms
- Total time: 16.667 + 0.950 + 0.198 = 17.815 s
- Thus the sustained transfer rate is 160 MB / 17.815 s = 8.98 MB/s
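Both calculations above can be reproduced in a few lines of Python (a sketch; variable names are illustrative):

```python
# Random single-sector I/O for the same drive (8 ms average seek).
seek_ms = 8.0
rot_latency_ms = (1 / 120) / 2 * 1000   # half a revolution: ~4.167 ms
transfer_ms = 0.5 / 9600 * 1000         # one 0.5 KB sector at 9600 KB/s: ~0.052 ms
per_io_ms = seek_ms + rot_latency_ms + transfer_ms   # ~12.219 ms
ios_per_sec = 1000 / per_io_ms                       # ~81.8 random sector I/Os per second
eff_mb_per_sec = 0.5 * ios_per_sec / 1000            # ~0.041 MB/s effective rate

# Huge sequential transfer of 100 cylinders (20 tracks per cylinder).
rotation_s = 100 * 20 / 120            # 2000 tracks at 120 tracks/s: ~16.667 s
head_switch_s = 19 * 100 * 0.0005      # 19 head switches per cylinder: 0.950 s
cyl_switch_s = 99 * 0.002              # 99 adjacent-cylinder seeks: 0.198 s
total_s = rotation_s + head_switch_s + cyl_switch_s  # ~17.815 s
rate_mb_per_sec = 160_000 / total_s / 1000           # ~8.98 MB/s sustained
```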
Solid State Drive (SSD); Magnetic Tape
- Relatively permanent and holds large quantities of data
- Random access is ~1000 times slower than disk
- Mainly used for backup, storage of infrequently used data, and as a transfer medium between systems
- Once data is under the head, transfer rates are comparable to disk
- Capacities typically range from 20 GB to 1.5 TB
- Common technologies are 4mm, 8mm, 19mm, LTO-2, and SDLT

Disk Attachment
- Drive attached to computer via an I/O bus:
  - USB
  - SATA (replacing ATA, PATA, EIDE)
  - SCSI: itself a bus, with up to 16 devices on one cable; a SCSI initiator requests an operation and SCSI targets perform the tasks
  - FC (Fibre Channel): a high-speed serial architecture
    - Can be a switched fabric with a 24-bit address space, the basis of storage area networks (SANs), in which many hosts attach to many storage units
    - Can be an arbitrated loop (FC-AL) of 126 devices
- Figures: SATA connectors; FC with SAN switch

Network-Attached Storage
- Network-attached storage (NAS) is storage made available over a network rather than over a local connection (such as a bus)
- NFS and CIFS are common protocols
- Implemented via remote procedure calls (RPCs) between host and storage
- The newer iSCSI protocol uses an IP network to carry the SCSI protocol

Storage Area Network
- Common in large storage environments (and becoming more common)
- Multiple hosts attached to multiple storage arrays: flexible

Disk Scheduling
- The operating system is responsible for using hardware efficiently; for the disk drives, this means having fast access time and high disk bandwidth
- Access time has two major components:
  - Seek time is the time for the disk arm to move the heads to the cylinder containing the desired sector
  - Rotational latency is the additional time waiting for the disk to rotate the desired sector under the disk head
- Minimize seek time: seek time is roughly proportional to seek distance
- Disk bandwidth is the total number of bytes transferred, divided by the total time between the first request for service and the completion of the last transfer

Disk Scheduling (Cont.)
- Several algorithms exist to schedule the servicing of disk I/O requests
- We illustrate them with a request queue over cylinders 0-199: 98, 183, 37, 122, 14, 124, 65, 67, with the head pointer at cylinder 53

FCFS (First Come, First Served)
- Requests are serviced in arrival order
- Illustration shows total head movement of 640 cylinders

SSTF (Shortest Seek Time First)
- Selects the request with the minimum seek time from the current head position
- SSTF scheduling is a form of SJF scheduling; it may cause starvation of some requests
- Illustration shows total head movement of 236 cylinders
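The 640- and 236-cylinder totals quoted for FCFS and SSTF can be verified with a small simulation (a sketch, not part of the slides; function names are illustrative):

```python
def fcfs(head, requests):
    """Service requests in arrival order; return total head movement."""
    total = 0
    for r in requests:
        total += abs(head - r)
        head = r
    return total

def sstf(head, requests):
    """Always pick the pending request closest to the current head position."""
    pending = list(requests)
    total = 0
    while pending:
        nearest = min(pending, key=lambda r: abs(head - r))
        total += abs(head - nearest)
        head = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs(53, queue))  # 640 cylinders
print(sstf(53, queue))  # 236 cylinders
```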

SCAN
- The disk arm starts at one end of the disk and moves toward the other end, servicing requests until it reaches the other end of the disk, where the head movement is reversed and servicing continues
- Sometimes called the elevator algorithm
- Illustration shows total head movement of 208 cylinders

C-SCAN (Circular SCAN)
- Provides a more uniform wait time than SCAN
- The head moves from one end of the disk to the other, servicing requests as it goes
- When it reaches the other end, however, it immediately returns to the beginning of the disk without servicing any requests on the return trip
- Treats the cylinders as a circular list that wraps around from the last cylinder to the first one

C-LOOK
- Version of C-SCAN
- The arm only goes as far as the last request in each direction, then reverses direction immediately, without first going all the way to the end of the disk
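The SCAN family can be simulated the same way. Note that a full SCAN sweep from cylinder 53 down to the edge (cylinder 0) and back up to 183 moves 236 cylinders; the 208-cylinder figure on the slide corresponds to reversing at the innermost pending request instead (LOOK-style movement). A sketch with illustrative function names:

```python
def scan_to_edge(head, requests):
    """SCAN moving toward cylinder 0 first: service everything below the
    head on the way down, touch the edge, then sweep up to the farthest
    request. Total movement is head + farthest request above the head."""
    above = [r for r in requests if r > head]
    return head + (max(above) if above else 0)

def reverse_at_last(head, requests):
    """Same downward sweep, but reverse at the innermost pending request
    rather than the disk edge (LOOK-style movement)."""
    below = [r for r in requests if r <= head]
    above = [r for r in requests if r > head]
    lo = min(below) if below else head
    return (head - lo) + ((max(above) - lo) if above else 0)

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(scan_to_edge(53, queue))     # 236: down 53 to 0, then up to 183
print(reverse_at_last(53, queue))  # 208: down 53 to 14, then up to 183
```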

Selecting a Disk-Scheduling Algorithm
- SSTF is common and has a natural appeal
- SCAN and C-SCAN perform better for systems that place a heavy load on the disk
- Performance depends on the number and types of requests
- Requests for disk service can be influenced by the file-allocation method
- The disk-scheduling algorithm should be written as a separate module of the operating system, allowing it to be replaced with a different algorithm if necessary
- Either SSTF or LOOK is a reasonable choice for the default algorithm

Disk Management
- Low-level formatting, or physical formatting: dividing a disk into sectors that the disk controller can read and write
- To use a disk to hold files, the operating system still needs to record its own data structures on the disk:
  - Partition the disk into one or more groups of cylinders
  - Logical formatting: making a file system
- To increase efficiency, most file systems group blocks into clusters
  - Disk I/O is done in blocks; file I/O is done in clusters
- The boot block initializes the system
  - The bootstrap is stored in ROM
  - Bootstrap loader program
- Methods such as sector sparing are used to handle bad blocks

Booting from a Disk in Windows 2000 (figure)

Swap-Space Management
- Swap space: virtual memory uses disk space as an extension of main memory
- Swap space can be carved out of the normal file system or, more commonly, placed in a separate disk partition
- Swap-space management:
  - 4.3BSD allocates swap space when a process starts; it holds the text segment (the program) and the data segment
  - The kernel uses swap maps to track swap-space use
  - Solaris 2 allocates swap space only when a page is forced out of physical memory, not when the virtual-memory page is first created

Data Structures for Swapping on Linux Systems (figure)

RAID (Redundant Array of Inexpensive Disks)
- Multiple disk drives provide reliability via redundancy; increases the mean time to failure
- RAID is arranged into six different levels
- Hardware RAID (with a RAID controller) vs. software RAID

RAID (Cont.)
- Several improvements in disk-use techniques involve the use of multiple disks working cooperatively
- Disk striping uses a group of disks as one storage unit
- RAID schemes improve performance and improve the reliability of the storage system by storing redundant data:
  - Mirroring or shadowing (RAID 1) keeps a duplicate of each disk
  - Striped mirrors (RAID 1+0) or mirrored stripes (RAID 0+1) provide high performance and high reliability
  - Block-interleaved parity (RAID 4, 5, 6) uses much less redundancy
- RAID within a storage array can still fail if the array fails, so automatic replication of the data between arrays is common
- Frequently, a small number of hot-spare disks are left unallocated; they automatically replace a failed disk and have data rebuilt onto them

RAID Level 0
- Nonredundant disk array: files are striped across disks, with no redundant info
- High read throughput
- Best write throughput (no redundant info to write)
- Any disk failure results in data loss

RAID Level 1
- Mirrored disks: data is written to two places
- On failure, just use the surviving disk
- On read, choose the faster copy to read
- Write performance is the same as a single drive; read performance is 2x better
- Expensive

RAID 5, 6 (figure); RAID Levels (figure)
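The block-interleaved parity idea behind RAID 4/5/6 can be illustrated with XOR in a few lines (a toy sketch, not a real RAID implementation; block contents are made up):

```python
# The parity block is the XOR of the data blocks in a stripe, so any
# single lost block can be rebuilt from the survivors plus parity.
data_blocks = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"]

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, byte in enumerate(blk):
            out[i] ^= byte
    return bytes(out)

parity = xor_blocks(data_blocks)

# Simulate losing disk 1 and rebuilding its block from the survivors + parity.
rebuilt = xor_blocks([data_blocks[0], data_blocks[2], parity])
assert rebuilt == data_blocks[1]
```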

RAID (0 + 1) and (1 + 0)

RAID Level 0+1
- Stripe data on one set of disks; then the mirror of the data blocks is striped on a second set

  Data disks:     Stripe 0  Stripe 1  Stripe 2  Stripe 3
                  Stripe 4  Stripe 5  Stripe 6  Stripe 7
                  Stripe 8  Stripe 9
  Mirror copies:  Stripe 0  Stripe 1  Stripe 2  Stripe 3
                  Stripe 4  Stripe 5  Stripe 6  Stripe 7
                  Stripe 8  Stripe 9

- A second RAID 0+1 figure shows the layout when there are multiple files with different sizes

RAID Level 1+0
- Pair mirrors first; then stripe on the set of paired mirrors
- Better reliability than RAID 0+1

  Pair 0 (mirrored): Stripe 0, 4, 8
  Pair 1 (mirrored): Stripe 1, 5, 9
  Pair 2 (mirrored): Stripe 2, 6, 10
  Pair 3 (mirrored): Stripe 3, 7, 11

Operating System Support
- Major OS jobs are to manage physical devices and to present a virtual-machine abstraction to applications
- For hard disks, the OS provides two abstractions:
  - Raw device: an array of data blocks
  - File system: the OS queues and schedules the interleaved requests from several applications

Speed
- Two aspects of speed in tertiary storage are bandwidth and latency
- Bandwidth is measured in bytes per second
  - Sustained bandwidth: average data rate during a large transfer (number of bytes / transfer time); the data rate when the data stream is actually flowing
  - Effective bandwidth: average over the entire I/O time, including seek() or locate() and cartridge switching; the drive's overall data rate
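A minimal sketch of the RAID 1+0 placement above, assuming round-robin striping across four mirrored pairs (the `place` function and its return shape are illustrative, not a real RAID API):

```python
# Stripe i goes to mirror pair (i mod npairs); both disks in the pair
# hold an identical copy at the same offset within the pair.
def place(stripe, npairs=4):
    pair = stripe % npairs
    depth = stripe // npairs
    # return the (disk, offset) location of each of the two copies
    return [(2 * pair, depth), (2 * pair + 1, depth)]

print(place(0))   # [(0, 0), (1, 0)]  -> both disks of pair 0
print(place(10))  # [(4, 2), (5, 2)]  -> both disks of pair 2
```

Losing one disk only degrades its pair; RAID 0+1, by contrast, loses a whole striped side, which is why 1+0 is described as more reliable on the slide.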

Reliability
- A fixed disk drive is likely to be more reliable than a removable disk or tape drive
- An optical cartridge is likely to be more reliable than a magnetic disk or tape
- A head crash in a fixed hard disk generally destroys the data, whereas the failure of a tape drive or optical disk drive often leaves the data cartridge unharmed

Price per Megabyte of DRAM, 1981 to 2004 (figure)
Price per Megabyte of Magnetic Hard Disk, 1981 to 2004 (figure)
Price per Megabyte of a Tape Drive, 1984 to 2000 (figure)

End of Chapter 12