CS 470 Operating Systems - Lecture 29. Friday, March 23.


Lecture 29 Reminder: Homework 7 is due on Monday at class time (the Exam 2 review); no late work accepted. Reminder: Exam 2 is on Wednesday. The Exam 2 review sheet is posted. Questions?

Outline Disk systems Disk scheduling Disk management RAID

Disk Drives A disk is viewed logically as a linear array of blocks. How is this array mapped onto a circular disk drive? A disk drive is one or more platters rotating on a spindle. Each side of a platter has a head that reads the data off that side of the platter. Each platter side has concentric rings called tracks. The vertical stack of the same track position on each platter is a cylinder. Each track/cylinder is divided into sectors.


Disk Drives Generally, block numbers are mapped with block 0 at cylinder/track 0 (the outermost track), head 0, sector 0. The next block is sector 1, and so on until the track is full; then the next block is head 1, sector 0, etc., until the cylinder is full; then the next block is cylinder/track 1, head 0, sector 0, and so forth. Conceptually, it is possible for OSs to map logical block numbers to <cyl, head, sector> addresses, but this no longer happens; the mapping is handled by the disk controller.
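The layout above can be sketched as a short calculation. This is an illustrative sketch, not how any particular controller works (the function name and fixed geometry are assumptions, and real controllers also remap defective sectors):

```python
def lba_to_chs(lba, heads_per_cylinder, sectors_per_track):
    """Map a logical block number to a (cylinder, head, sector) address,
    assuming the simple sequential layout described above (numbering from 0)."""
    sectors_per_cylinder = heads_per_cylinder * sectors_per_track
    cylinder = lba // sectors_per_cylinder          # fill a whole cylinder first
    head = (lba % sectors_per_cylinder) // sectors_per_track
    sector = lba % sectors_per_track
    return cylinder, head, sector

# With 16 heads and 63 sectors/track: block 0 -> (0, 0, 0),
# block 63 -> (0, 1, 0), block 16*63 -> (1, 0, 0).
```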

Disk Drives One reason the mapping is done in the disk controller is that disks have been getting larger. Density has increased in three dimensions: # sectors/track, # tracks/platter (narrower track separation), and # bits/area (vertical, perpendicular writes). Components of disk performance are: seek time (disk arm movement to the correct cylinder) and rotational delay, or latency (waiting for the correct sector to rotate under the head).

Disk Drives Taken together with seek time and rotational delay, data access time is determined by bandwidth (bytes transferred per unit time): buffer to disk and buffer to host, and by buffer size. Disk drives come in various speeds and sizes optimized for various applications.
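A back-of-the-envelope estimate combines these components. The numbers used below (9 ms average seek, 7200 RPM, 100 MB/s sustained transfer) are illustrative assumptions, not figures from the lecture:

```python
def access_time_ms(avg_seek_ms, rpm, request_bytes, transfer_mb_per_s):
    """Rough single-request access time: seek + rotational delay + transfer."""
    rotational_ms = 0.5 * 60_000 / rpm              # half a revolution on average
    transfer_ms = request_bytes / (transfer_mb_per_s * 1_000_000) * 1000
    return avg_seek_ms + rotational_ms + transfer_ms

# For a 4 KB request at 9 ms seek, 7200 RPM, 100 MB/s, the mechanical
# delays (seek + ~4.17 ms rotation) dwarf the ~0.04 ms transfer itself.
```

This is why disk scheduling focuses on minimizing head movement rather than transfer time.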

Disk Drives Using Western Digital as a prototypical line:

Disk drive        Application                                    Sizes        RPM       Cache  Buffer to host  Notes, street price
WD Caviar Blue    Standard, internal desktop                     80GB-1TB     7200      32MB   SATA 6Gb/s      1TB ~$105
WD Caviar Black   Maximum speed, internal desktop                500GB-2TB    7200      64MB   SATA 6Gb/s      2TB ~$210
WD Caviar Green   Maximum capacity, low power, internal desktop  320GB-3TB    variable  64MB   SATA 3Gb/s      3TB ~$200
WD VelociRaptor   Internal, enterprise server                    150-600GB    10000     32MB   SATA 6Gb/s      600GB ~$270
WD Scorpio Blue   Standard, internal laptop                      80GB-1TB     5200      8MB    SATA 3Gb/s      1TB ~$135
WD Scorpio Black  Maximum power, internal laptop                 160GB-750GB  7200      16MB   SATA 3Gb/s      750GB ~$165

Disk Drives

Disk drive                Application                       Sizes      RPM   Cache  Buffer to host  Notes, street price
WD AV-25                  24/7 surveillance                 160-500GB  5400  32MB   SATA 3Gb/s      MTBF 1 million hours, 500GB ~$90
WD My Book Essential      External desktop                  1-3TB      -     -      USB 3.0 5Gb/s   3TB ~$170
WD My Passport Essential  External portable                 500GB-2TB  -     -      USB 3.0 5Gb/s   1TB ~$130
WD My Book Live Duo       Networked Personal Cloud Storage  4-6TB      -     -      Ethernet        RAID 1/0 (2 drives in box), 6TB ~$480

Toshiba makes a 240GB, 4200 RPM, 8MB cache disk drive. Why would anyone want to buy this small, slow drive?

Disk Drives What is the limit on the capacity of a disk drive using conventional magnetic media? Typical drives are ~250Gb/sq.in. The Toshiba drive is ~344Gb/sq.in. The current limit is ~500Gb/sq.in. The theoretical limit is ~1Tb/sq.in.; with any smaller grains, heat will change the magnetization of the bits. Seagate is researching ways of packing more bits, theoretically up to 50Tb/sq.in.

Disk Scheduling As with all resources, we can extract the best performance if we schedule disk accesses. This is now mostly done in the disk controller, because: The original IDE interface could report a maximum geometry of 16383 cylinders x 16 heads x 63 sectors = 8.4GB. All disks report this now, and the EIDE interface was added to access the actual geometry using LBA (logical block addressing). Most disks map out defective sectors to spare ones. # sectors/track is not constant: there are about 40% more sectors on outer tracks than on inner tracks.
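The 8.4GB figure follows directly from the maximum reportable geometry, assuming the standard 512-byte sector:

```python
# Maximum geometry the original IDE interface could report.
cylinders, heads, sectors = 16383, 16, 63
capacity = cylinders * heads * sectors * 512   # bytes, at 512 bytes/sector
# 8,455,200,768 bytes, i.e. about 8.4 GB
```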

Disk Scheduling The OS generally just makes requests to the controller. The controller has a queue and a scheduling algorithm to choose which request is serviced next. The algorithms are straightforward and have properties similar to the other scheduling algorithms we have studied. OSs are now more concerned with disk management, i.e., how to make a disk usable to users.
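As an illustration of the kind of scheduling the controller does, here is a sketch comparing first-come-first-served (FCFS) with shortest-seek-time-first (SSTF) on a hypothetical queue of cylinder requests; the queue, the starting cylinder, and the function names are made up for illustration:

```python
def total_seek_distance(start, order):
    """Total cylinders of head movement to service requests in the given order."""
    total, pos = 0, start
    for cyl in order:
        total += abs(cyl - pos)
        pos = cyl
    return total

def sstf_order(start, requests):
    """Greedy order: always service the pending request closest to the head."""
    pending, pos, order = list(requests), start, []
    while pending:
        nxt = min(pending, key=lambda c: abs(c - pos))
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

queue = [98, 183, 37, 122, 14, 124, 65, 67]   # hypothetical request queue
# Starting at cylinder 53: FCFS moves the head 640 cylinders,
# SSTF only 236, but SSTF can starve requests far from the head.
```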

Formatting Low-level, physical formatting is done at the factory, but the OS can do this, too. File system formatting: Create a partition table that groups cylinders into a virtual disk, using tools like fdisk, sfdisk, or PartitionMagic. Create the file system; in Unix, mkfs allocates inodes (index blocks). Create swap space.

Boot Block How does a computer find the OS to boot? We cannot require that it be in a particular location on a particular disk, since we may choose between more than one OS. The bootstrap loader is a program that loads OSs. It could be stored in ROM, but then it would be hard to change. Usually a very small loader stored in ROM knows where the full loader program is in the boot block (aka the MBR, master boot record). Example loaders include grub, lilo, the Windows loader, ...

Boot Block Boot loaders know how to initialize the CPU and bring up the file system. They are configured to know where the OS program code resides; e.g., grub knows the kernel images are in the file system, usually in /boot. The boot loader loads the kernel into memory, then jumps to the first instruction of the OS. Then the OS takes over.

Bad Blocks All disks have bad areas. The factory initially maps out the blocks that would have been allocated to these areas. (Too many of them causes the disk to be rejected.) Some disk controllers are "smart" (e.g., SCSI) and automatically remap bad blocks when they are encountered; spare sectors are reserved on each cylinder for this. Other controllers rely on the OS to inform them; e.g., Windows marks FAT entries after a chkdsk scan.
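A "smart" controller's remapping can be sketched as a small translation table consulted on every access. The class and method names here are hypothetical; a real controller keeps such a table in reserved sectors on the disk itself:

```python
class BadBlockRemapper:
    """Illustrative sketch: redirect accesses to bad sectors to spares."""

    def __init__(self, first_spare):
        self.table = {}               # bad sector -> spare sector
        self.next_spare = first_spare # next unused spare sector number

    def mark_bad(self, sector):
        """Record a newly discovered bad sector, assigning it a spare."""
        if sector not in self.table:
            self.table[sector] = self.next_spare
            self.next_spare += 1

    def translate(self, sector):
        """Good sectors pass through unchanged; bad ones are redirected."""
        return self.table.get(sector, sector)
```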

Swap Space Usage of swap space depends on the memory management algorithm and the OS. Some store the entire program and its data in swap space for the duration of execution. Others store only the pages being used.

Swap Space Swap space issues include: file vs. disk partition (usually a raw partition with a dedicated manager, for speed); single vs. multiple spaces; location (if single, usually in the center of the disk; multiple spaces only if there are multiple disks); and size (running out means aborting processes, but more real memory means less need to swap).

RAID Disks have gotten physically smaller and much cheaper. We want to combine multiple disks into one system to increase read/write performance and to improve reliability. Initially, RAID stood for Redundant Arrays of Inexpensive Disks, focusing on providing large amounts of storage cheaply. Now the focus is on reliability, so RAID stands for Redundant Arrays of Independent Disks.

RAID Reliability is characterized by mean time to failure (MTF), e.g., 100,000 hours for a disk. For an array of 100 disks, the MTF until some disk fails is 100000/100 = 1000 hours = ~41.7 days(!). If only one copy of each piece of data is stored, each failure is costly. To solve this problem, introduce redundancy, i.e., store extra information that can be used to rebuild lost information.

RAID The simplest redundancy is to mirror a disk, i.e., create a duplicate. Every write goes to both disks, and a read can go to either one. The only way to lose data is if the second disk fails during the time it takes to repair the first disk. The MTF for the system depends on the MTF of the disks and the mean time to repair (MTR).

RAID If disk failures are independent and the MTR is 10 hours, the MTF (i.e., mean time to data loss) is 100000^2/(2*10) hours = 500x10^6 hours = ~57,000 years(!) Of course, many failures are not independent, e.g., power failures, natural disasters, manufacturing defects, etc.
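The two reliability figures above follow from simple formulas, assuming independent failures; this is a sketch of the arithmetic, with names chosen for illustration:

```python
def first_failure_hours(disk_mtf_hours, n_disks):
    """Expected time until the first of n independent, identical disks fails."""
    return disk_mtf_hours / n_disks

def mirrored_mtf_hours(disk_mtf_hours, mtr_hours):
    """Mean time to data loss for a mirrored pair: data is lost only if the
    second disk fails during the repair window, MTF_pair = MTF^2 / (2*MTR)."""
    return disk_mtf_hours ** 2 / (2 * mtr_hours)

# first_failure_hours(100000, 100) -> 1000 hours (~42 days)
# mirrored_mtf_hours(100000, 10)   -> 5e8 hours  (~57,000 years)
```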

RAID Performance is increased through parallelism. E.g., for a mirrored disk, the transfer rate is the same as a single disk, but the overall read rate doubles. Transfer rate can be improved by striping data across multiple disks. E.g., if we have 8 disks, we can write one bit of each byte on each disk simultaneously. The number of accesses per unit time is the same, but each access reads 8 times as much data. Striping in larger units, such as blocks, is common.
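Block striping amounts to a simple round-robin mapping from a logical block to a (disk, offset) pair; a minimal sketch (function name assumed):

```python
def stripe(block, num_disks):
    """Round-robin block striping: block b lives on disk b mod n,
    at offset b div n on that disk."""
    return block % num_disks, block // num_disks
```

Consecutive blocks land on different disks, so a large sequential read can be serviced by all disks in parallel.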

RAID Levels Striping does not help with reliability, and mirroring is expensive. Various schemes, called RAID levels, provide both with different tradeoffs. RAID 0 is simple striping. RAID 1 is simple mirroring. Higher levels are more complicated.

RAID 0+1 and RAID 1+0 Can also combine schemes. RAID 0+1 is a mirrored RAID 0 system (stripe sets that are then mirrored). RAID 1+0 is a RAID 1 system that is striped (mirrored pairs that are then striped).