CS61C : Machine Structures

inst.eecs.berkeley.edu/~cs61c CS61C : Machine Structures
Lecture 40: I/O : Disks, 2004-12-03
Lecturer PSOE Dan Garcia, www.cs.berkeley.edu/~ddgarcia

"I talk to robots": Japan's growing elderly population will be able to buy companionship in the form of a robot, programmed to provide just enough small talk to keep them from going senile. Snuggling Ifbot, dressed in an astronaut suit with a glowing face, has the conversation ability of a five-year-old, the language level needed to stimulate the brains of senior citizens. www.thematuremarket.com/seniorstrategic/dossier.php?numtxt=3567&idrb=5

Review
- Protocol suites allow heterogeneous networking; another form of the principle of abstraction
- Protocols must operate in the presence of failures
- Standardization is key for LANs and WANs
- Integrated circuits ("Moore's Law") are revolutionizing network switches as well as processors: a switch is just a specialized computer
- Trend from shared to switched networks to get faster links and scalable bandwidth

Magnetic Disks
Recall the five components of any computer: Processor (Control, the "brain"; Datapath, the "brawn"), Memory (passive: where programs and data live when running), and Devices (Input: keyboard, mouse; Output: display, printer; disk and network serve as both).
Purpose of disks:
- Long-term, nonvolatile, inexpensive storage for files
- Large, inexpensive, slow level in the memory hierarchy (discussed later)

Photo of Disk Head, Arm, Actuator
[Photo with labels: Spindle, Arm, Actuator, Head, Platters (12)]

Disk Device Terminology
[Diagram labels: Arm, Head, Sector, Inner Track, Outer Track, Actuator, Platter]
- Several platters, with information recorded magnetically on both surfaces (usually)
- Bits are recorded in tracks, which in turn are divided into sectors (e.g., 512 bytes)
- The actuator moves the head (on the end of the arm) over the desired track ("seek"), waits for the sector to rotate under the head, then reads or writes

Disk Device Performance
[Diagram labels: Platter, Outer Track, Inner Track, Sector, Head, Arm, Spindle, Controller, Actuator]
Disk Latency = Seek Time + Rotation Time + Transfer Time + Controller Overhead
- Seek Time? Depends on how many tracks the arm must move and the seek speed of the disk
- Rotation Time? Depends on how fast the disk rotates and how far the sector is from the head
- Transfer Time? Depends on the data rate (bandwidth) of the disk (bit density) and the size of the request
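That latency formula is just a sum; here is a minimal sketch in Python that plugs in plausible numbers (7200 RPM and 8.5 ms average seek roughly match the Barracuda 7200.7 quoted on a later slide; the 50 MB/s transfer rate, 4 KB request size, and 0.1 ms controller overhead are assumptions):

    # Back-of-the-envelope disk latency from the slide's formula.
    seek_ms = 8.5                          # average seek time (assumed)
    rpm = 7200                             # spindle speed (assumed)
    rotation_ms = 0.5 * 60_000 / rpm       # average rotational delay = half a revolution
    transfer_ms = 4096 / 50e6 * 1000       # 4 KB request at 50 MB/s (assumed)
    controller_ms = 0.1                    # controller overhead (assumed)

    latency_ms = seek_ms + rotation_ms + transfer_ms + controller_ms
    print(f"disk latency ~ {latency_ms:.2f} ms")   # ~12.85 ms

With numbers like these, seek and rotation dominate: almost 13 ms of latency, of which the actual 4 KB transfer is well under 0.1 ms.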

Data Rate: Inner vs. Outer Tracks
- To keep things simple, disks originally had the same number of sectors per track; since the outer track is longer, it had a lower bit density (bits per inch)
- Competition pushed makers to keep bits per inch (BPI) high for all tracks ("constant bit density"): more capacity per disk, achieved with more sectors per track toward the edge
- Since the disk spins at constant speed, outer tracks then have a faster data rate: bandwidth on the outer track is about 1.7X that of the inner track!
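Since the platter spins at a constant rate, the data rate is just sectors per track times bytes per sector times revolutions per second. A small sketch, where 7200 RPM and 512-byte sectors are taken from nearby slides but the per-track sector counts are made-up illustrative values:

    # Constant angular velocity: more sectors on the outer track => higher data rate.
    rpm = 7200
    revs_per_sec = rpm / 60
    bytes_per_sector = 512

    sectors_inner = 600     # hypothetical inner-track sector count
    sectors_outer = 1020    # hypothetical outer-track sector count (1.7x as many)

    rate_inner = sectors_inner * bytes_per_sector * revs_per_sec / 1e6   # MB/s
    rate_outer = sectors_outer * bytes_per_sector * revs_per_sec / 1e6   # MB/s
    print(f"inner ~ {rate_inner:.1f} MB/s, outer ~ {rate_outer:.1f} MB/s")
    # The ratio is 1020/600 = 1.7, matching the slide's outer-vs-inner bandwidth claim.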

Disk Performance Model / Trends
- Capacity: +100% / year (2X / 1.0 yrs); capacity has grown so fast that the number of platters has shrunk (some drives even use only 1 now!)
- Transfer rate (BW): +40% / year (2X / 2 yrs)
- Rotation + seek time: only -8% / year (halves in 10 yrs)
- Areal density: bits recorded along a track, in bits/inch (BPI), and number of tracks per surface, in tracks/inch (TPI); we care about bit density per unit area, bits/inch^2, called areal density = BPI x TPI
- MB/$: > 100% / year (2X / 1.0 yrs), thanks to fewer chips plus higher areal density
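To see why seek and rotation lag the other metrics, compound those annual rates over a decade. A quick sketch; the growth rates come straight from the bullets above, and the 10-year horizon is just for illustration:

    # Compound the slide's annual improvement rates over ten years.
    years = 10
    capacity_growth  = 2.0 ** years           # +100%/yr -> doubles every year
    bandwidth_growth = 1.4 ** years           # +40%/yr
    seek_remaining   = (1 - 0.08) ** years    # -8%/yr on rotation + seek time

    print(f"capacity:  {capacity_growth:.0f}x")             # ~1024x
    print(f"bandwidth: {bandwidth_growth:.0f}x")            # ~29x
    print(f"seek time: {seek_remaining:.2f}x of original")  # ~0.43x, i.e. roughly halved

So over ten years capacity grows about 1000x while access time merely halves, which is the widening capacity-versus-latency gap the slide is pointing at.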

Disk History (IBM)
Data density (Mibit/sq. in.) and capacity of unit shown:
- 1973: 1.7 Mibit/sq. in; 0.14 GiBytes
- 1979: 7.7 Mibit/sq. in; 2.3 GiBytes
source: New York Times, 2/23/98, page C3, "Makers of disk drives crowd even more data into even smaller spaces"

Disk History
- 1989: 63 Mibit/sq. in; 60 GiBytes
- 1997: 1450 Mibit/sq. in; 2.3 GiBytes
- 1997: 3090 Mibit/sq. in; 8.1 GiBytes
source: New York Times, 2/23/98, page C3, "Makers of disk drives crowd even more data into even smaller spaces"

Historical Perspective
- Form factor and capacity drive the market, more than performance
- 1970s: mainframes used 14" diameter disks
- 1980s: minicomputers and servers used 8" and 5.25" diameter disks
- Late 1980s / early 1990s: "pizzabox" PCs used 3.5-inch diameter disks; laptops and notebooks used 2.5-inch disks; palmtops didn't use disks, so 1.8-inch diameter disks didn't make it

State of the Art: Barracuda 7200.7 (2004)
- 200 GB, 3.5-inch disk
- 7200 RPM; Serial ATA
- 2 platters, 4 surfaces
- 8 watts (idle)
- 8.5 ms avg. seek
- 32 to 58 MB/s transfer rate
- $125 = $0.625 / GB
source: www.seagate.com

1-inch disk drive!
- 2004 Hitachi Microdrive: 1.7" x 1.4" x 0.2"; 4 GB, 3600 RPM, 4-7 MB/s, 12 ms seek
- Used in digital cameras, PalmPCs
- 2006 MicroDrive? 16 GB, 10 MB/s, assuming past trends continue

Use Arrays of Small Disks
- Katz and Patterson asked in 1987: can smaller disks be used to close the gap in performance between disks and CPUs?
- Conventional approach: 4 disk designs (3.5", 5.25", 10", 14"), spanning low end to high end
- Disk array: 1 disk design (3.5")

Replace a Small Number of Large Disks with a Large Number of Small Disks! (1988 disks)

              IBM 3390K      IBM 3.5" 0061    x70 array
Capacity      20 GBytes      320 MBytes       23 GBytes
Volume        97 cu. ft.     0.1 cu. ft.      11 cu. ft.     (9X better)
Power         3 KW           11 W             1 KW           (3X better)
Data Rate     15 MB/s        1.5 MB/s         120 MB/s       (8X better)
I/O Rate      600 I/Os/s     55 I/Os/s        3900 I/Os/s    (6X better)
MTTF          250 KHrs       50 KHrs          ??? Hrs
Cost          $250K          $2K              $150K

Disk arrays potentially offer high performance, high MB per cu. ft., and high MB per KW, but what about reliability?

Array Reliability
- Reliability: whether or not a component has failed; measured as Mean Time To Failure (MTTF)
- Reliability of N disks = Reliability of 1 disk / N (assuming failures are independent)
- 50,000 hours / 70 disks = 700-hour disk system MTTF: drops from 6 years to 1 month!
- Disk arrays are too unreliable to be useful!
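The arithmetic on the slide, spelled out as a minimal sketch (50,000 hours and 70 disks are the slide's own numbers; the independence assumption is what makes the simple division valid):

    # MTTF of an N-disk array under the slide's independent-failure model.
    mttf_one_disk_hours = 50_000
    n_disks = 70

    mttf_array_hours = mttf_one_disk_hours / n_disks
    print(f"single disk: {mttf_one_disk_hours / 8760:.1f} years")     # ~5.7 years
    print(f"70-disk array: {mttf_array_hours:.0f} hours "
          f"(~{mttf_array_hours / 720:.0f} month)")                   # ~714 hours, ~1 month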

Redundant Arrays of (Inexpensive) Disks
- Files are striped across multiple disks
- Redundancy yields high data availability. Availability: service is still provided to the user even if some components have failed
- Disks will still fail; contents are reconstructed from the data redundantly stored in the array
- Capacity penalty to store the redundant info; bandwidth penalty to update the redundant info

Berkeley History, RAID-I
- RAID-I (1989) consisted of a Sun 4/280 workstation with 128 MB of DRAM, four dual-string SCSI controllers, 28 5.25-inch SCSI disks, and specialized disk striping software
- Today RAID is a > $27 billion industry, and 80% of non-PC disks are sold in RAIDs

RAID 0: No redundancy = "AID"
- Assume we have 4 disks of data for this example, organized in blocks
- Large accesses are faster, since we transfer from several disks at once
This and the next 5 slides are from RAID.edu, http://www.acnc.com/04_01_00.html
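Striping simply maps consecutive logical blocks round-robin across the disks. A minimal sketch of that mapping; the 4-disk count comes from the slide, while the function name and block numbering are illustrative assumptions:

    # RAID 0 striping: logical block i lives on disk (i mod N), at offset (i div N).
    N_DISKS = 4

    def locate(logical_block: int) -> tuple[int, int]:
        """Return (disk index, block offset on that disk) for a logical block."""
        return logical_block % N_DISKS, logical_block // N_DISKS

    for b in range(8):
        disk, offset = locate(b)
        print(f"logical block {b} -> disk {disk}, offset {offset}")
    # A large access covering blocks 0..7 touches all 4 disks, so it runs in parallel.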

RAID 1: Mirror Data
- Each disk is fully duplicated onto its "mirror"; very high availability can be achieved
- Bandwidth is reduced on writes: 1 logical write = 2 physical writes
- Most expensive solution: 100% capacity overhead

RAID 3: Parity
- Parity is computed across the group to protect against hard disk failures, and is stored on a dedicated parity (P) disk
- Logically, the array behaves as a single high-capacity, high-transfer-rate disk
- 25% capacity cost for parity in this example vs. 100% for RAID 1 (5 disks vs. 8 disks)
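The parity block is just the bytewise XOR of the data blocks in the stripe, which is enough to rebuild any single lost disk from the survivors. A minimal sketch with made-up two-byte blocks:

    # Parity across 4 data disks: P = D0 xor D1 xor D2 xor D3, byte by byte.
    d = [b"\x11\x22", b"\x33\x44", b"\x55\x66", b"\x77\x88"]   # made-up data blocks
    parity = bytes(b0 ^ b1 ^ b2 ^ b3 for b0, b1, b2, b3 in zip(*d))

    # If disk 2 dies, XOR the surviving disks with the parity to reconstruct it.
    rebuilt = bytes(a ^ b ^ c ^ p for a, b, c, p in zip(d[0], d[1], d[3], parity))
    assert rebuilt == d[2]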

RAID 4: Parity Plus Small-Sized Accesses
- RAID 3 relies on the parity disk to discover errors on a read
- But every sector already has its own error detection field
- So rely on that error detection field to catch errors on reads, not on the parity disk
- This allows small, independent reads to different disks to proceed simultaneously

Inspiration for RAID 5
Small writes (a write to one disk):
- Option 1: read the other data disks, compute the new parity sum, and write it to the parity disk (accesses all disks)
- Option 2: since P holds the old sum, compare the old data to the new data and add the difference to P: 1 logical write = 2 physical reads + 2 physical writes, touching only 2 disks
The parity disk is the bottleneck for small writes: writes to A0 and B1 both have to write to the P disk.
[Layout: A0 B0 C0 D0 P / A1 B1 C1 D1 P]
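Option 2 is the usual read-modify-write trick: XOR the old data back out of the parity, then XOR the new data in. A minimal sketch that reuses the made-up blocks from the RAID 3 example above:

    # Small write, Option 2: 2 physical reads (old data, old parity) + 2 physical writes.
    old_data, new_data = b"\x55\x66", b"\xaa\xbb"   # block being overwritten (made up)
    old_parity = b"\x00\x88"                        # parity of the earlier example's stripe

    # new_P = old_P xor old_data xor new_data -- "add the difference to P"
    new_parity = bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))
    print(new_parity.hex())   # ff55; identical to recomputing parity over all data disks

Since both options must update the parity block, every small write funnels through the single P disk, which is exactly the bottleneck RAID 5 removes by rotating the parity.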

RAID 5: Rotated Parity, Faster Small Writes
- Independent writes are possible because of the interleaved parity
- Example: writes to A0 and B1 use disks 0, 1, 4, and 5, so they can proceed in parallel
- Still, 1 small write = 4 physical disk accesses
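The "rotation" just means the parity block lands on a different disk for each stripe. A sketch of one common placement rule; the formula below is an assumed convention for illustration, not necessarily the exact layout drawn on the slide:

    # Rotated parity: stripe s keeps its parity block on a different disk each time.
    N_DISKS = 5

    def parity_disk(stripe: int) -> int:
        # Assumed convention: walk the parity block backwards across the disks.
        return (N_DISKS - 1 - stripe) % N_DISKS

    for s in range(5):
        print(f"stripe {s}: parity on disk {parity_disk(s)}, data on the other 4")
    # Small writes to blocks in different stripes usually update different parity
    # disks, so they no longer serialize on a single dedicated P disk.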

Peer Instruction
1. RAID 1 (mirror) and RAID 5 (rotated parity) help with performance and availability
2. RAID 1 has higher cost than RAID 5
3. Small writes on RAID 5 are slower than on RAID 1
Answer choices (T/F for statements 1-3): 1: FFF, 2: FFT, 3: FTF, 4: FTT, 5: TFF, 6: TFT, 7: TTF, 8: TTT

And in conclusion...
- Magnetic disks continue their rapid advance: 60%/yr capacity, 40%/yr bandwidth, slow improvement on seek and rotation; MB/$ improving ~100%/yr?
- Designs are built to fit the high-volume form factors
- RAID: higher performance with more disk arms per dollar; adds an availability option for the cost of a small number of extra disks
- Today RAID is a > $27 billion industry, and 80% of non-PC disks are sold in RAIDs; it started at Cal