RAID (Redundant Array of Inexpensive Disks), I/O Benchmarks, ABCs of UNIX File Systems, and A Study Comparing UNIX File System Performance

1  Lecture Topics
   - Magnetic Disk Characteristics
   - I/O Connection Structure
   - Types of Buses
   - Cache & I/O
   - I/O Performance Metrics
   - I/O System Modeling Using Queuing Theory
   - Designing an I/O System
   - RAID (Redundant Array of Inexpensive Disks)
   - I/O Benchmarks
   - ABCs of UNIX File Systems
   - A Study Comparing UNIX File System Performance

2  RAID (Redundant Array of Inexpensive Disks)
   The term RAID was coined in a 1988 paper by Patterson, Gibson, and Katz of the University of California at Berkeley. The authors proposed that large arrays of small, inexpensive disks (usually SCSI; IDE support had just started) could replace the large, expensive disks used on mainframes and minicomputers. In such arrays, files are "striped" and/or mirrored across multiple drives. Their analysis showed that the cost per megabyte could be substantially reduced, while both performance (throughput) and fault tolerance could be increased.
   The catch: array reliability without any redundancy:

       Reliability of N disks = Reliability of 1 disk / N
       e.g., 50,000 hours / 70 disks = ~700 hours

   The disk system MTTF drops from about 6 years to about 1 month! Arrays without redundancy are too unreliable to be useful.
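As a sanity check on the slide's example, a minimal sketch in Python (the 50,000-hour disk MTTF and the 70-disk array are the slide's figures):

    def array_mttf(disk_mttf_hours: float, n_disks: int) -> float:
        """MTTF of an unprotected array: any single disk failure loses data,
        so with independent failures the array MTTF is the disk MTTF / N."""
        return disk_mttf_hours / n_disks

    mttf = array_mttf(50_000, 70)
    print(f"{mttf:.0f} hours, about {mttf / (24 * 30):.1f} months")  # ~714 hours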

3  Manufacturing Advantages of Disk Arrays
   Conventional disk product families span four disk form factors, from low end to high end; a disk array is built from a single form factor (3.5").

4  RAID Subsystem Organization
   Host -> host adapter -> array controller -> single-board disk controllers, one per drive (often piggy-backed in small-form-factor devices).
   The array controller manages the interface to the host, DMA control, buffering, and parity logic; the single-board disk controllers handle physical device control.
   Striping software is off-loaded from the host to the array controller: no application modifications and no reduction of host performance.

5  Basic RAID Organizations
   - Non-Redundant (RAID Level 0)
   - Mirrored (RAID Level 1)
   - Memory-Style ECC (RAID Level 2)
   - Bit-Interleaved Parity (RAID Level 3)
   - Block-Interleaved Parity (RAID Level 4)
   - Block-Interleaved Distributed-Parity (RAID Level 5)
   - P+Q Redundancy (RAID Level 6)
   - Striped Mirrors (RAID Level 10)

6  Non-Redundant (RAID Level 0)
   RAID 0 simply stripes data across all drives (minimum 2 drives) to increase data throughput, but provides no fault protection: sequential blocks of data are written across multiple disks in stripes. The size of a data block, known as the "stripe width", varies with the implementation but is always at least as large as a disk's sector size.
   This scheme offers the best write performance, since it never needs to update redundant information. It does not have the best read performance: redundancy schemes that duplicate data, such as mirroring, can perform better on reads by selectively scheduling each request on the disk with the shortest expected seek and rotational delays.
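A minimal sketch of the round-robin striping address calculation (the function name, block-granularity stripe unit, and 4-disk example are illustrative assumptions, not from the slide):

    def raid0_map(logical_block: int, n_disks: int, stripe_unit: int = 1):
        """Map a logical block to (disk index, block offset on that disk)
        under simple round-robin striping."""
        unit = logical_block // stripe_unit            # which stripe unit overall
        offset_in_unit = logical_block % stripe_unit
        disk = unit % n_disks                          # units rotate across disks
        block_on_disk = (unit // n_disks) * stripe_unit + offset_in_unit
        return disk, block_on_disk

    # Blocks 0..7 on 4 disks with a 1-block stripe unit land on disks 0,1,2,3,0,1,2,3:
    print([raid0_map(b, 4) for b in range(8)])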

7  Optimal Size of Data Striping Unit
   (Applies to RAID Levels 0, 5, 6, 10.)
   Lee and Katz [1991] use an analytic model of non-redundant disk arrays to derive an equation for the optimal size of the data striping unit. They show that the optimal striping unit is:

       sqrt( P * X * (L - 1) * Z / N )

   where P is the average disk positioning time, X is the average disk transfer rate, L is the concurrency, Z is the request size, and N is the array size in disks.
   The equation also predicts that the optimal striping unit depends only on the relative rates at which a disk positions and transfers data, i.e., on the product PX, rather than on P or X individually. Lee and Katz show that the optimal striping unit depends on request size; Chen and Patterson show that this dependency can be ignored without significantly affecting performance.
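Plugging illustrative numbers into the reconstructed formula (the parameter values below are assumptions for demonstration, not from the lecture):

    import math

    def optimal_striping_unit(P: float, X: float, L: int, Z: float, N: int) -> float:
        """Lee & Katz: optimal striping unit = sqrt(P * X * (L - 1) * Z / N).
        P: avg positioning time (s), X: avg transfer rate (bytes/s),
        L: concurrency, Z: request size (bytes), N: disks in the array."""
        return math.sqrt(P * X * (L - 1) * Z / N)

    # e.g., 12 ms positioning, 5 MB/s transfer, 8 concurrent 64 KB requests, 16 disks
    unit = optimal_striping_unit(0.012, 5e6, 8, 64 * 1024, 16)
    print(f"optimal striping unit ~= {unit / 1024:.0f} KB")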

8  Mirrored (RAID Level 1)
   Utilizes mirroring or shadowing of data, using twice as many disks as a non-redundant disk array. Whenever data is written to a disk, the same data is also written to a redundant disk, so that there are always two copies of the information.
   When data is read, it can be retrieved from the disk with the shorter queuing, seek, and rotational delays. If a disk fails, the other copy is used to service requests.
   Mirroring is frequently used in database applications where availability and transaction rate are more important than storage efficiency.

9  Memory-Style ECC (RAID Level 2)
   RAID 2 performs data striping with a block size of one bit or byte, so all disks in the array must be read to perform any read operation. A RAID 2 system would normally have as many data disks as the word size of the computer, typically 32.
   In addition, RAID 2 requires extra disks to store an error-correcting code for redundancy. With 32 data disks, a RAID 2 system would require 7 additional disks for a Hamming-code ECC. Such an array of 39 disks was the subject of a U.S. patent granted to Unisys Corporation in 1988, but no commercial product was ever released.
   For a number of reasons, including the fact that modern disk drives contain their own internal ECC, RAID 2 is not a practical disk array scheme.
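The 7-disk figure is consistent with a SEC-DED Hamming code: 6 check disks satisfy the Hamming bound for 32 data disks, and one more parity disk adds double-error detection. A small sketch of that arithmetic (my interpretation of the slide's numbers):

    def hamming_check_bits(data_bits: int) -> int:
        """Smallest r with 2**r >= data_bits + r + 1
        (single-error-correcting Hamming code)."""
        r = 0
        while 2 ** r < data_bits + r + 1:
            r += 1
        return r

    m = 32
    r = hamming_check_bits(m)      # 6 check disks for single-error correction
    print(r + 1, m + r + 1)        # SEC-DED: 7 check disks, 39 disks total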

10  Bit-Interleaved Parity (RAID Level 3)
    One can improve upon memory-style ECC disk arrays (RAID 2) by noting that, unlike memory component failures, disk controllers can easily identify which disk has failed. Thus, a single parity disk rather than a set of ECC disks can be used to recover lost information.
    As with RAID 2, RAID 3 must read all data disks for every read operation. This requires synchronized disk spindles for optimal performance, and it works best on a single-tasking system with large sequential data requirements. An example might be a system used to perform video editing, where huge video files must be read sequentially.

11  Block-Interleaved Parity (RAID Level 4)
    RAID 4 is similar to RAID 3, except that blocks of data rather than bits/bytes are striped across the disks. Read requests smaller than the striping unit access only a single data disk.
    Write requests must update the requested data blocks and must also compute and update the parity block. For large writes that touch blocks on all disks, parity is easily computed by exclusive-ORing the new data for each disk. For small write requests that update only one data disk, parity is computed by noting how the new data differs from the old data and applying those differences to the parity block.
    This can be an important performance improvement for small or random file accesses (like a typical database application) if the application record size can be matched to the RAID 4 block size.

12  Block-Interleaved Distributed-Parity (RAID Level 5)
    The block-interleaved distributed-parity disk array eliminates the parity-disk bottleneck present in RAID 4 by distributing the parity uniformly over all of the disks. An additional, frequently overlooked advantage of distributing the parity is that it also distributes the data over all of the disks rather than over all but one.
    RAID 5 has the best small-read, large-read, and large-write performance of any redundant disk array. Small write requests, however, are somewhat inefficient compared with redundancy schemes such as mirroring, due to the need to perform read-modify-write operations to update parity.

13  Problems of Disk Arrays: Small Writes
    RAID-5 small write algorithm: 1 logical write = 2 physical reads + 2 physical writes.
    To replace block D0 with new data D0' in a stripe (D0 D1 D2 D3 P):
      1. Read old data D0
      2. Read old parity P
      3. Write new data D0'
      4. Write new parity P' = D0' XOR D0 XOR P
    The new parity is the old parity with the difference between the new and old data XORed in; the other data disks (D1, D2, D3) are not touched.
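A minimal sketch of the step-4 parity update on raw byte buffers (the function names and the toy 4-disk stripe are illustrative):

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        """Bytewise XOR of two equal-length buffers."""
        return bytes(x ^ y for x, y in zip(a, b))

    def raid5_small_write_parity(old_data: bytes, new_data: bytes,
                                 old_parity: bytes) -> bytes:
        """New parity = old parity XOR old data XOR new data.
        Only the updated disk and the parity disk are touched."""
        return xor_bytes(old_parity, xor_bytes(old_data, new_data))

    # Sanity check on a 4-disk stripe plus parity:
    d = [b"\x01", b"\x02", b"\x04", b"\x08"]
    p = xor_bytes(xor_bytes(d[0], d[1]), xor_bytes(d[2], d[3]))  # full-stripe parity
    new_d0 = b"\x10"
    p_new = raid5_small_write_parity(d[0], new_d0, p)
    assert p_new == xor_bytes(xor_bytes(new_d0, d[1]), xor_bytes(d[2], d[3]))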

14  P+Q Redundancy (RAID Level 6)
    An enhanced RAID 5 with stronger error-correcting codes. One such scheme, called P+Q redundancy, uses Reed-Solomon codes in addition to parity to protect against up to two disk failures, using the bare minimum of two redundant disks.
    P+Q redundant disk arrays are structurally very similar to block-interleaved distributed-parity disk arrays (RAID 5) and operate in much the same manner. In particular, P+Q redundant disk arrays also perform small write operations using a read-modify-write procedure, except that instead of four disk accesses per write request they require six, due to the need to update both the P and Q information.
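For concreteness, a sketch of one common way to compute the P and Q blocks over GF(2^8), as in typical RAID-6 style implementations (the field polynomial 0x11D and the generator 2 are conventional choices assumed here, not taken from the slide):

    def gf_mul(a: int, b: int) -> int:
        """Multiply in GF(2^8) modulo the polynomial x^8+x^4+x^3+x^2+1 (0x11D)."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a & 0x100:
                a ^= 0x11D
            b >>= 1
        return r

    def pq_parity(data: list[int]) -> tuple[int, int]:
        """P = XOR of the data bytes; Q = sum over i of g^i * D_i, with g = 2.
        Two independent syndromes allow recovery from any two disk failures."""
        p = q = 0
        g = 1                      # g^0
        for d in data:
            p ^= d
            q ^= gf_mul(g, d)
            g = gf_mul(g, 2)       # advance to the next power of the generator
        return p, q

    print(pq_parity([0x01, 0x02, 0x04, 0x08]))   # one byte from each of 4 disks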

15  RAID 5/6: High I/O Rate Parity
    - A logical write becomes four physical I/Os
    - Independent writes are possible because of the interleaved parity
    - Reed-Solomon codes ("Q") give protection during reconstruction
    - Targeted for mixed applications
    Example RAID 5 layout over 5 disk columns (one stripe per row; logical disk addresses increase left to right, top to bottom; the parity block P rotates across the columns):

        D0   D1   D2   D3   P
        D4   D5   D6   P    D7
        D8   D9   P    D10  D11
        D12  P    D13  D14  D15
        P    D16  D17  D18  D19
        D20  D21  D22  D23  P
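A sketch of the block-to-disk mapping implied by the layout above, with the parity column starting at the rightmost disk and rotating left by one each stripe (helper names are mine):

    def raid5_map(logical_block: int, n_disks: int):
        """Map a logical block to (stripe row, data column) and report the
        parity column for that row."""
        data_per_row = n_disks - 1
        row = logical_block // data_per_row
        parity_col = (n_disks - 1) - (row % n_disks)   # P column for this row
        col = logical_block % data_per_row
        if col >= parity_col:                          # skip over the parity column
            col += 1
        return row, col, parity_col

    # Reproduces the first two rows of the 5-disk layout above:
    for b in range(8):
        print(f"D{b}:", raid5_map(b, 5))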

16  RAID 10 (Striped Mirrors)
    RAID 10 (also known as RAID 1+0) was not mentioned in the original 1988 article that defined RAID 1 through RAID 5. The term is now used to mean the combination of RAID 0 (striping) and RAID 1 (mirroring): disks are mirrored in pairs for redundancy and improved performance, then data is striped across the mirrored pairs for maximum performance. In the four-disk example on the original slide, Disks 0 & 2 and Disks 1 & 3 are mirrored pairs.
    Obviously, RAID 10 uses more disk space than RAID 5 to provide redundant data. However, it also provides a performance advantage by reading from all disks in parallel while eliminating the write penalty of RAID 5.

17-20  RAID Levels Comparison: Throughput Per Dollar Relative to RAID Level 0
       (Four comparison charts, not reproduced in this transcription.)

21  RAID Reliability
    Redundancy in disk arrays is motivated by the need to overcome disk failures. When only independent disk failures are considered, a simple parity scheme works admirably. Patterson, Gibson, and Katz derive the mean time to failure of a RAID level 5 to be:

        MTTF(array) = MTTF(disk)^2 / (N * (G - 1) * MTTR(disk))

    where MTTF(disk) is the mean time to failure of a single disk, MTTR(disk) is the mean time to repair of a single disk, N is the total number of disks in the disk array, and G is the parity group size.
    For illustration, assume 100 disks, each with a mean time to failure (MTTF) of 200,000 hours and a mean time to repair of one hour. If we organize these 100 disks into parity groups of average size 16, then the mean time to failure of the system is an astounding 3,000 years! Mean times to failure of this magnitude lower the chances of failure over any given period of time.
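Checking the slide's example against the reconstructed formula:

    def raid5_mttf_hours(mttf_disk: float, mttr_disk: float,
                         n_disks: int, group_size: int) -> float:
        """MTTF(disk)^2 / (N * (G - 1) * MTTR(disk)): data is lost only if a
        second disk in the same parity group fails before the first is repaired."""
        return mttf_disk ** 2 / (n_disks * (group_size - 1) * mttr_disk)

    hours = raid5_mttf_hours(200_000, 1, 100, 16)
    print(f"{hours:.3g} hours ~= {hours / (24 * 365):.0f} years")  # ~3,000 years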

22  System Availability: Orthogonal RAIDs
    (Original slide: a diagram of an array controller feeding several string controllers, each driving a string of disks.)
    - Data recovery group: the unit of data redundancy
    - Redundant support components: power supplies, controller, cables
    Arranging the recovery groups orthogonally to the strings means the failure of one support component (a string controller, power supply, or cable) affects at most one disk in each recovery group.

23  System-Level Availability
    Fully dual-redundant configuration: each host connects through duplicated I/O controllers to duplicated array controllers, which share the recovery groups.
    Goal: no single point of failure. With duplicated paths, higher performance can be obtained when there are no failures.

24  I/O Benchmarks
    Processor benchmarks classically aim at response time for a fixed-size problem. I/O benchmarks typically measure throughput, possibly with an upper limit on response times (or on the 90th percentile of response times). Traditional I/O benchmarks fix the problem size. Examples:

        Benchmark   Size of Data   % Time I/O   Year
        I/OStones   1 MB           26%          1990
        Andrew      4.5 MB         4%           1988

    Problems: not much I/O time in the benchmarks, limited problem size, and not really measuring disk (or even main memory).

25  The Ideal I/O Benchmark
    - An I/O benchmark should help system designers and users understand why the system performs as it does.
    - The performance of an I/O benchmark should be limited by the I/O devices, to maintain the focus of measuring and understanding I/O systems.
    - The ideal I/O benchmark should scale gracefully over a wide range of current and future machines; otherwise I/O benchmarks quickly become obsolete as machines evolve.
    - A good I/O benchmark should allow fair comparisons across machines.
    - The ideal I/O benchmark would be relevant to a wide range of applications.
    - For results to be meaningful, benchmarks must be tightly specified. Results should be reproducible by general users; optimizations that are allowed and disallowed must be explicitly stated.

26  I/O Benchmarks Comparison
    (Comparison table not reproduced in this transcription.)

27  Self-Scaling I/O Benchmarks
    An alternative to traditional I/O benchmarks: self-scaling I/O benchmarks automatically and dynamically increase aspects of the workload to match the characteristics of the system being measured, so they cover a wide range of current and future applications.
    Types of self-scaling benchmarks:
    - Transaction processing (interested in IOPS, not bandwidth): TPC-A, TPC-B, TPC-C
    - NFS: SPEC SFS/LADDIS (average response time and throughput)
    - Unix I/O (performance of file systems): Willy

28  I/O Benchmarks: Transaction Processing
    Transaction processing (TP, or on-line TP = OLTP) makes changes to a large body of shared information from many terminals, with the TP system guaranteeing proper behavior on a failure: if a bank's computer fails when a customer withdraws money, the TP system guarantees that the account is debited if the customer received the money, and that the account is unchanged if the money was not received. Airline reservation systems and banks use TP; atomic transactions make this work.
    Each transaction takes 2 to 10 disk I/Os and 5,000 to 20,000 CPU instructions per disk I/O, depending on the efficiency of the TP software and on how well disk accesses are avoided by keeping information in main memory.
    The classic metric is Transactions Per Second (TPS), but under what workload, and with the machine configured how?

29  I/O Benchmarks: TPC-C Complex OLTP
    - Models a wholesale supplier managing orders
    - Order-entry conceptual model for the benchmark
    - Workload = 5 transaction types
    - Users and database scale linearly with throughput
    - Defines a full-screen end-user interface
    - Metrics: new-order rate (tpmC) and price/performance ($/tpmC)
    - Approved July 1992

30  SPEC SFS/LADDIS Predecessor: NFSstones
    NFSstones: a synthetic benchmark that generates a series of NFS requests from a single client to test the server; the mix of reads, writes, and commands, and the file sizes, were taken from other studies.
    Problems: one client could not always stress the server; files and block sizes were not realistic; clients had to run SunOS.

31  SPEC SFS/LADDIS (1993)
    An attempt by NFS companies (Legato, Auspex, Data General, DEC, Interphase, Sun) to agree on a standard benchmark. Like NFSstones, but:
    - Run on multiple clients & networks (to prevent bottlenecks)
    - Same caching policy in all clients
    - Reads: 85% full block & 15% partial blocks
    - Writes: 50% full block & 50% partial blocks
    - Average response time: 50 ms
    - Scaling: for every 100 NFS ops/sec, increase capacity by 1 GB
    Results: a plot of server load (throughput) vs. response time, and the number of users. Assumes 1 user => 10 NFS ops/sec.
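A small sketch applying the two scaling rules above to size a test (the function and variable names are mine):

    def sfs_requirements(target_ops_per_sec: float) -> tuple[float, float]:
        """SPEC SFS/LADDIS scaling rules: 1 GB of capacity per 100 NFS ops/sec,
        and 1 simulated user per 10 NFS ops/sec."""
        capacity_gb = target_ops_per_sec / 100
        users = target_ops_per_sec / 10
        return capacity_gb, users

    gb, users = sfs_requirements(1_000)   # sizing for a 1,000 ops/sec server
    print(f"{gb:.0f} GB of file data, {users:.0f} simulated users")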

32  Unix I/O Benchmarks: Willy
    A UNIX file system benchmark that gives insight into I/O system behavior (Chen and Patterson, 1993). It is self-scaling, automatically exploring the system size. It examines five parameters:
    - Unique bytes touched: the data size; locality via LRU; gives the file cache size
    - Percentage of reads: %writes = 1 - %reads; typically 50% (100% reads gives peak throughput)
    - Average I/O request size: Bernoulli, C = 1
    - Percentage of sequential requests: typically 50%
    - Number of processes: the concurrency of the workload (number of processes issuing I/O requests)
    The benchmark fixes four parameters while varying one, and searches the space to find high throughput.
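A heavily simplified, single-process sketch of a Willy-style workload (my illustration of the parameter set, not the actual benchmark; the concurrency parameter is omitted):

    import os, random, time

    def run_workload(path, unique_bytes, pct_reads, req_size, pct_seq,
                     n_requests=1000):
        """Issue a read/write mix over `unique_bytes` of file data, with a given
        fraction of sequential requests; return throughput in MB/s."""
        with open(path, "wb") as f:                 # pre-create the data set
            f.write(os.urandom(unique_bytes))
        buf = os.urandom(req_size)
        pos = 0
        start = time.time()
        with open(path, "r+b") as f:
            for _ in range(n_requests):
                if random.random() >= pct_seq:      # non-sequential: jump randomly
                    pos = random.randrange(0, unique_bytes - req_size)
                f.seek(pos)
                if random.random() < pct_reads:
                    f.read(req_size)
                else:
                    f.write(buf)
                pos = (pos + req_size) % (unique_bytes - req_size)
        return n_requests * req_size / 1e6 / (time.time() - start)

    print(run_workload("willy.dat", 16 * 2**20, 0.5, 32 * 1024, 0.5), "MB/s")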

33  OS Policies and I/O Performance
    The performance potential is determined by the hardware: CPU, disk, bus, and memory system. Operating system policies determine how much of that potential is achieved. The key OS policies:
    1) How much main memory is allocated for the file cache?
    2) Can that boundary change dynamically?
    3) What is the write policy for the disk cache: write through with a write buffer, or write back?

34  ABCs of UNIX File Systems
    Key issues: file vs. raw I/O, file cache size policy, write policy, local disk vs. server disk.
    File vs. raw I/O:
    - File system access is the norm; standard policies apply.
    - Raw: an alternate I/O path that bypasses the file system, used by databases.
    File cache size policy:
    - Files are cached in main memory rather than being accessed from disk.
    - With older UNIX, the percentage of main memory dedicated to the file cache is fixed at system generation (e.g., 10%).
    - With newer UNIX, the percentage of main memory for the file cache varies depending on the amount of file I/O (e.g., up to 80%).

35  ABCs of UNIX File Systems: Write Policy
    File storage should be permanent: either write immediately, or flush the file cache after a fixed period (e.g., 30 seconds). The two policies are write through with a write buffer, and write back; the write buffer is often confused with write back.
    - With write through with a write buffer, all writes go to disk, but the writes are asynchronous, so the processor doesn't have to wait for the disk write.
    - Write back combines multiple writes to the same page, and hence can be called "write cancelling".
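A minimal sketch contrasting the two policies by counting disk writes (the classes are my simplification of a file cache):

    class WriteBackCache:
        """Write back: repeated writes to the same page are combined
        ("write cancelling"); the disk sees one write per dirty page."""
        def __init__(self):
            self.dirty = {}                 # page number -> latest contents
            self.disk_writes = 0

        def write(self, page: int, data: bytes):
            self.dirty[page] = data         # overwrite cancels the earlier write

        def flush(self):
            self.disk_writes += len(self.dirty)
            self.dirty.clear()

    class WriteThroughBuffered:
        """Write through with a write buffer: every write is queued for disk,
        just asynchronously, so the processor does not wait."""
        def __init__(self):
            self.disk_writes = 0

        def write(self, page: int, data: bytes):
            self.disk_writes += 1

    wb, wt = WriteBackCache(), WriteThroughBuffered()
    for _ in range(10):                     # ten writes to the same page
        wb.write(7, b"x"); wt.write(7, b"x")
    wb.flush()
    print(wb.disk_writes, wt.disk_writes)   # 1 vs. 10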

36  ABCs of UNIX File Systems: Local vs. Server
    Unix file systems have historically had different policies (and even different file systems) for local disks vs. remote servers:
    - An NFS client's local disk allows a 30-second delay before flushing writes.
    - The NFS server writes through to disk on file close.
    There is a cache coherency problem if clients are allowed to have file caches in addition to the server file cache:
    - NFS just writes through on file close; with its stateless protocol, clients periodically get new copies of file blocks.
    - Other file systems use cache coherency with write back, checking state and selectively invalidating or updating.

37  Network File Systems
    (Original slide: a diagram of the NFS client/server structure.)
    On the client, an application's UNIX system calls enter the virtual file system (VFS) interface. Local accesses go to the UNIX file system and its block device driver; remote accesses go to the NFS client and down through the RPC/transmission protocol stack onto the network. On the server, requests come up through the RPC/transmission protocols to the NFS file system server routines, which sit beneath the server's VFS interface and system call layer.

38  UNIX File System Performance Study
    9 machines & OSs, from desktop to mini/mainframe, measured using Willy. (Some years, the low-order price digits, and the memory sizes were lost in this transcription; "xxx" and "--" mark the lost values.)

        Machine               OS            Year   Price       Memory
        Alpha AXP 3000/400    OSF/1         --     $30,xxx     -- MB
        DECstation 5000/200   Sprite LFS    1990   $20,xxx     -- MB
        DECstation 5000/200   Ultrix        --     $20,xxx     -- MB
        HP 730                HP/UX 8 & 9   --     $35,xxx     -- MB
        IBM RS/6000/550       AIX           --     $30,xxx     -- MB
        SparcStation 1+       SunOS 4.1     --     $30,xxx     -- MB
        SparcStation 10/30    Solaris 2     --     $20,xxx     -- MB
        Convex C2/240         Convex OS     1988   $750,xxx    -- MB
        IBM 3090/600J VF      AIX/ESA       1990   $1,000,xxx  -- MB

39  (Figure-only slide; content not recoverable from this transcription.)

40  Self-Scaling Benchmark Parameters
    (Parameter charts not reproduced in this transcription.)

41  Disk Performance
    (Chart: megabytes per second for 32 KB reads, by machine and OS: Convex C240/ConvexOS 10 (IPI-2, RAID), SS 10/Solaris 2, AXP/4000/OSF1, RS/6000/AIX, HP 730/HP/UX, 3090/AIX/ESA (IBM Channel, IBM 3390 disk), Sparc1+/SunOS 4.1, DS5000/Ultrix, DS5000/Sprite.)
    Notes: the SS 10 disk is a 5400 RPM SCSI-II disk; the Convex uses 4 IPI disks.

42  File Cache Performance
    UNIX file system performance depends not on how fast the disk is, but on whether the disk is used at all: the file cache has 3 to 7 times the performance of the disk. There is a 4X speedup between machine generations for both DEC and Sparc, reflecting faster memory systems.
    (Chart: file cache megabytes per second by machine and OS: AXP/4000/OSF1, RS/6000/AIX, HP 730/HP/UX, 3090/AIX/ESA, SS 10/Solaris 2, Convex C240/ConvexOS 10, DS5000/Sprite, DS5000/Ultrix, Sparc1+/SunOS 4.1.)

43  File Cache Size
    The percentage of main memory used for the file cache varies enormously with the OS: HP/UX v8 (8%) vs. v9 (81%); DS5000 Ultrix (10%) vs. Sprite (63%).
    (Chart data, % of main memory used for the file cache: HP730/HP-UX 8: 8%; DS5000/Ultrix: 10%; 3090/AIX-ESA: 20%; DS5000/Sprite: 63%; Sparc1+/SunOS 4.1: 71%; SS 10/Solaris 2: 74%; Alpha/OSF1: 77%; RS/6000/AIX: 80%; HP730/HP-UX 9: 81%; Convex C240/ConvexOS10: 87%.)

44  File System Write Policies
    Write through with a write buffer (asynchronous): AIX, Convex, OSF/1 (w.t.), Solaris, Ultrix.
    (Chart: MB/sec vs. % reads, from 0% to 100%, for Convex, Solaris, AIX, and OSF/1; annotations read "Fast Disks" and "Fast File Caches for Reads".)

45  File Cache Performance vs. Read Percentage
    (Chart not reproduced in this transcription.)

46  Performance vs. Megabytes Touched
    (Chart not reproduced in this transcription.)

47  Write Policy Performance for Client/Server Computing
    NFS writes through on close (no buffering); HP/UX DUX lets the client cache writes, giving about 25X the performance at 80% reads.
    (Chart: MB/sec vs. % reads, 0% to 100%: HP 730 running HP/UX 8 with DUX over an FDDI network vs. SS1+ running SunOS 4.1 with NFS over Ethernet.)

48  UNIX I/O Performance Study Conclusions
    - The study uses Willy, an I/O benchmark that supports self-scaling evaluation and predicted performance.
    - The hardware determines the potential I/O performance, but the operating system determines how much of that potential is delivered: differences of factors of 100.
    - File cache performance in workstations is improving rapidly, with over four-fold improvements in three years for DEC (AXP/3000 vs. DECstation 5000) and Sun (SPARCstation 10 vs. SPARCstation 1+).
    - File cache performance of Unix on mainframes and minisupercomputers is no better than on workstations.
    - The workstations benchmarked can take advantage of high-performance disks; RAID systems can deliver much higher disk performance.
    - File caching policy determines the performance of most I/O events, and hence is the place to start when trying to improve OS I/O performance.
