Pipelined Disk Arrays for Digital Movie Retrieval

Ariel Cohen, Walter A. Burkhard, P. Venkat Rangan
Gemini Storage Systems Laboratory & Multimedia Laboratory
Department of Computer Science & Engineering
University of California, San Diego, La Jolla, CA

(Work supported in part by grants from NCR, Wichita, Kansas; SAIC, San Diego, California; and the University of California MICRO program.)

Abstract

We develop a reliable disk-array-based storage architecture for digital video retrieval. Our goals are twofold: maximizing the number of concurrent real-time sessions while minimizing the buffering requirements, and ensuring a high degree of reliability. The first goal is achieved by adopting a pipelined approach and by reducing latencies through specialized disk caching and constrained data placement schemes. The second goal is achieved by dividing the disks into RAID 3 reliability groups which serve as pipeline stages. We note that the buffering requirement decreases as the number of groups increases. To improve performance further, we introduce two techniques for more efficient movie retrieval: on-arrival caching and interleaved annular layout. We present a case study of the performance of these techniques which shows a significant improvement when they are incorporated.

1 Introduction

Storage servers for video retrieval typically consist of an array of high-performance disks connected to display stations over a high-bandwidth network (see [3] for a study of a possible network architecture). The system uses memory either at the server or at the display stations (or both) for buffering. Naturally, the goal is to serve as many concurrent users as possible given the available resources. We will focus on increasing the number of concurrent streams supported by the array and decreasing the buffering requirements while maintaining a high degree of reliability.

We consider a video retrieval workload; that is, the system operates in an essentially read-only environment in which the task of the storage server is to retrieve movies from the disk array for consumption by the display units. No editing is allowed. The disk array will be devoted to video data; other files (e.g., system files) will be stored in separate areas.

The central motivation for our work is the observation that increasing the number of disks in a RAID 3 results in diminishing returns for our workload of small reads, since adding disks improves the transfer rate without reducing the other latencies, which then come to dominate. To address this issue, we use a disk caching scheme to reduce rotational latencies to a minimum, a constrained layout scheme to reduce seek times, and a pipelined approach which breaks up the RAID 3 into a number of smaller RAID 3's to improve concurrency.

2 Related work

The issue of reducing the buffering requirement and maximizing the number of concurrent sessions in a multimedia retrieval environment is addressed in [4] and [9]. In [4] the sorting-set algorithm is proposed, and in [9] the Grouped Sweeping Scheme (GSS) is introduced. These schemes are functionally equivalent techniques based on the idea of dividing the streams into sets which are serviced in a fixed order; the reads for the streams within each set are sorted so that they can be serviced in one sweep of the disk arm. The number of sets is a parameter which affects the buffering requirement. The data layout is unconstrained, no special caching is attempted, and no redundancy is introduced into the system for reliability purposes. The schemes which we propose in this paper can be used in conjunction with GSS (or the sorting-set algorithm) in a straightforward manner.

An important aspect of disk array storage servers for video is load balancing. The goal is to prevent situations in which certain disks become overloaded (because they contain the only copy of a popular movie, for example). There are three approaches to this problem. One approach is to replicate movies according to their popularities in an attempt to achieve load balancing; such a scheme for a video-on-demand system is proposed in [5]. A second approach is to stripe
all the data across the entire array such that all disks participate in all reads; this approach is pursued in [7]. A third approach is to store parts of all movies on all disks; limited striping is used only if necessary to achieve the bandwidth required by the streams. Such an approach for uncompressed video is described in [1]. In this paper, we adopt a middle ground between the last two approaches.

3 The basic approach

This section describes the basic layout and reading policies which underlie the techniques introduced in later sections. Many of the details are omitted here due to a lack of space; see [2] for a full presentation and a detailed analysis.

3.1 The basic layout

The disk array contains D data disks which are divided into G equal-size groups of D/G data disks and one parity disk; thus, the total number of disks in the array is D + G. Parity disks store the parity of the data stored in the data disks in their group.

The disk array stores a number of movies. Each movie is divided into segments. We can divide the movies into segments of equal size or of equal display time. These two policies are of course equivalent when fixed-rate compression (or no compression) is used. However, in the case of variable bit rate compression the display time of a fixed amount of data can vary, so segments of equal display time will generally not be of the same size. Each segment is striped across an entire group. That is, equal-size fragments of each segment appear on each data disk in the group at the same location, so that a segment can be read in parallel by the entire group with synchronized arms. The granularity of the striping unit does not matter for our purposes; a bit, byte, sector, or other striping unit can be chosen. The fragments of segments are stored contiguously on each disk; i.e., no seeks (other than track-to-track seeks) are required while reading a segment. Each group can be considered a virtual disk with D/G times the capacity and bandwidth of a single disk. Note that each group is essentially a RAID 3 disk array. Also note that when G = 1 the data is striped across the entire array, and when G = D no striping exists and the parity disks mirror the data disks.

Segments are assigned to groups in a round-robin fashion: segment s of each movie is striped across group s mod G (the groups are numbered 0, ..., G-1). When a movie is retrieved, its segments are read in this round-robin fashion, so we can consider each group to be a service station in a pipeline. Note that all movies are evenly dispersed across all disks, so no hot spots can develop even when some movies are much more popular than others.

Figure 1 illustrates the basic layout. In this example, D = 4 and G = 2. Dx.y.z denotes fragment z of segment y of movie x, and Px.y denotes the parity of the two fragments of segment y of movie x (i.e., Dx.y.1 XOR Dx.y.2). The data within rectangles is read in parallel by all disks in the group.

[Figure 1: Basic Layout. Group 0 holds segments 1 and 3 of movies 1 and 2; group 1 holds segments 2 and 4, each segment striped across the group's two data disks plus a parity disk.]

Note that each group can sustain up to one disk failure, since the data which is missing due to the failed disk can be reconstructed by computing the parity of the data read from the functioning disks. (If there is no failure, no parity computation is performed, and the data read from the parity disk is discarded.) Moreover, no degradation of performance occurs if the parity can be computed fast enough. The preservation of performance under failure is a crucial property in our environment. Clearly, a failure in a read-only movie server is not likely to result in a loss of data, since other copies of the movie exist elsewhere. However, a disk failure can render the entire server useless until the disk is replaced and reconstructed. With the RAID 3 scheme, the server can sustain up to one disk failure in each group without any loss of service. Any scheme with degraded performance under failure (e.g., RAID 5) would result in a need to drop at least some of the sessions, which we consider unacceptable.

Based on the analysis in [2] it is easy to show that if we fix all parameters except the number of groups, then the buffering requirement decreases as the number of groups increases (see also Section 6). However, if we wish to include a parity disk in each group, then the number of required parity disks may be excessive if the number of groups is large.

There are two possible approaches to locating the segments within each group: one is to disperse the segments in an arbitrary fashion, and the other approach

is to constrain their location in a way that would reduce seek latencies. We will show that the regular nature of video accesses, coupled with some constraints on start-up times, can make constrained layout attractive.

We now describe the buffering and reading policies. These differ somewhat depending on whether equal-size or equal-display-time segments are used.

3.2 Fixed display time segments

Movies are divided into equal display time segments; in other words, the segments contain the same number of frames (but segment sizes may vary due to VBR compression). The segments of each stream are read from the disk groups in a round-robin fashion. Time during playback is divided into reading cycles during which exactly one segment is read for each stream from some group. Each reading cycle concurrently serves G groups of streams which we call cohorts. Each stream is a member of exactly one cohort in each reading cycle. We can think of cohorts as tasks to be performed by a circular pipeline. During a reading cycle, each group serves one cohort, and then the cohorts move to the next group, where they will be served during the next reading cycle.

The maximum number of streams in a cohort is fixed to permit a certain fixed maximum number of streams to be serviced. When the number of streams in a cohort is lower than this maximum number, we say that the cohort contains one or more free slots. When the display of a new stream needs to be initiated, the system waits until a cohort with a free slot is about to be served by the group where the first segment of the requested movie resides; the new stream is then incorporated into this cohort. When a stream ends, it is dropped from its cohort; this results in a free slot which can be used to initiate a new stream. Note that once a stream is assigned to a cohort, it remains a member of that cohort until its display is finished.

Figure 2 shows examples of reading cycles along with their cohorts in a server with two groups. In this example, a cohort may contain up to 4 streams. Cohort 0 is served by group 0 during reading cycle t, group 1 during reading cycle t+1, and group 0 again during reading cycle t+2. Cohort 1 is served by group 1 during reading cycle t, group 0 during reading cycle t+1, and group 1 during reading cycle t+2. A new stream (S8) is incorporated into cohort 1 during reading cycle t+1. Note that the order of reads may vary from reading cycle to reading cycle; this flexibility enables us to use seek optimization algorithms.

[Figure 2: Reading Cycles. Cohorts 0 and 1 alternate between groups 0 and 1 across reading cycles t, t+1, and t+2; stream S8 joins cohort 1 during cycle t+1.]

During each reading cycle, for each stream, the system reads the next segment from the group serving the stream's cohort into the buffer while consuming the segment which was read in the previous reading cycle. Let the segment display time be t (seconds); if the reading cycle takes less than t seconds, the system waits until t seconds are over before starting the next reading cycle. The issue of finding the minimum value for t that still ensures starvation-free operation is addressed in [2].

We know that when VBR compression is used the segment size can vary from segment to segment. This raises the question of whether our layout scheme will result in an even distribution of data across the groups. For example, a situation in which one group happens to be assigned more than its share of long segments could result in a disk capacity overflow at that group. The layout is such that all groups contain the same number of segments (strictly speaking, slight differences can occur if the length of some movies is not a multiple of G times t, but such differences would be insignificant); denote this number by s. We can view the amount of data assigned to a group as the sum of s independent bounded random variables with an identical probability distribution. (These assumptions are justified since the segments in a group correspond to parts of the movie that are far enough apart, unless the number of groups is very small, or to parts of different movies altogether; see [6] for a study of autocorrelation in MPEG streams.) It can be shown that for any realistic choice of parameters, the probability of a significant deviation from the average sum is negligible (see [2]).

3.3 Fixed size segments

In Section 4 we describe a disk
caching technique (OAC) which requires us to be able to fix the size of segments. When VBR compression is used, the fixed-time approach described above does not permit us to do this. We now describe a fixed-size approach which addresses this issue. Unless OAC is used in conjunction with VBR compression, however, the fixed-time approach is the preferred approach, since it requires less buffering (see below).

The fixed-size approach is similar to the fixed-time approach described above with the following differences: movies are divided into segments of equal size rather than equal display time, and time during playback is divided into reading cycles during which at most one segment is read for each stream that is being displayed. Unlike the fixed-time approach, the fixed-size approach may require omitting the reads for some streams during some reading cycles in order to prevent buffer overflow.

This introduces a complication for our pipelined scheme: we cannot just allocate segments to groups in a strict round-robin fashion as we did in the previous section. The reason is that if we follow this scheme, then there is the possibility that, in order to prevent buffer overflow, a segment will not be requested when the group on which it resides serves the stream's cohort; the segment will be needed only later, when the cohort is served by another group. When the data is laid out on the groups, it is necessary to keep track of the expected buffering situation for all movies so that the layout process can avoid putting a segment in a group if the corresponding reading cycle will not request it. During layout, segments are put in the groups in an essentially round-robin fashion. However, some groups are occasionally skipped for some movies; these skips correspond to the skips that would occur during the corresponding reading cycles. Hence, by simulating the consumption process during the layout process we can ensure that the segments requested during any reading cycle will be in the proper groups.

The buffering requirement for the fixed-size approach is almost 50% higher than that for the fixed display time approach (this is a result of the need to ensure starvation-free operation even when streams are omitted during some reading cycles; see [2]). This raises the question of whether the fixed-size approach has any advantages. As we will see in Section 4, the fact that fixed-size segments are read in each reading cycle enables us to use a highly beneficial disk caching scheme which cannot be used with the fixed display
time approach. This caching technique can make the fixed-size approach much more attractive than the fixed display time approach.

4 On-arrival caching (OAC)

Disks typically use a local buffer or cache to improve the performance of read operations [8]. The technique which is used for this purpose is called look-ahead read or readahead. The idea is to preload the buffer with the data that immediately follows the requested data on the disk. This is motivated by the fact that read requests tend to have a high degree of contiguity in typical environments, and thus it is likely that read requests will be followed by additional requests for the immediately following data. When such requests arrive, they may be satisfied by the buffer without incurring the delay of accessing the disk.

Readahead caching is far from ideal for our environment. Our workload consists of small reads of one or two tracks per disk. Furthermore, with the fixed-size segment approach (see Section 3.3) we know exactly how much contiguous data will be read from each disk (the size of a segment divided by the number of disks in a group); clearly there will usually be no advantage to caching data beyond that amount. Another pertinent fact is that a large portion of the latency in a reading cycle is due to rotational latency. If possible, we would like to use caching to eliminate that latency. Fortunately, we can achieve this goal with the fixed segment size approach by exploiting the fact that the size of the reads is fixed.

4.1 The basic technique

The preferred disk caching scheme for our environment is on-arrival caching (OAC), also known as on-arrival readahead or zero-latency read. The idea is to eliminate the rotational latency as follows: after a read request is received by the disk drive, the drive seeks to the proper location and starts reading the data into the buffer immediately (i.e., at the next sector to pass under the read head). An entire track (or more) is read and cached in the buffer. Obviously, this scheme offers a substantial advantage if requests are a small whole number of tracks in size and the data can be laid out properly. As we will see, this is the case in our environment. OAC was used in some early disk drives, when track capacities were lower and the size of typical accesses was closer to the size of a track [8]. Some current disks support OAC, but the use of this scheme is not widespread, since it is not appropriate for most environments. We will show that OAC can offer a substantial benefit in a multi-session video environment.

4.2 Layout for OAC

The goal of a layout scheme for OAC is to achieve the elimination of the rotational latency without incurring the penalty of fragmentation. The disk cache should only contain data that is relevant (i.e., belongs to the segment that is being read), so no track should contain data belonging to more than one segment. Clearly, if the size of the fragment of a segment that is stored on each disk is not a whole number of tracks, then fragmentation occurs. The conclusion is that the fragment size should be a whole number of tracks, and data belonging to different fragments should be stored in different tracks.
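As a concrete illustration of this whole-track constraint, the following sketch rounds a desired segment size up so that each per-disk fragment occupies a whole number of tracks. The function name and all numbers are ours, purely for illustration; they do not come from the paper's case study.

```python
import math

def oac_segment_size(target_bytes, track_bytes, disks_per_group):
    """Smallest segment size >= target_bytes whose per-disk fragment
    is a whole number of tracks, so an on-arrival (zero-latency) read
    never caches data belonging to a neighbouring segment."""
    frag_tracks = math.ceil(target_bytes / (track_bytes * disks_per_group))
    return frag_tracks * track_bytes * disks_per_group

# Hypothetical numbers: 512 KB target segment, 100 KB tracks, 4 data disks.
size = oac_segment_size(512 * 1024, 100 * 1024, 4)
print(size // 1024)  # fragment rounded up to 2 tracks per disk -> 800 KB segment
```

The rounding is always upward: rounding down could leave a fragment shorter than a track, which would put two segments on one track and reintroduce the fragmentation the layout is designed to avoid.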

Another relevant fact needs to be taken into consideration: tracks in most modern high-capacity disks are grouped into 3-20 zones. Tracks in outer zones have higher capacities than those in inner zones. This technique is called zoned bit recording (ZBR) [8]. Segment sizes must be tailored to the particular zone in which they reside in order to satisfy the requirement that each fragment of a segment is a whole number of tracks in size. Segments in outer zones might be larger than segments in inner zones because of the need to fill a track to capacity. Note that this does not result in longer reading times for segments in outer zones: the number of tracks read there will be the same as or lower than the number of tracks read for segments in inner zones, and track reading times are the same regardless of the location of the track. See [2] for details about computing the segment sizes.

5 Interleaved annular layout (IAL)

Another way to reduce the length of reading cycles (and thus reduce the buffering requirement) is to lay out the segments in such a way that the distances among segments that participate in a reading cycle are short. This obviously requires constraining the times at which new streams can be introduced into reading cycles. If we allow new streams to be introduced any time there is a free slot, then cohorts may request any combination of segments, thus foiling any constrained layout scheme.

We propose a layout scheme which we call interleaved annular layout (IAL). The cylinders of the disks in the disk array are divided into contiguous cylinder groups called rings. The scheme ensures that all segments read by any cohort in any reading cycle are in the same ring. Every other ring is accessed as the heads move from one edge of the disks to the other; the rings read when the heads move in one direction are the ones skipped when the heads move in the other direction. The purpose of this interleaving is to prevent the need for a long seek back to the first ring when the heads reach the last one.

An example is shown in Figure 3. There are 5 rings, 2 movies, and 10 segments per movie in this example, and only one group in the disk array. The figure shows how the fragments of the segments might be laid out; only one disk is shown (the layout on all disks in the group is identical). The notation Fx.y denotes a fragment of segment y of movie x. The arrows illustrate the direction in which the reading proceeds; dashed arcs denote skipping.

[Figure 3: IAL with 5 Rings. Each of rings 0 through 4 holds the fragments of the segments assigned to it; for example, the outer ring (Ring 0) holds F1.0, F1.5, F2.0, and F2.5.]

Each cohort reads segments from a single ring of the group which is currently serving the cohort; in our example (where there is only one cohort), if the heads are currently in the outer ring (Ring 0), then only segments 0 and 5 of movies 1 and 2 can participate in the current reading cycle. This means that the server might not be able to start servicing a new request immediately, since it has to wait for the ring containing the first segment of the movie to become the next ring to be accessed. We use the term startup latency to refer to this latency. The reads within a ring are performed in the order in which the segments appear in the ring.

IAL can be used with or without OAC. If we wish to use IAL without OAC, we can use the fixed-time approach described in Section 3.2. Let R be the number of rings. Segment s of each movie is striped across ring s mod R of group s mod G (rings are numbered by the order in which they are read; see Figure 3). There is no constraint on the order of segments within a ring. R should be relatively prime to G in order to ensure an even distribution of segments among the rings. This will also ensure that if there is a free slot in some cohort, then a new stream will not have to wait more than G times R reading cycles before it can be initiated. A new stream can be initiated when a cohort with a free slot is about to be served at the group and ring containing the first segment of the new stream. If we wish to combine OAC with IAL, we
need to use the fixed segment size approach which was discussed in Section 3.3. Recall that the fixed-size approach requires omitting the reads for some streams during some reading cycles in order to prevent buffer overflow. This introduces a layout complication which is dealt with by skipping groups during layout, as described in Section 3.3.
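To make the ring arithmetic concrete, here is a small sketch of the interleaved visiting order and of the group/ring assignment for segment s. The function names are ours, not the paper's, and we assume rings are indexed physically from the outer edge while segments are assigned using the read-order numbering described above.

```python
def ring_read_order(R):
    """Physical ring indices (0 = outermost) in the order the heads
    visit them: every other ring on the way in, and the skipped ones
    on the way back out, so no long seek back to ring 0 is needed."""
    inward = list(range(0, R, 2))              # e.g. R = 5 -> [0, 2, 4]
    start = R - 2 if R % 2 else R - 1
    outward = list(range(start, 0, -2))        # e.g. R = 5 -> [3, 1]
    return inward + outward

def place_segment(s, G, R, order):
    """Segment s goes to group s mod G and to the (s mod R)-th ring
    in read order; returns (group, physical ring index)."""
    return s % G, order[s % R]

order = ring_read_order(5)            # [0, 2, 4, 3, 1], as in Figure 3
print(place_segment(0, 1, 5, order))  # segment 0 -> (group 0, outer ring)
print(place_segment(5, 1, 5, order))  # segment 5 also lands in the outer ring
```

With G = 1 and R = 5 this reproduces the Figure 3 example: segments 0 and 5 of every movie share the outer ring, so a cohort positioned there reads only those segments during the current cycle.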

6 Case study

Using the techniques described in [2], we studied the performance of a disk array with 24 data disks. The disk parameters were based on the performance characteristics of Seagate Elite 9 disks, which are high-capacity (9 GB) high-performance disk drives that utilize ZBR. The maximum consumption rate was 12 Mbits/s (a reasonable rate for NTSC-quality MPEG-compressed movies; see [6]). The buffer size was constrained to be at most 8 MB per stream.

Table 1 shows the case study results. The + and - signs that appear after OAC and IAL signify whether the scheme was used (+) or not (-). The results in the MIN columns were obtained by considering all seeks to be track-to-track seeks; hence, these results serve as a bound on what can be achieved by decreasing seek times. Each entry in the table shows the maximum number of concurrent streams and, in parentheses, the required buffer size per stream (in MB).

     G | OAC-,IAL-  OAC-,IAL+  OAC-,MIN | OAC+,IAL-  OAC+,IAL+  OAC+,MIN
     1 |   55(8)      58(8)      58(8)  |   68(5)      79(5)      79(5)
     2 |   64(8)      66(8)      66(7)  |   74(7)      78(3)      78(3)
     3 |   66(7)      69(7)      69(7)  |   75(8)      78(2)      78(2)
     4 |   68(6)      72(7)      72(7)  |   72(5)      76(2)      76(2)
     6 |   72(7)      72(5)      72(5)  |   72(4)      78(2)      78(1)
     8 |   72(5)      72(4)      72(4)  |   72(3)      72(1)      72(1)
    12 |   72(4)      72(3)      72(3)  |   72(3)      72(1)      72(1)
    24 |   72(3)      72(2)      72(2)  |   72(2)      72(1)      72(1)

Table 1: Case Study Results

We see the substantial benefit obtained by increasing the number of groups G. If the maximum number of concurrent streams is u_max, then a reading cycle will contain at most u_max/G reads per cohort (note that u_max has to be a multiple of G, which accounts for the sporadic non-monotonicity in Table 1). Hence, as G increases, the maximum number of reads that need to be performed per cohort during a reading cycle decreases proportionally, but the bandwidth of a disk group also decreases proportionally to the increase in the number of groups (recall that each group contains D/G disks). The reason for the benefit obtained by increasing G is that the number of seeks and rotational latencies incurred per cohort during a reading cycle also decreases proportionally to the increase in G because of the reduction in the number of reads. Since the buffering requirement is proportional to the maximum length of a reading cycle, the shorter reading cycles obtained by increasing G result in a lower buffering requirement, or in a larger number of supportable streams for the available memory.

The graph in Figure 4 focuses on the impact of OAC and IAL when G = 1. Note the jumps exhibited by the OAC curves. These jumps are a result of the requirement that the segment size be a multiple of the track size for OAC. We see the significant benefit of using OAC, and of IAL with OAC; IAL alone provides a less significant benefit.

[Figure 4: Case Study Results (G = 1). Buffer size per stream (MB) versus number of streams for the four combinations {OAC-, IAL-}, {OAC-, IAL+}, {OAC+, IAL-}, {OAC+, IAL+}.]

References

[1] Berson, S., et al. Staggered Striping in Multimedia Information Systems. In Proc. ACM SIGMOD, pp. 79-90, 1994.
[2] Cohen, A., W. A. Burkhard, and P. V. Rangan. Pipelined Disk Arrays for Digital Movie Retrieval. Tech. Report CS, CSE Dept., University of California, San Diego, 1995.
[3] Cohen, A., C. W. Padgett, and W. A. Burkhard. A High-Performance Circuit-Switched Network for Distributed Video Servers. Tech. Report CS, CSE Dept., University of California, San Diego, 1995.
[4] Gemmell, J. Multimedia Network File Servers: Multichannel Delay Sensitive Data Retrieval. In Proc. ACM Multimedia '93, pp. 243-250, 1993.
[5] Little, T. D. C., and D. Venkatesh. Probabilistic Assignment of Movies to Storage Devices in a Video-On-Demand System. In Proc. 4th Int'l Workshop on Network and Operating System Support for Digital Audio and Video, pp. 213-224, 1993.
[6] Pancha, P., and M. El Zarki. MPEG Coding for Variable Bit Rate Video Transmission. IEEE Communications Magazine, 32(5), pp. 54-66, 1994.
[7] Reddy, A. L. N., and J. Wyllie. Disk Scheduling in a Multimedia I/O System. In Proc. ACM Multimedia '93, pp. 225-233, 1993.
[8] Ruemmler, C., and J. Wilkes. Modelling Disks. HP Labs Tech. Report HPL-93-68, 1993.
[9] Yu, P. S., M.-S. Chen, and D. D. Kandlur. Design and Analysis of a Grouped Sweeping Scheme for Multimedia Storage Management. In Proc. 3rd Int'l Workshop on Network and Operating System Support for Digital Audio and Video, pp. 38-49, 1992.


More information

Chapter 6 - External Memory

Chapter 6 - External Memory Chapter 6 - External Memory Luis Tarrataca luis.tarrataca@gmail.com CEFET-RJ L. Tarrataca Chapter 6 - External Memory 1 / 66 Table of Contents I 1 Motivation 2 Magnetic Disks Write Mechanism Read Mechanism

More information

File. File System Implementation. Operations. Permissions and Data Layout. Storing and Accessing File Data. Opening a File

File. File System Implementation. Operations. Permissions and Data Layout. Storing and Accessing File Data. Opening a File File File System Implementation Operating Systems Hebrew University Spring 2007 Sequence of bytes, with no structure as far as the operating system is concerned. The only operations are to read and write

More information

Database Systems II. Secondary Storage

Database Systems II. Secondary Storage Database Systems II Secondary Storage CMPT 454, Simon Fraser University, Fall 2009, Martin Ester 29 The Memory Hierarchy Swapping, Main-memory DBMS s Tertiary Storage: Tape, Network Backup 3,200 MB/s (DDR-SDRAM

More information

COS 318: Operating Systems. Storage Devices. Kai Li Computer Science Department Princeton University

COS 318: Operating Systems. Storage Devices. Kai Li Computer Science Department Princeton University COS 318: Operating Systems Storage Devices Kai Li Computer Science Department Princeton University http://www.cs.princeton.edu/courses/archive/fall11/cos318/ Today s Topics Magnetic disks Magnetic disk

More information

Operating Systems 2010/2011

Operating Systems 2010/2011 Operating Systems 2010/2011 Input/Output Systems part 2 (ch13, ch12) Shudong Chen 1 Recap Discuss the principles of I/O hardware and its complexity Explore the structure of an operating system s I/O subsystem

More information

Chapter 7 Multimedia Operating Systems

Chapter 7 Multimedia Operating Systems MODERN OPERATING SYSTEMS Third Edition ANDREW S. TANENBAUM Chapter 7 Multimedia Operating Systems Introduction To Multimedia (1) Figure 7-1. Video on demand using different local distribution technologies.

More information

Parallel Databases C H A P T E R18. Practice Exercises

Parallel Databases C H A P T E R18. Practice Exercises C H A P T E R18 Parallel Databases Practice Exercises 181 In a range selection on a range-partitioned attribute, it is possible that only one disk may need to be accessed Describe the benefits and drawbacks

More information

Chapter-6. SUBJECT:- Operating System TOPICS:- I/O Management. Created by : - Sanjay Patel

Chapter-6. SUBJECT:- Operating System TOPICS:- I/O Management. Created by : - Sanjay Patel Chapter-6 SUBJECT:- Operating System TOPICS:- I/O Management Created by : - Sanjay Patel Disk Scheduling Algorithm 1) First-In-First-Out (FIFO) 2) Shortest Service Time First (SSTF) 3) SCAN 4) Circular-SCAN

More information

Block Device Scheduling. Don Porter CSE 506

Block Device Scheduling. Don Porter CSE 506 Block Device Scheduling Don Porter CSE 506 Quick Recap CPU Scheduling Balance competing concerns with heuristics What were some goals? No perfect solution Today: Block device scheduling How different from

More information

Introduction Disks RAID Tertiary storage. Mass Storage. CMSC 420, York College. November 21, 2006

Introduction Disks RAID Tertiary storage. Mass Storage. CMSC 420, York College. November 21, 2006 November 21, 2006 The memory hierarchy Red = Level Access time Capacity Features Registers nanoseconds 100s of bytes fixed Cache nanoseconds 1-2 MB fixed RAM nanoseconds MBs to GBs expandable Disk milliseconds

More information

COS 318: Operating Systems. Storage Devices. Jaswinder Pal Singh Computer Science Department Princeton University

COS 318: Operating Systems. Storage Devices. Jaswinder Pal Singh Computer Science Department Princeton University COS 318: Operating Systems Storage Devices Jaswinder Pal Singh Computer Science Department Princeton University http://www.cs.princeton.edu/courses/archive/fall13/cos318/ Today s Topics Magnetic disks

More information

ECE 5730 Memory Systems

ECE 5730 Memory Systems ECE 5730 Memory Systems Spring 2009 Command Scheduling Disk Caching Lecture 23: 1 Announcements Quiz 12 I ll give credit for #4 if you answered (d) Quiz 13 (last one!) on Tuesday Make-up class #2 Thursday,

More information

RAID. Redundant Array of Inexpensive Disks. Industry tends to use Independent Disks

RAID. Redundant Array of Inexpensive Disks. Industry tends to use Independent Disks RAID Chapter 5 1 RAID Redundant Array of Inexpensive Disks Industry tends to use Independent Disks Idea: Use multiple disks to parallelise Disk I/O for better performance Use multiple redundant disks for

More information

u Covered: l Management of CPU & concurrency l Management of main memory & virtual memory u Currently --- Management of I/O devices

u Covered: l Management of CPU & concurrency l Management of main memory & virtual memory u Currently --- Management of I/O devices Where Are We? COS 318: Operating Systems Storage Devices Jaswinder Pal Singh Computer Science Department Princeton University (http://www.cs.princeton.edu/courses/cos318/) u Covered: l Management of CPU

More information

Reduction of Periodic Broadcast Resource Requirements with Proxy Caching

Reduction of Periodic Broadcast Resource Requirements with Proxy Caching Reduction of Periodic Broadcast Resource Requirements with Proxy Caching Ewa Kusmierek and David H.C. Du Digital Technology Center and Department of Computer Science and Engineering University of Minnesota

More information

Availability of Coding Based Replication Schemes. Gagan Agrawal. University of Maryland. College Park, MD 20742

Availability of Coding Based Replication Schemes. Gagan Agrawal. University of Maryland. College Park, MD 20742 Availability of Coding Based Replication Schemes Gagan Agrawal Department of Computer Science University of Maryland College Park, MD 20742 Abstract Data is often replicated in distributed systems to improve

More information

C C C 0.1 X X X X X X d 1. d 2. d 5. d 4

C C C 0.1 X X X X X X d 1. d 2. d 5. d 4 Striping in Multi-disk Video Servers Shahram Ghandeharizadeh and Seon Ho Kim Department of Computer Science University of Southern California Los Angeles, California 989 Abstract A challenging task when

More information

Egemen Tanin, Tahsin M. Kurc, Cevdet Aykanat, Bulent Ozguc. Abstract. Direct Volume Rendering (DVR) is a powerful technique for

Egemen Tanin, Tahsin M. Kurc, Cevdet Aykanat, Bulent Ozguc. Abstract. Direct Volume Rendering (DVR) is a powerful technique for Comparison of Two Image-Space Subdivision Algorithms for Direct Volume Rendering on Distributed-Memory Multicomputers Egemen Tanin, Tahsin M. Kurc, Cevdet Aykanat, Bulent Ozguc Dept. of Computer Eng. and

More information

Block Device Scheduling. Don Porter CSE 506

Block Device Scheduling. Don Porter CSE 506 Block Device Scheduling Don Porter CSE 506 Logical Diagram Binary Formats Memory Allocators System Calls Threads User Kernel RCU File System Networking Sync Memory Management Device Drivers CPU Scheduler

More information

Block Device Scheduling

Block Device Scheduling Logical Diagram Block Device Scheduling Don Porter CSE 506 Binary Formats RCU Memory Management File System Memory Allocators System Calls Device Drivers Interrupts Net Networking Threads Sync User Kernel

More information

Efficient support for interactive operations in multi-resolution video servers

Efficient support for interactive operations in multi-resolution video servers Multimedia Systems 7: 241 253 (1999) Multimedia Systems c Springer-Verlag 1999 Efficient support for interactive operations in multi-resolution video servers Prashant J. Shenoy, Harrick M. Vin Distributed

More information

6. Parallel Volume Rendering Algorithms

6. Parallel Volume Rendering Algorithms 6. Parallel Volume Algorithms This chapter introduces a taxonomy of parallel volume rendering algorithms. In the thesis statement we claim that parallel algorithms may be described by "... how the tasks

More information

Part IV I/O System. Chapter 12: Mass Storage Structure

Part IV I/O System. Chapter 12: Mass Storage Structure Part IV I/O System Chapter 12: Mass Storage Structure Disk Structure Three elements: cylinder, track and sector/block. Three types of latency (i.e., delay) Positional or seek delay mechanical and slowest

More information

Disk. Real Time Mach Disk Device Driver. Open/Play/stop CRAS. Application. Shared Buffer. Read Done 5. 2 Read Request. Start I/O.

Disk. Real Time Mach Disk Device Driver. Open/Play/stop CRAS. Application. Shared Buffer. Read Done 5. 2 Read Request. Start I/O. Simple Continuous Media Storage Server on Real-Time Mach Hiroshi Tezuka y Tatsuo Nakajima Japan Advanced Institute of Science and Technology ftezuka,tatsuog@jaist.ac.jp http://mmmc.jaist.ac.jp:8000/ Abstract

More information

MANAGING PARALLEL DISKS FOR CONTINUOUS MEDIA DATA

MANAGING PARALLEL DISKS FOR CONTINUOUS MEDIA DATA Chapter 1 MANAGING PARALLEL DISKS FOR CONTINUOUS MEDIA DATA Edward Chang University of California, Santa Barbara echang@ece.ucsb.edu Chen Li and Hector Garcia-Molina Stanford University chenli@stanford.edu,hector@db.stanford.edu

More information

Part IV I/O System Chapter 1 2: 12: Mass S torage Storage Structur Structur Fall 2010

Part IV I/O System Chapter 1 2: 12: Mass S torage Storage Structur Structur Fall 2010 Part IV I/O System Chapter 12: Mass Storage Structure Fall 2010 1 Disk Structure Three elements: cylinder, track and sector/block. Three types of latency (i.e., delay) Positional or seek delay mechanical

More information

William Stallings Computer Organization and Architecture 8 th Edition. Chapter 6 External Memory

William Stallings Computer Organization and Architecture 8 th Edition. Chapter 6 External Memory William Stallings Computer Organization and Architecture 8 th Edition Chapter 6 External Memory Types of External Memory Magnetic Disk RAID Removable Optical CD-ROM CD-Recordable (CD-R) CD-R/W DVD Magnetic

More information

Zone-Bit-Recording-Enhanced Video Data Layout Strategies

Zone-Bit-Recording-Enhanced Video Data Layout Strategies "HEWLETT PACKARD Zone-Bit-Recording-Enhanced Video Data Layout Strategies Shenze Chen, Manu Thapar Broadband Information Systems Laboratory HPL-95-124 November, 1995 video-on-demand, VOD, video layout,

More information

CS6303 Computer Architecture Regulation 2013 BE-Computer Science and Engineering III semester 2 MARKS

CS6303 Computer Architecture Regulation 2013 BE-Computer Science and Engineering III semester 2 MARKS CS6303 Computer Architecture Regulation 2013 BE-Computer Science and Engineering III semester 2 MARKS UNIT-I OVERVIEW & INSTRUCTIONS 1. What are the eight great ideas in computer architecture? The eight

More information

Disk Scheduling. Based on the slides supporting the text

Disk Scheduling. Based on the slides supporting the text Disk Scheduling Based on the slides supporting the text 1 User-Space I/O Software Layers of the I/O system and the main functions of each layer 2 Disk Structure Disk drives are addressed as large 1-dimensional

More information

COS 318: Operating Systems. Storage Devices. Vivek Pai Computer Science Department Princeton University

COS 318: Operating Systems. Storage Devices. Vivek Pai Computer Science Department Princeton University COS 318: Operating Systems Storage Devices Vivek Pai Computer Science Department Princeton University http://www.cs.princeton.edu/courses/archive/fall11/cos318/ Today s Topics Magnetic disks Magnetic disk

More information

Lecture 21: Reliable, High Performance Storage. CSC 469H1F Fall 2006 Angela Demke Brown

Lecture 21: Reliable, High Performance Storage. CSC 469H1F Fall 2006 Angela Demke Brown Lecture 21: Reliable, High Performance Storage CSC 469H1F Fall 2006 Angela Demke Brown 1 Review We ve looked at fault tolerance via server replication Continue operating with up to f failures Recovery

More information

Continuity and Synchronization in MPEG. P. Venkat Rangan, Srihari SampathKumar, and Sreerang Rajan. Multimedia Laboratory. La Jolla, CA

Continuity and Synchronization in MPEG. P. Venkat Rangan, Srihari SampathKumar, and Sreerang Rajan. Multimedia Laboratory. La Jolla, CA Continuity and Synchronization in MPEG P. Venkat Rangan, Srihari SampathKumar, and Sreerang Rajan Multimedia Laboratory Department of Computer Science and Engineering University of California at San Diego

More information

CSE506: Operating Systems CSE 506: Operating Systems

CSE506: Operating Systems CSE 506: Operating Systems CSE 506: Operating Systems Disk Scheduling Key to Disk Performance Don t access the disk Whenever possible Cache contents in memory Most accesses hit in the block cache Prefetch blocks into block cache

More information

Frank Miller, George Apostolopoulos, and Satish Tripathi. University of Maryland. College Park, MD ffwmiller, georgeap,

Frank Miller, George Apostolopoulos, and Satish Tripathi. University of Maryland. College Park, MD ffwmiller, georgeap, Simple Input/Output Streaming in the Operating System Frank Miller, George Apostolopoulos, and Satish Tripathi Mobile Computing and Multimedia Laboratory Department of Computer Science University of Maryland

More information

Mass-Storage Structure

Mass-Storage Structure CS 4410 Operating Systems Mass-Storage Structure Summer 2011 Cornell University 1 Today How is data saved in the hard disk? Magnetic disk Disk speed parameters Disk Scheduling RAID Structure 2 Secondary

More information

Adaptive Methods for Distributed Video Presentation. Oregon Graduate Institute of Science and Technology. fcrispin, scen, walpole,

Adaptive Methods for Distributed Video Presentation. Oregon Graduate Institute of Science and Technology. fcrispin, scen, walpole, Adaptive Methods for Distributed Video Presentation Crispin Cowan, Shanwei Cen, Jonathan Walpole, and Calton Pu Department of Computer Science and Engineering Oregon Graduate Institute of Science and Technology

More information

A Fault Tolerant Video Server Using Combined Raid 5 and Mirroring

A Fault Tolerant Video Server Using Combined Raid 5 and Mirroring Proceedings of Multimedia Computing and Networking 1997 (MMCN97), San Jose, CA, February 1997 A Fault Tolerant Video Server Using Combined Raid 5 and Mirroring Ernst W. BIERSACK, Christoph BERNHARDT Institut

More information

Introduction to I/O and Disk Management

Introduction to I/O and Disk Management 1 Secondary Storage Management Disks just like memory, only different Introduction to I/O and Disk Management Why have disks? Ø Memory is small. Disks are large. Short term storage for memory contents

More information

Principles of Data Management. Lecture #2 (Storing Data: Disks and Files)

Principles of Data Management. Lecture #2 (Storing Data: Disks and Files) Principles of Data Management Lecture #2 (Storing Data: Disks and Files) Instructor: Mike Carey mjcarey@ics.uci.edu Database Management Systems 3ed, R. Ramakrishnan and J. Gehrke 1 Today s Topics v Today

More information

Virtual Memory. Reading. Sections 5.4, 5.5, 5.6, 5.8, 5.10 (2) Lecture notes from MKP and S. Yalamanchili

Virtual Memory. Reading. Sections 5.4, 5.5, 5.6, 5.8, 5.10 (2) Lecture notes from MKP and S. Yalamanchili Virtual Memory Lecture notes from MKP and S. Yalamanchili Sections 5.4, 5.5, 5.6, 5.8, 5.10 Reading (2) 1 The Memory Hierarchy ALU registers Cache Memory Memory Memory Managed by the compiler Memory Managed

More information

Introduction to I/O and Disk Management

Introduction to I/O and Disk Management Introduction to I/O and Disk Management 1 Secondary Storage Management Disks just like memory, only different Why have disks? Ø Memory is small. Disks are large. Short term storage for memory contents

More information

Storage Devices for Database Systems

Storage Devices for Database Systems Storage Devices for Database Systems 5DV120 Database System Principles Umeå University Department of Computing Science Stephen J. Hegner hegner@cs.umu.se http://www.cs.umu.se/~hegner Storage Devices for

More information

UNIVERSITY OF MASSACHUSETTS Dept. of Electrical & Computer Engineering. Computer Architecture ECE 568

UNIVERSITY OF MASSACHUSETTS Dept. of Electrical & Computer Engineering. Computer Architecture ECE 568 UNIVERSITY OF MASSACHUSETTS Dept. of Electrical & Computer Engineering Computer Architecture ECE 568 Part 6 Input/Output Israel Koren ECE568/Koren Part.6. Motivation: Why Care About I/O? CPU Performance:

More information

HP AutoRAID (Lecture 5, cs262a)

HP AutoRAID (Lecture 5, cs262a) HP AutoRAID (Lecture 5, cs262a) Ion Stoica, UC Berkeley September 13, 2016 (based on presentation from John Kubiatowicz, UC Berkeley) Array Reliability Reliability of N disks = Reliability of 1 Disk N

More information

Module 13: Secondary-Storage Structure

Module 13: Secondary-Storage Structure Module 13: Secondary-Storage Structure Disk Structure Disk Scheduling Disk Management Swap-Space Management Disk Reliability Stable-Storage Implementation Operating System Concepts 13.1 Silberschatz and

More information

vsan 6.6 Performance Improvements First Published On: Last Updated On:

vsan 6.6 Performance Improvements First Published On: Last Updated On: vsan 6.6 Performance Improvements First Published On: 07-24-2017 Last Updated On: 07-28-2017 1 Table of Contents 1. Overview 1.1.Executive Summary 1.2.Introduction 2. vsan Testing Configuration and Conditions

More information

Semiconductor Memory Types Microprocessor Design & Organisation HCA2102

Semiconductor Memory Types Microprocessor Design & Organisation HCA2102 Semiconductor Memory Types Microprocessor Design & Organisation HCA2102 Internal & External Memory Semiconductor Memory RAM Misnamed as all semiconductor memory is random access Read/Write Volatile Temporary

More information

1 What is an operating system?

1 What is an operating system? B16 SOFTWARE ENGINEERING: OPERATING SYSTEMS 1 1 What is an operating system? At first sight, an operating system is just a program that supports the use of some hardware. It emulates an ideal machine one

More information

CSE 451: Operating Systems Winter Redundant Arrays of Inexpensive Disks (RAID) and OS structure. Gary Kimura

CSE 451: Operating Systems Winter Redundant Arrays of Inexpensive Disks (RAID) and OS structure. Gary Kimura CSE 451: Operating Systems Winter 2013 Redundant Arrays of Inexpensive Disks (RAID) and OS structure Gary Kimura The challenge Disk transfer rates are improving, but much less fast than CPU performance

More information

Physical Storage Media

Physical Storage Media Physical Storage Media These slides are a modified version of the slides of the book Database System Concepts, 5th Ed., McGraw-Hill, by Silberschatz, Korth and Sudarshan. Original slides are available

More information

RAID (Redundant Array of Inexpensive Disks)

RAID (Redundant Array of Inexpensive Disks) Magnetic Disk Characteristics I/O Connection Structure Types of Buses Cache & I/O I/O Performance Metrics I/O System Modeling Using Queuing Theory Designing an I/O System RAID (Redundant Array of Inexpensive

More information

Symphony: An Integrated Multimedia File System

Symphony: An Integrated Multimedia File System Symphony: An Integrated Multimedia File System Prashant J. Shenoy, Pawan Goyal, Sriram S. Rao, and Harrick M. Vin Distributed Multimedia Computing Laboratory Department of Computer Sciences, University

More information

Spindle. Head. assembly. Platter. Cylinder

Spindle. Head. assembly. Platter. Cylinder Placement of Data in Multi-Zone Disk Drives Shahram Ghandeharizadeh, Douglas J. Ierardi, Dongho Kim, Roger Zimmermann Department of Computer Science University of Southern California Los Angeles, California

More information

Data Storage and Query Answering. Data Storage and Disk Structure (2)

Data Storage and Query Answering. Data Storage and Disk Structure (2) Data Storage and Query Answering Data Storage and Disk Structure (2) Review: The Memory Hierarchy Swapping, Main-memory DBMS s Tertiary Storage: Tape, Network Backup 3,200 MB/s (DDR-SDRAM @200MHz) 6,400

More information

CSE325 Principles of Operating Systems. Mass-Storage Systems. David P. Duggan. April 19, 2011

CSE325 Principles of Operating Systems. Mass-Storage Systems. David P. Duggan. April 19, 2011 CSE325 Principles of Operating Systems Mass-Storage Systems David P. Duggan dduggan@sandia.gov April 19, 2011 Outline Storage Devices Disk Scheduling FCFS SSTF SCAN, C-SCAN LOOK, C-LOOK Redundant Arrays

More information

CS 471 Operating Systems. Yue Cheng. George Mason University Fall 2017

CS 471 Operating Systems. Yue Cheng. George Mason University Fall 2017 CS 471 Operating Systems Yue Cheng George Mason University Fall 2017 Review: Disks 2 Device I/O Protocol Variants o Status checks Polling Interrupts o Data PIO DMA 3 Disks o Doing an disk I/O requires:

More information

I/O, Disks, and RAID Yi Shi Fall Xi an Jiaotong University

I/O, Disks, and RAID Yi Shi Fall Xi an Jiaotong University I/O, Disks, and RAID Yi Shi Fall 2017 Xi an Jiaotong University Goals for Today Disks How does a computer system permanently store data? RAID How to make storage both efficient and reliable? 2 What does

More information

Block Device Driver. Pradipta De

Block Device Driver. Pradipta De Block Device Driver Pradipta De pradipta.de@sunykorea.ac.kr Today s Topic Block Devices Structure of devices Kernel components I/O Scheduling USB Device Driver Basics CSE506: Block Devices & IO Scheduling

More information

I/O CANNOT BE IGNORED

I/O CANNOT BE IGNORED LECTURE 13 I/O I/O CANNOT BE IGNORED Assume a program requires 100 seconds, 90 seconds for main memory, 10 seconds for I/O. Assume main memory access improves by ~10% per year and I/O remains the same.

More information

Physical Representation of Files

Physical Representation of Files Physical Representation of Files A disk drive consists of a disk pack containing one or more platters stacked like phonograph records. Information is stored on both sides of the platter. Each platter is

More information

Introduction to I/O. April 30, Howard Huang 1

Introduction to I/O. April 30, Howard Huang 1 Introduction to I/O Where does the data for our CPU and memory come from or go to? Computers communicate with the outside world via I/O devices. Input devices supply computers with data to operate on.

More information

Using IDA for Performance Improvement in Multimedia Servers. Antonio Puliato, Salvatore Riccobene, Lorenzo Vita

Using IDA for Performance Improvement in Multimedia Servers. Antonio Puliato, Salvatore Riccobene, Lorenzo Vita Using IDA for Performance Improvement in Multimedia Servers Antonio Puliato, Salvatore Riccobene, Lorenzo Vita Istituto di Informatica e Telecomunicazioni Facolta di Ingegneria - Universita di Catania

More information

THROUGHPUT IN THE DQDB NETWORK y. Shun Yan Cheung. Emory University, Atlanta, GA 30322, U.S.A. made the request.

THROUGHPUT IN THE DQDB NETWORK y. Shun Yan Cheung. Emory University, Atlanta, GA 30322, U.S.A. made the request. CONTROLLED REQUEST DQDB: ACHIEVING FAIRNESS AND MAXIMUM THROUGHPUT IN THE DQDB NETWORK y Shun Yan Cheung Department of Mathematics and Computer Science Emory University, Atlanta, GA 30322, U.S.A. ABSTRACT

More information

Current Topics in OS Research. So, what s hot?

Current Topics in OS Research. So, what s hot? Current Topics in OS Research COMP7840 OSDI Current OS Research 0 So, what s hot? Operating systems have been around for a long time in many forms for different types of devices It is normally general

More information

Disk Scheduling. Chapter 14 Based on the slides supporting the text and B.Ramamurthy s slides from Spring 2001

Disk Scheduling. Chapter 14 Based on the slides supporting the text and B.Ramamurthy s slides from Spring 2001 Disk Scheduling Chapter 14 Based on the slides supporting the text and B.Ramamurthy s slides from Spring 2001 1 User-Space I/O Software Layers of the I/O system and the main functions of each layer 2 Disks

More information

Chapter 10: Mass-Storage Systems

Chapter 10: Mass-Storage Systems Chapter 10: Mass-Storage Systems Silberschatz, Galvin and Gagne Overview of Mass Storage Structure Magnetic disks provide bulk of secondary storage of modern computers Drives rotate at 60 to 200 times

More information

Module 1: Basics and Background Lecture 4: Memory and Disk Accesses. The Lecture Contains: Memory organisation. Memory hierarchy. Disks.

Module 1: Basics and Background Lecture 4: Memory and Disk Accesses. The Lecture Contains: Memory organisation. Memory hierarchy. Disks. The Lecture Contains: Memory organisation Example of memory hierarchy Memory hierarchy Disks Disk access Disk capacity Disk access time Typical disk parameters Access times file:///c /Documents%20and%20Settings/iitkrana1/My%20Documents/Google%20Talk%20Received%20Files/ist_data/lecture4/4_1.htm[6/14/2012

More information

William Stallings Computer Organization and Architecture 6 th Edition. Chapter 6 External Memory

William Stallings Computer Organization and Architecture 6 th Edition. Chapter 6 External Memory William Stallings Computer Organization and Architecture 6 th Edition Chapter 6 External Memory Types of External Memory Magnetic Disk RAID Removable Optical CD-ROM CD-Recordable (CD-R) CD-R/W DVD Magnetic

More information

CSE 153 Design of Operating Systems

CSE 153 Design of Operating Systems CSE 153 Design of Operating Systems Winter 2018 Lecture 22: File system optimizations and advanced topics There s more to filesystems J Standard Performance improvement techniques Alternative important

More information

Providing Resource Allocation and Performance Isolation in a Shared Streaming-Media Hosting Service

Providing Resource Allocation and Performance Isolation in a Shared Streaming-Media Hosting Service Providing Resource Allocation and Performance Isolation in a Shared Streaming-Media Hosting Service Ludmila Cherkasova Hewlett-Packard Laboratories 11 Page Mill Road, Palo Alto, CA 94303, USA cherkasova@hpl.hp.com

More information

CSCI-GA Database Systems Lecture 8: Physical Schema: Storage

CSCI-GA Database Systems Lecture 8: Physical Schema: Storage CSCI-GA.2433-001 Database Systems Lecture 8: Physical Schema: Storage Mohamed Zahran (aka Z) mzahran@cs.nyu.edu http://www.mzahran.com View 1 View 2 View 3 Conceptual Schema Physical Schema 1. Create a

More information

Storage System. Distributor. Network. Drive. Drive. Storage System. Controller. Controller. Disk. Disk

Storage System. Distributor. Network. Drive. Drive. Storage System. Controller. Controller. Disk. Disk HRaid: a Flexible Storage-system Simulator Toni Cortes Jesus Labarta Universitat Politecnica de Catalunya - Barcelona ftoni, jesusg@ac.upc.es - http://www.ac.upc.es/hpc Abstract Clusters of workstations

More information

File Structures and Indexing

File Structures and Indexing File Structures and Indexing CPS352: Database Systems Simon Miner Gordon College Last Revised: 10/11/12 Agenda Check-in Database File Structures Indexing Database Design Tips Check-in Database File Structures

More information

A Disk Head Scheduling Simulator

A Disk Head Scheduling Simulator A Disk Head Scheduling Simulator Steven Robbins Department of Computer Science University of Texas at San Antonio srobbins@cs.utsa.edu Abstract Disk head scheduling is a standard topic in undergraduate

More information

Parallel DBMS. Parallel Database Systems. PDBS vs Distributed DBS. Types of Parallelism. Goals and Metrics Speedup. Types of Parallelism

Parallel DBMS. Parallel Database Systems. PDBS vs Distributed DBS. Types of Parallelism. Goals and Metrics Speedup. Types of Parallelism Parallel DBMS Parallel Database Systems CS5225 Parallel DB 1 Uniprocessor technology has reached its limit Difficult to build machines powerful enough to meet the CPU and I/O demands of DBMS serving large

More information

Chapter 14: Mass-Storage Systems

Chapter 14: Mass-Storage Systems Chapter 14: Mass-Storage Systems Disk Structure Disk Scheduling Disk Management Swap-Space Management RAID Structure Disk Attachment Stable-Storage Implementation Tertiary Storage Devices Operating System

More information

Components of the Virtual Memory System

Components of the Virtual Memory System Components of the Virtual Memory System Arrows indicate what happens on a lw virtual page number (VPN) page offset virtual address TLB physical address PPN page offset page table tag index block offset

More information

Disk Scheduling for Mixed-Media Workloads in a Multimedia Server 1

Disk Scheduling for Mixed-Media Workloads in a Multimedia Server 1 Paper no: S-118 Disk Scheduling for Mixed-Media Workloads in a Multimedia Server 1 Y. Romboyannakis, 2 G. Nerjes, 3 P. Muth, 3 M. Paterakis, 2 P. Triantafillou, 24 G. Weikum 3 Abstract Most multimedia

More information

6 Distributed data management I Hashing

6 Distributed data management I Hashing 6 Distributed data management I Hashing There are two major approaches for the management of data in distributed systems: hashing and caching. The hashing approach tries to minimize the use of communication

More information

CSE 153 Design of Operating Systems Fall 2018

CSE 153 Design of Operating Systems Fall 2018 CSE 153 Design of Operating Systems Fall 2018 Lecture 12: File Systems (1) Disk drives OS Abstractions Applications Process File system Virtual memory Operating System CPU Hardware Disk RAM CSE 153 Lecture

More information

CHAPTER 6 Memory. CMPS375 Class Notes (Chap06) Page 1 / 20 Dr. Kuo-pao Yang

CHAPTER 6 Memory. CMPS375 Class Notes (Chap06) Page 1 / 20 Dr. Kuo-pao Yang CHAPTER 6 Memory 6.1 Memory 341 6.2 Types of Memory 341 6.3 The Memory Hierarchy 343 6.3.1 Locality of Reference 346 6.4 Cache Memory 347 6.4.1 Cache Mapping Schemes 349 6.4.2 Replacement Policies 365

More information

UNIT I (Two Marks Questions & Answers)

UNIT I (Two Marks Questions & Answers) UNIT I (Two Marks Questions & Answers) Discuss the different ways how instruction set architecture can be classified? Stack Architecture,Accumulator Architecture, Register-Memory Architecture,Register-

More information