An Efficient Buffer Management Scheme for Multimedia File System


IEICE TRANS. INF. & SYST., VOL.E83-D, NO.6 JUNE 2000
PAPER An Efficient Buffer Management Scheme for Multimedia File System
Jongho NANG, Regular Member and Sungkwan HEO, Nonmember
SUMMARY File system buffers provide memory space for data being transferred to and from disk and act as caches for recently used blocks, and the buffer manager usually reads data blocks ahead to minimize the number of disk accesses. However, if several multimedia files with different consumption rates are accessed simultaneously from a file system that uses an LRU buffer replacement strategy, the read-ahead blocks of a low-rate file are unloaded from memory to make room for the data blocks of a high-rate file, and therefore must be reloaded from disk when they are actually referenced. This paper proposes and implements a new buffer cache management scheme for a multimedia file system and analyzes its performance by modifying the file system kernel of FreeBSD. In the proposed scheme, some buffers are initially allocated to each opened multimedia file privately, and these buffers are then reused for other data blocks of that file when they are loaded from disk. Moreover, the number of private buffers allocated to a file is dynamically adjusted according to its data rate. An admission control scheme is also proposed to prevent the opening of a new file that may overload the file system. Experimental results comparing the proposed scheme with the original FreeBSD and a simple CTL-based model show that the proposed buffer management scheme can support the real-time playback of several multimedia files with various data rates concurrently, without the help of real-time CPU and disk scheduling.
key words: operating system, multimedia system, file system, buffer management, VoD server
1. Introduction
Multimedia systems combine a variety of data, such as voice, graphics, animation, images, and, most importantly, audio and full-motion video, into a wide range of applications. Among these multimedia data, continuous media data such as audio samples and video frames deliver their meaning only when they are accessed and processed under their real-time constraints [5]. This means that files storing continuous media data should be read from disk at their own consumption rates (or data rates) in order to be processed accurately. The accessing (or consuming) rate of a multimedia file depends on the type of multimedia data stored in the file and its compression technique. 4.4BSD [8] is one of the most popular UNIX operating systems in research and academic institutes for implementing a prototype multimedia server. It has a good file system buffer management scheme that reads ahead, into the buffer cache, data blocks that will be referenced in the near future, to reduce disk I/O time. But since this management scheme uses a global LRU to replace the buffers, there are some problems in accessing various multimedia files with different consuming rates simultaneously. The most serious problem is that a buffer containing a read-ahead block of a relatively low data rate file, such as an audio file, is replaced with a data block of a relatively high rate file, such as a video file. This causes the read-ahead blocks of the low rate file to be unloaded from memory to make room for the data blocks of the high data rate file, so they must be reloaded from disk when they are actually referenced. The problem comes from the fact that 4.4BSD uses a global LRU replacement algorithm to manage the read-ahead buffers.
(Manuscript received February 1, . The authors are with the Department of Computer Science, Sogang University, 1 Shinsoo-Dong, Mapo-Ku, Seoul, Korea. This work was supported by the Brain Korea 21 Project.)
Furthermore, since 4.4BSD's file system has no admission control mechanism, continuous access to multimedia files may not be guaranteed when more multimedia files than the file system can handle are accessed simultaneously. The main objective of this research is to extend the 4.4BSD file system to meet the real-time access requirements of several multimedia files with different data rates while keeping the modifications to its kernel as small as possible. Since recently proposed buffer cache management schemes [2], [4], [6], [7], [12], [15] are designed based on CTL (Constant Time Length) scheduling [4], in which exactly as many buffers are allocated in each period as the multimedia process will consume, they are optimal with respect to buffer requirements. However, to implement these schemes on existing operating systems such as 4.4BSD or Linux, the CPU and disk scheduling algorithms must also be modified to support their real-time constraints precisely. Furthermore, these schemes assume that meta data recording the amount of data accessed in each period is available for each multimedia file, but it would be very hard to assume that such meta data are available for all the multimedia files of different types being serviced. This paper proposes a new buffer cache management scheme for a multimedia file system which guarantees continuous access to the data blocks of mul-

timedia files without modifying other kernel components of 4.4BSD such as CPU and disk scheduling. The proposed scheme initially allocates a fixed number of private buffers to each multimedia file, according to its data rate, when the file is opened. These buffers are reused for other data blocks of the file when they are to be loaded from disk; i.e., a local FIFO replacement algorithm for the read-ahead buffers is adopted in the proposed management scheme. However, the number of buffers initially allocated to the file is dynamically adjusted according to its data rate in order to maximize the utilization of the read-ahead buffers. An admission control mechanism is also proposed to prevent the opening of a new file which may cause an overload. We implement the proposed buffer management scheme on FreeBSD, a PC operating system based on 4.4BSD. Experimental results show that the proposed buffer management scheme produces a higher buffer cache hit ratio with even fewer buffers than the FreeBSD scheme when files with various data rates are being referenced. This higher hit ratio helps reduce jitter when playing multimedia files. Some experimental comparisons with a CTL-based model are also presented to show the effectiveness of the proposed scheme. Although the proposed scheme assumes the buffer cache management scheme of 4.4BSD, it can also be applied to any file system that uses a cluster read-ahead scheme together with a global LRU buffer cache replacement algorithm.
2. Problems in 4.4BSD Buffer Cache Management When Accessing Multimedia Files
This section explains the reference characteristics of multimedia files, and experimentally shows the problems of 4.4BSD's buffer cache management scheme when several multimedia files with different data rates are referenced simultaneously.
2.1 Reference Characteristics of Multimedia Files
Multimedia files have the following reference characteristics, which differ from those of general files [14]. Since multimedia files are accessed sequentially, they have no locality of reference. Each file has its own data rate according to the type of multimedia data it holds, such as audio samples or video frames. For continuous playback of multimedia data, the delay time should be constant; that is, each continuous media file is referenced at an almost constant rate. Figure 1 shows these reference characteristics of multimedia files experimentally. In this figure, the amounts of data consumed by three continuous media files (one au file and two MPEG video files with different display sizes) are shown graphically while they are being played back on FreeBSD.
Fig. 1 An example of data rates of multimedia files.
The au audio file is accessed at a constant rate (8 KBytes per second), and each video file is accessed at an almost constant rate (16 KBytes and 36 KBytes per second), although VBR (Variable Bit Rate) coding makes the data rates fluctuate. From this experiment, it can be seen that the data rates of multimedia files differ from each other according to the types and contents of the files, and remain almost constant while they are being referenced. Since a multimedia server usually services multiple multimedia streams simultaneously, its file system should allocate the file system resources (especially buffer caches) according to the data rate of each opened continuous media file in order to give a fair service to all multimedia streams.
2.2 Contention for Getting Buffers
File system buffers provide memory space for data being transferred to and from disk and act as caches for the recently used blocks [8]. These buffers are usually shared by all processes, and the free list that contains the buffers of used blocks is managed in an LRU fashion.
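The global LRU free list just described can be sketched as follows. This is an illustrative model under our own names and layout, not FreeBSD's actual free-list code: released buffers join the tail, and a buffer needed for new data is taken from the head, i.e. the least recently used one, regardless of which file it belongs to.

```c
#include <stdlib.h>

/* Illustrative sketch of a global LRU free list of file system buffers
 * (names and layout are ours, not FreeBSD's). */
struct buf {
    int owner_file;      /* which file's block this buffer holds */
    struct buf *next;
};

struct freelist {
    struct buf *head, *tail;
};

static void fl_release(struct freelist *fl, struct buf *b)  /* MRU insert at tail */
{
    b->next = NULL;
    if (fl->tail) fl->tail->next = b; else fl->head = b;
    fl->tail = b;
}

static struct buf *fl_take(struct freelist *fl)  /* reclaim the LRU victim */
{
    struct buf *b = fl->head;
    if (b) {
        fl->head = b->next;
        if (!fl->head) fl->tail = NULL;
    }
    return b;
}
```

A file with a high data rate calling `fl_take` repeatedly reclaims the oldest buffers first, which is exactly how another file's read-ahead blocks get stolen in the contention scenario discussed next.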
Furthermore, to minimize the number of disk accesses, the buffer management scheme usually reads ahead data blocks when a file is accessed sequentially. One of the main differences between traditional file systems and 4.4BSD's file system is that the former read ahead only one data block, whereas the latter reads ahead a cluster, a logical group of contiguous blocks on disk that can be loaded into memory by a single asynchronous transfer. (The maximum size of a cluster is 64 KBytes in FreeBSD. If the size of a block is 8 KBytes, then a cluster consists of at most 8 blocks [8].) Figure 2 shows an example of the cluster read-ahead mechanism of 4.4BSD's file system. Assume that cluster A is currently loaded into the buffer cache. In this figure, lblkno points to the block currently being referenced, maxra is the last read-

ahead block, and ralen is the read-ahead length that determines whether the file system should read ahead the next cluster. Currently, block a is being accessed, and blocks b, c, d and e will be accessed one by one. When block d is being accessed, since the difference between maxra and lblkno is less than ralen, the file system issues a read-ahead request asynchronously to load the next cluster B. Since 4.4BSD adopts such a cluster read-ahead mechanism, the file system can always keep the blocks that will be accessed in the near future in memory if the file is accessed sequentially.
Fig. 2 An example of cluster read-ahead in the 4.4BSD file system.
Since the cluster read-ahead mechanism of 4.4BSD's file system loads data blocks from disk before they are referenced, as explained in Fig. 2, the 4.4BSD file system is well suited to accessing a single continuous multimedia file, which is usually referenced sequentially. However, when 4.4BSD is used as a multimedia server in which many multimedia files are serviced simultaneously, its cluster read-ahead mechanism shows an undesirable behaviour. The problem occurs when a file with a high data rate is being referenced, so that it consumes a large number of buffers in the global LRU list. In this case the buffers already allocated to the read-ahead blocks of a file with a relatively low rate are reallocated to the data blocks of the high data rate file, since they were loaded relatively long ago and are located at the front of the free block list. For that reason, the read-ahead blocks of the file referenced at the low rate have to be read again from disk when they are actually referenced. Let us explain this problem experimentally. Figure 3 shows the number of read-ahead blocks of an au file when it is accessed alone and when it is accessed together with 10 MPEG files simultaneously.
Fig. 3 An example of buffer contention for read-ahead blocks.
Of course, the data rate of the au file is less than that of the MPEG video files. When the au audio file is referenced alone, as shown by the dotted line in this figure, the read-ahead blocks are consumed whenever a read request to this file is issued, and the next cluster is read ahead when the number of read-ahead blocks drops below six in this case. On the other hand, when it is accessed with the other 10 MPEG files simultaneously, as shown by the solid line, one or more blocks are consumed whenever a read request to the au file for only one data block is issued. In this case, the buffers that held the read-ahead blocks of the au file were reallocated to data blocks of the MPEG files, whose data rates are higher than the au file's. The read-ahead blocks of the au file which were stored in these buffers must be reloaded from disk when they are actually referenced. This causes a delay in playing back the au file. This undesirable behaviour comes from the fact that all buffers in the file system are managed with a global LRU scheme. In order to resolve this problem, the buffer caches should be allocated privately to each opened file and the replacement scheme should be local. Furthermore, an admission control mechanism should be provided so as not to start accessing a new file which may cause an overload in terms of the number of available buffer caches.
3. A New Buffer Cache Management Scheme
The problem of a relatively high data rate file stealing the buffers that hold the read-ahead data blocks of a low data rate file can be solved if buffer caches are allocated privately to each opened file and these buffers are reused only for data blocks of that file (i.e., a local FIFO replacement algorithm is used). Of course, the number of simultaneously opened files should be bounded by the total number of available buffer caches in the file system.
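The 4.4BSD-style read-ahead trigger described around Fig. 2 can be sketched as a single predicate. The variable names follow the text (lblkno, maxra, ralen); the function itself is illustrative rather than the kernel's actual code.

```c
/* Sketch of the cluster read-ahead trigger: lblkno is the block currently
 * referenced, maxra the last block already read ahead, ralen the
 * read-ahead length. Read ahead the next cluster when fewer than ralen
 * unconsumed read-ahead blocks remain. */
static int should_readahead(long lblkno, long maxra, long ralen)
{
    return (maxra - lblkno) < ralen;
}
```

In the Fig. 2 scenario, the predicate stays false while blocks a through c are consumed and first becomes true at block d, which is when the asynchronous request for cluster B is issued.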
Allocating a fixed number of private buffers to each opened multimedia file has been a widely used buffer management scheme for multimedia file systems, and it can solve the above buffer contention problem. However, adopting this scheme alone may cause delays when accessing data blocks, or may not maximize the utilization of buffer caches. For example, if fewer buffers than required are allocated to a file referenced at a high data rate, a data block may be requested before an asynchronous read-ahead disk operation is completed. On the other hand, if more buffers than required are allocated to a file referenced at a low data rate, the read-ahead blocks hold the buffers for a long time, causing the utilization of buffer caches to be low. These problems can be resolved if the number of buffers allocated to each opened file is dynamically adjusted according to its data rate. Let us explain how to apply these buffer cache allocation and adjustment mechanisms to 4.4BSD's file system in more detail.

In the proposed management scheme, the file system buffers are divided, with respect to their usage, into two parts as follows.
Cache Buffers: These buffers always hold previously used blocks and are shared by all processes. The number of these buffers can range from 30 to the number of all buffers.
Read-ahead Buffers: These buffers hold the read-ahead blocks. When a request to open a multimedia file is serviced, some buffers in the Cache Buffers are changed into Read-ahead Buffers and are allocated to that file. Each read-ahead buffer containing multimedia data is called a CM (Continuous Media) buffer and is referenced only by the process that opened the file. The number of Read-ahead Buffers can range from 0 to the number of all buffers minus 30.
Figures 4 and 5 show the system structure and the overall algorithm of the proposed buffer management scheme, respectively. If a request to open a multimedia file arrives and there are enough buffers in the Cache Buffers, the proposed scheme allocates a fixed number of buffers to that file and changes these buffers into Read-ahead Buffers. These buffers then become the CM buffers and contain only the read-ahead blocks of that file. When a user process reads data from that file, the data which has already been read ahead into the CM buffers of the file is copied into the address space of the user process, and, if required, the file system starts reading ahead the next cluster asynchronously; that is, if the number of read-ahead blocks is less than a threshold, the file system starts reading ahead the data blocks of the next cluster asynchronously. At this moment, the number of CM buffers is adjusted dynamically and the threshold is also recalculated according to the file's data rate. Finally, all CM buffers allocated to the file are returned to the file system buffer manager when the file is closed. Let us explain these procedures in more detail.
3.1 Allocating Cache Buffers Privately
Fig. 4 The structure of the proposed buffer management scheme.
Fig. 5 Overall algorithm of the proposed buffer management scheme.
The proposed buffer management scheme allocates CM buffers to a multimedia file if the file is opened with the O_RDCM flag. Since at this moment the file system cannot yet know at which data rate the file will be referenced, the proposed scheme allocates a fixed number of buffer caches: two clusters, as a double buffer. This is the minimum number of buffer caches with which one cluster can be read asynchronously from disk while the other cluster is being used. (The buffer cache of a FreeBSD file system is designed to have at least 30 buffers to guarantee stable system performance.) Data blocks read from the opened multimedia file pass through the CM buffers, which are managed as a circular queue. That is, when data blocks are read from the CM buffers, the file system takes the buffer at the front of the queue; when data blocks are read into the CM buffers, the file system appends them at the tail of the queue. If the number of read-ahead blocks is less than the threshold value, the system starts a read-ahead procedure to read the next cluster

asynchronously. The threshold value is given by the following equation:
Threshold = (the number of buffers allocated) − (the size of a cluster)
By setting the threshold in this way, whenever the number of read-ahead blocks falls below the threshold, there is always a cluster-sized number of buffers which contain already referenced blocks and can now be reused for reading ahead new data blocks from disk. When a user process calls close(), the CM buffers are returned to the file system buffer manager and can be reused for another newly opened multimedia file. The reason that the FIFO replacement algorithm is used in the proposed scheme, rather than the conventional LRU algorithm, is that the referenced data blocks of a multimedia file are seldom used again. In the proposed buffer management scheme, there is no scheduling among requests from multiple clients. Each client process issues read() system calls at its own consumption rate, so multiple block read requests may be issued simultaneously. However, the 4.4BSD kernel code for accessing the buffer cache in the read() system call is a critical section, so only one request can be serviced at a time. This means that although the read() system call can be executed simultaneously by several clients, its buffer cache handling is executed sequentially in the proposed buffer cache management scheme. The block read-ahead operation is triggered only when the data blocks in the buffer cache are copied into user space by the read() system call and, as a result, the number of remaining data blocks falls below the threshold. At this time, the kernel code requests the disk device driver to read ahead the cluster asynchronously. This request is enqueued in the device driver's request queue, and the driver selects one request according to its disk scheduling algorithm (for example, the C-SCAN algorithm).
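The per-file CM buffer queue and the threshold rule of Sect. 3.1 can be sketched together as follows. This is a simplified user-space model with assumed names, not the modified FreeBSD kernel code: read-ahead blocks are enqueued at the tail of a circular queue, read() consumes from the head, and a read-ahead is triggered once fewer than Threshold blocks remain.

```c
#include <stdlib.h>

/* Simplified model of the per-file CM buffers (all names are ours). */
struct cm_file {
    int  *blk;            /* block numbers held in CM buffers */
    long  cap;            /* CM buffers granted to this file   */
    long  head, len;      /* circular-queue state              */
    long  cluster;        /* cluster size in blocks            */
};

/* Threshold = (number of buffers allocated) - (size of a cluster):
 * below it, a full cluster of consumed buffers is free for reuse. */
static long cm_threshold(const struct cm_file *f)
{
    return f->cap - f->cluster;
}

static struct cm_file *cm_open(long cluster_blocks)
{
    struct cm_file *f = malloc(sizeof *f);
    f->cluster = cluster_blocks;
    f->cap = 2 * cluster_blocks;      /* initial grant: two clusters */
    f->blk = malloc(f->cap * sizeof *f->blk);
    f->head = f->len = 0;
    return f;
}

static int cm_readahead_put(struct cm_file *f, int blkno)  /* tail insert */
{
    if (f->len == f->cap) return -1;            /* no free CM buffer */
    f->blk[(f->head + f->len++) % f->cap] = blkno;
    return 0;
}

/* Consume one block from the head; report whether the next cluster
 * should be read ahead now (remaining blocks fell below threshold). */
static int cm_read(struct cm_file *f, int *blkno, int *trigger_readahead)
{
    if (f->len == 0) return -1;                 /* reader must wait */
    *blkno = f->blk[f->head];
    f->head = (f->head + 1) % f->cap;
    f->len--;
    *trigger_readahead = f->len < cm_threshold(f);
    return 0;
}
```

With an 8-block cluster the initial grant is 16 buffers and the threshold is 8, so the next-cluster read-ahead fires exactly when one cluster's worth of buffers has been consumed and is free for reuse.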
The amount of data retrieved from disk in one read-ahead is determined by how many blocks were allocated contiguously in disk space when the file was stored, with a maximum fixed at 64 KBytes in 4.4BSD. Therefore, the detailed I/O scheduling among simultaneous read-ahead requests and the determination of the cluster size are carried out by the 4.4BSD disk device drivers themselves in the proposed (and the 4.4BSD kernel) management scheme. In order to support the real-time requirements of multimedia file accesses, we could adopt a real-time disk scheduling algorithm, such as EDF or Group Sweeping, in 4.4BSD. However, since this requires a lot of kernel and device driver code modifications, we gave up scheduling simultaneous client requests in the proposed scheme. Nevertheless, as shown in our experiments in Sect. 4, a slight modification of only the buffer cache management scheme produces satisfactory performance.
3.2 Adjusting Buffer Caches Dynamically
The fixed number of read-ahead buffers initially allocated to each opened file must be adjusted with respect to its data rate, so that a relatively high data rate file has more read-ahead buffers than a relatively low data rate file. The main design issue is how to know whether more than enough, or fewer than required, buffers are allocated to each opened file. Recently proposed buffer cache management schemes such as CTL (Constant Time Length)-based models [4] assume that meta data saying how many blocks each multimedia file requires in each round is available, so that the optimal number of buffers can be allocated to each opened multimedia file. However, since it would be very hard to assume that such meta data are available in advance for all multimedia files stored in the server, it is difficult to build a real running multimedia server with this model. The efficient way proposed in this paper is to check whether the process that opened a multimedia file waits when reading a data block from the disk.
Let us explain this idea in more detail. If a process waits for the completion of an I/O request to read a cluster of a multimedia file, it means that the consumption rate of the process is greater than the read-ahead rate of the file system. In this case the proposed scheme allocates an additional buffer to the file and increases its threshold by one, so that the file has more read-ahead buffers as CM buffers. On the other hand, if there has been no I/O wait for a long time, more than enough buffers may have been allocated to that file. In this case, one buffer from the CM buffers of that file is returned to the file system and the threshold is decreased by one, to probe whether more than enough buffers are allocated to the file. At the next cluster read-ahead time, if an I/O wait occurs again while a cluster is being consumed, one buffer is reallocated to the file and the threshold is increased by one, since the previous state can be considered a stable state. From this point, no further adjustment may be required if a CBR (Constant Bit Rate) multimedia file is being accessed, whereas the allocation will change again if a VBR (Variable Bit Rate) multimedia file is being accessed. The dynamic buffer cache adjustment scheme proposed in this paper is simple enough to be implemented on an existing operating system without the support of complex real-time CPU and disk scheduling. Furthermore, since it is based on the cluster read-ahead mechanism already adopted by most modern operating systems, the proposed scheme can be implemented in an existing operating system with only a slight modification.
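The adjustment heuristic just described can be sketched as follows. The structure and field names are assumed, and the "no I/O wait for a long time" condition is approximated here by a tunable count of consecutive wait-free reads, which the paper does not specify.

```c
/* Sketch of the dynamic CM buffer adjustment (names and the QUIET_LIMIT
 * constant are ours): grow the per-file grant when the reader had to wait
 * on disk, shrink it to probe after a long stretch with no waits. */
struct cm_state {
    long nbuf;        /* CM buffers currently granted to this file */
    long threshold;   /* read-ahead trigger threshold              */
    long quiet_reads; /* consecutive reads with no I/O wait        */
};

#define QUIET_LIMIT 64  /* assumed meaning of "a long time"; tunable */

static void cm_adjust(struct cm_state *s, int io_wait_happened)
{
    if (io_wait_happened) {
        s->nbuf++;           /* consuming faster than read-ahead: grow */
        s->threshold++;
        s->quiet_reads = 0;
    } else if (++s->quiet_reads >= QUIET_LIMIT) {
        if (s->nbuf > 1) {   /* possibly over-provisioned: probe by shrinking */
            s->nbuf--;
            s->threshold--;
        }
        s->quiet_reads = 0;
    }
}
```

For a CBR file the grant oscillates by at most one buffer around its stable point, while for a VBR file it tracks the varying rate, matching the behaviour described above.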

3.3 Admission Control
Usually a multimedia server should not accept too many requests, to prevent overloads causing delays in delivery. Admission control refers to the mechanism that rejects requests which may cause an overload in the server because of a lack of computing resources such as CPU time, hard disk bandwidth, and buffer cache. The Cache Buffers, in FreeBSD, must contain at least 30 buffers to guarantee good system performance. If too many files are opened, too many buffers must be taken from the Cache Buffers as the CM buffers of the newly opened files. Since this causes a serious problem in file system performance, the number of opened files should be controlled to prevent over-stealing of buffers from the Cache Buffers. This problem can be solved if the file system accepts an open request only when there are more buffers than the size of two clusters in the Cache Buffers; otherwise, it rejects the request by failing the open() system call, in order to prevent the file system from being overloaded. This simple admission control scheme helps keep the system performance stable while maximizing the number of requests being serviced, as shown in the following experimental analysis.
4. Experimental Analysis
We have implemented the proposed buffer cache management schemes by modifying the kernel of FreeBSD 2.0.1, a PC operating system based on 4.4BSD, on an IBM-PC with an Intel Pentium microprocessor and 64 MBytes of physical memory. The proposed schemes for private buffer allocation, dynamic read-ahead buffer adjustment, and admission control are tested and compared with the original kernel with respect to the cache hit ratio and the number of used read-ahead buffers in Sect. 4.1, and compared with a CTL-based buffer management scheme in Sect. 4.2.
4.1 Experimental Comparison with Original FreeBSD
Let us first compare the performance of the proposed scheme with the original FreeBSD kernel, which uses global buffer cache allocation and an LRU replacement algorithm.
4.1.1 Experiments on Private Buffer Allocation
Since the global LRU replacement algorithm of the FreeBSD file system causes buffer contention, the proposed scheme allocates private read-ahead buffers to each opened file and adopts the local FIFO replacement algorithm, as explained in Sect. 3.1. In order to evaluate the effectiveness of the proposed private buffer allocation scheme, the number of used read-ahead buffers and the hit ratio are adopted as the evaluation criteria. In this experiment, each of 40 processes selects a file among 60 files randomly and reads the file at a given data rate. (The reason we experimented with 40 processes is as follows: since 8 MBytes of the 64 MBytes of main memory are assigned to buffer caches and each file initially needs 128 KBytes of buffers, theoretically we could open and read 64 (8 MBytes/128 KBytes) files simultaneously. However, since the disk of our experimental system could not support such a wide bandwidth, we open only 40 files simultaneously in our experiments.) The experiment was performed in a dynamic state; that is, after a client finishes playing a file, it selects another file randomly and plays it, with the buffers being adjusted dynamically. Each of the 60 files has its own data rate, from 10 KBytes/sec to 100 KBytes/sec with a peak-to-mean ratio of 1.3:1, and no two files share the same data rate. These artificial VBR data files are used in the experiment in order to reflect various multimedia files with different data rates.
Fig. 6 Experiments on private buffer allocation and FIFO replacement: (a) average number of buffers allocated to each multimedia file; (b) average hit ratio of each multimedia file.
Figures 6 (a) and (b) show the average number of buffers and the average hit ratio of each of 10 files, respectively, when the original and the proposed kernels are used to access the 60 files randomly. In these figures the X-axis represents the multimedia files being ref-

erenced; here only 10 of the 60 files are displayed for the sake of simplicity. For example, F10 on the X-axis means a multimedia file which is referenced at 10 KBytes/sec with a variation of 30%. The Y-axis represents the average number of buffers allocated to each file in Fig. 6 (a), and the average hit ratio of each file in Fig. 6 (b). In the original FreeBSD kernel, since all buffers are shared by all clients and a global LRU scheme is used as the buffer replacement algorithm, all the buffers could even be allocated to a single file, for example F100, in an extreme case. Therefore, the data blocks of F100, which are accessed most recently, come to occupy the greatest portion of the buffer caches, as shown in Fig. 6 (a). On the other hand, the proposed scheme adopts a private buffer cache allocation scheme with a local FIFO replacement algorithm. Since there were no I/O waits for F100 with only about 28 buffers in the proposed scheme, no more buffers were allocated to the F100 file in this experiment. That is why F100 uses only 60% of the buffers it uses under the original cluster read-ahead of 4.4BSD. Another finding from this figure is that more buffers are allocated to the files with higher data rates when the original FreeBSD kernel is used, but just enough buffers are allocated to each opened file, regardless of its data rate, when the proposed kernel is used, as shown in Fig. 6 (a). Yet, as shown in Fig. 6 (b), the proposed scheme produces almost the same (or higher) hit ratios. Especially in the case of the relatively lower data rate files (F20 and F30), the proposed scheme produces a higher hit ratio because their read-ahead buffers are not reallocated to a higher data rate file. It can be seen from Fig.
6 that the proposed private buffer allocation and the local FIFO replacement algorithm can produce higher cache hit ratios with even fewer read-ahead buffers.
4.1.2 Experiments on Dynamic Buffer Adjustment
Since a scheme that allocates a fixed number of buffers to each multimedia file and manages these buffers with local FIFO replacement cannot reflect the data rate of each file, the proposed scheme uses the additional adjustment technique explained in Sect. 3.2 to reflect these data rates. In an experiment to show the effectiveness of the proposed dynamic buffer adjustment scheme, we again use the hit ratio and the number of used buffers as the evaluation criteria. The experiment is the same as the previous one, except that 19 buffers are statically allocated to each file in the case without dynamic buffer adjustment. The figure of 19 statically allocated buffers results from the previous experiment, in which the 40 processes used 760 buffers in total on average, so 19 buffers is a reasonable estimate of the average number of buffers in use per file.
Fig. 7 Experiments on dynamic buffer adjustment: (a) average number of buffers allocated to each multimedia file; (b) average hit ratio of each multimedia file.
Figures 7 (a) and (b) show the average number of buffers and the average hit ratios of each of the 10 files, respectively, with and without the dynamic buffer adjustment scheme. Without dynamic buffer adjustment, the same number of buffers (19 buffers) is allocated to all the files, as shown in this figure. With dynamic buffer adjustment, however, more buffers are allocated to the files referenced at high data rates than to the files referenced at low data rates. It can be seen from this figure that an almost perfect hit ratio (100%) is achieved with the dynamic adjustment scheme regardless of the data rates of the files.
However, without the dynamic adjustment scheme, the hit ratios of the files with extremely high data rates are just 93%, because a sufficient number of buffers is not allocated to these files in this experiment. From this experiment we can argue that the hit ratio is improved by the dynamic adjustment scheme because it takes buffers from the low data rate files and allocates them to the high data rate files.
4.1.3 Experiments on Admission Control
The proposed admission control technique is intended to guarantee constant latency in multimedia playback and to protect the performance of the file system, as explained earlier. In the experiment to show the effectiveness

IEICE TRANS. INF. & SYST., VOL.E83 D, NO.6 JUNE 2000

of the proposed admission control scheme, each process selects one file at random from 60 multimedia files and reads it at a given data rate, as in the previous experiments. Figure 8 shows the average hit ratios of several opened multimedia files when the FreeBSD (or 4.4BSD) kernel and the proposed kernel with the admission control mechanism are used. Since in FreeBSD all processes share the read-ahead buffers and a global LRU is used as the buffer replacement algorithm, servicing too many multimedia files simultaneously produces a lower hit ratio because of buffer contention. Especially in the case of an overload (that is, when the server accepts and services more requests than it can handle), the hit ratio decreases dramatically in proportion to the number of files serviced simultaneously, as shown in Fig. 8. On the other hand, since the proposed buffer management scheme rejects requests when it decides they may cause an overload due to a lack of buffers (in Fig. 8, when more than 50 files are serviced simultaneously), the hit ratio of all files can be kept as high as possible. This scheme helps to give fair service with constant latency to all accepted requests, although it may limit the number of requests that can be serviced simultaneously.

Fig. 8 Experiments on admission control.

4.2 Experimental Comparison with CTL-Based Model

In this subsection, an experimental comparison with a recently proposed buffer management scheme, CTL (Constant Time Length), is described to show the effectiveness of the proposed scheme when building a running multimedia server on an existing operating system. Let us first explain the recently proposed scheduling and buffer management schemes. The schemes to store and retrieve VBR data such as MPEG can be categorized as follows [4].
The CTL scheme stores and retrieves data in unequal amounts that conform to the real-time playback duration. The CDL (Constant Data Length) scheme, on the other hand, stores and retrieves data in equal-sized units for each user, using buffer memory to provide real-time variable bit rate playback. The third scheme, the hybrid, stores CDL units but retrieves a variable number of units for each user in each round. The CTL scheme is more cost effective than the CDL scheme, since the CDL scheme tends to require much more buffer space: in each round, the CTL scheme needs exactly as many buffers as a user will consume during that round, whereas the CDL scheme needs additional buffers because the amount of data a user requires may not be a multiple of the data unit size. This is why we have selected the CTL-based model for the experimental comparisons. We implemented a simple CTL scheme using the server-push architecture; that is, the server periodically reads data into the buffer cache without explicit read requests from clients, and the clients then fetch the data from the buffer cache. The server-push architecture is better suited to continuous media applications [13]. In our simple CTL implementation, we implemented a cm_ctl() system call for periodic scheduling. One real-time user process (which merely has the highest priority, without being guaranteed to run in real-time) calls cm_ctl() periodically, and the cache manager in the kernel then starts to read blocks asynchronously. This system call can also be used to simulate the metadata: the real-time process calls cm_ctl() with parameters giving the number of blocks requested for each client, and the cache manager reads ahead that number of blocks, which the client finally gets. A disk driver for the CTL model is not implemented in our simple CTL implementation.
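The CTL retrieval pattern described above, fetching in each round exactly what the client will consume, can be illustrated with a short sketch. The function name, frame sizes, round length, and block size below are made up for illustration; the function only computes how many buffers a CTL server would fetch per round for a VBR stream.

```python
def ctl_round_buffers(frame_sizes, fps, round_len_sec, block_size=8192):
    """For each round, sum the sizes of the frames consumed in that
    round and convert the total to a buffer (block) count, rounding
    up.  This per-round, variable-amount retrieval is what
    distinguishes CTL from the fixed-unit CDL scheme."""
    frames_per_round = int(fps * round_len_sec)
    rounds = [frame_sizes[i:i + frames_per_round]
              for i in range(0, len(frame_sizes), frames_per_round)]
    return [-(-sum(r) // block_size) for r in rounds]  # ceiling division
```

A stream whose frames double in size between two one-second rounds needs twice the buffers in the second round, which is why a CTL server needs per-file metadata recording these per-round amounts.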
Contiguous block placement and real-time disk I/O are beyond the scope of this paper. The simple CTL scheme has one parameter, the prefetch level P, which is the distance between the server's production and the clients' consumption. For example, if P = 0, a client tries to read data from the buffer cache in the same round in which the server retrieves it from disk; if P = 1, the client tries to read the data one round after the server has read it from disk. In this experiment, each client reads files at a data rate of KBytes, as in the previous experiments. We select the number of used buffers, the deadline miss ratio, and the laxity as the evaluation criteria. Figures 9 and 10 show the number of used buffers and the deadline miss ratios of the proposed and CTL-based models, respectively, and Figs. 12 and 13 show the laxities of multimedia file accesses when the proposed and CTL-based models are implemented on FreeBSD. The laxity is measured for a file whose data rate is 50 KBytes/sec while the other 39 clients access other multimedia files. In this experiment we do not use the admission control scheme, in order to investigate performance under high load. As shown in these figures, although the pure CTL model (P = 0) requires

NANG and HEO: AN EFFICIENT BUFFER MANAGEMENT SCHEME FOR MULTIMEDIA FILE SYSTEM

Fig. 9 The number of used buffers.
Fig. 10 Deadline miss ratio.
Fig. 11 The problem of the CTL model: (a) ideal CTL scheduling; (b) the problem of CTL scheduling in a real implementation.
Fig. 12 Experiments on laxity of the proposed scheme.
Fig. 13 Experiments on laxity of CTL-based models: (a) CTL with prefetch level 0; (b) CTL with prefetch level 1; (c) CTL with prefetch level 2.

the least buffers, it produces high deadline miss ratios when the number of clients exceeds 30. The reason the pure CTL model produces a lower hit ratio is that the system is not deterministic: the operating system (in this experiment, FreeBSD) guarantees neither that the server finishes its disk I/O within a round nor that client processes are scheduled periodically. Figure 11 illustrates how deadline misses can happen when a CTL model is implemented on an existing operating system. The cause of the delay may be disk I/O for non-multimedia files, network I/O, or some other exclusive, time-consuming job. From the experiments with the pure CTL model on FreeBSD, we have learned that a CTL-based system suffers from jitter on a non-deterministic system and needs some buffers to absorb that jitter. How many buffers are needed, then? The CTL scheme with prefetch level 1 (P = 1) uses double buffers. However, Figs. 10 and 13 (b) show that it also cannot produce satisfactory performance on FreeBSD: there are still situations in which continuity is not satisfied even when double buffers are used. Furthermore, for the

double buffer mechanism to work correctly, a synchronization technique between the server (producer) and the client (consumer) is required. In the CTL model, this synchronization is based on time, that is, on rounds. Most existing systems do not guarantee timely scheduling, and thus do not provide a synchronization interface between the buffer manager in the kernel and client processes in user space. Therefore, the CTL scheme with double buffers is also not adequate for existing operating systems that have no support for realtime scheduling. According to our experiments, the CTL-based model with triple buffers (P = 2) could produce satisfactory performance on FreeBSD. Compared with the CTL-based models, the proposed scheme shows comparable performance when the system load is not high (fewer than 45 clients), even though it uses fewer buffers than the CTL-based model with triple buffers. When the system load becomes high, the proposed scheme tries to allocate more buffers to absorb the jitter; as a result, it produces a lower miss ratio when the number of clients exceeds 45. Another merit of the proposed scheme is that it does not require metadata for each multimedia file being serviced.

5. Comparison with Related Work

In this section we compare the proposed scheme with related work and analyze the differences from the viewpoints of implementation, disk I/O scheduling, buffer allocation, and admission control. Table 1 summarizes this comparison. Some early works on multimedia servers schedule disk I/O using slack time [1], [9]: they calculate the spare time of each multimedia stream in each round and select the stream with the least spare time to be scheduled first.
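The slack-time policy just described can be stated in a few lines. The stream fields below are invented for the sketch: slack is the time remaining until a stream's deadline minus the service time it still needs, and the scheduler serves the stream with the least slack first.

```python
def pick_next_stream(streams, now):
    """Least-slack-first selection, in the spirit of the early
    slack-time schedulers [1], [9] (hypothetical field names).
    slack = (deadline - now) - remaining service time."""
    return min(streams,
               key=lambda s: (s["deadline"] - now) - s["service_time"])
```

A stream with a later deadline can still be chosen first if its remaining service time is long enough to leave it with less slack.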
They allocate a static number of buffers to each stream. When a new request arrives, they determine admission by examining whether all the streams, including the new one, can still be scheduled in real-time. Another approach is the interval cache [10], [11], [15], whose purpose is to share buffers among multiple clients to reduce the number of disk I/Os. In this case, disk I/O is triggered by cache misses and the buffers are managed globally, so the block replacement algorithm is the main topic. Recently proposed buffer management schemes [2], [4], [6], [7], [12], [15] are based on the CTL (Constant Time Length) model, in which the server retrieves exactly the amount of data that the process will consume during each round of multimedia playback. They also assume that the data blocks consumed in a round are stored contiguously on disk, to exclude the uncertain retrieval time of each block. The buffers for each client are therefore reallocated every round, and a newly arrived request is admitted only when there are enough disk bandwidth and buffers in each round. To support continuous playback and maximize the utilization of system resources in the CTL-based models, additional ideas have been proposed; recent works concentrate on two methods, double buffering [2], [12], [15] and prefetching [2], [6], [7]. In the double buffering model, each multimedia stream requires two buffers: a producing buffer and a consuming buffer. In prefetching models, on the other hand, the spare disk bandwidth in each round, caused by the VBR nature of multimedia files, is used to prefetch data blocks. Although the CTL-based model is well suited to analytic modeling and performance evaluation, much effort is still required to build a running CTL-based multimedia server; only a few works [3], [13] have tried to implement this scheme in the file system of a multimedia server.
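The producing/consuming buffer pair of the double-buffering model can be sketched as follows; the class and method names are invented. The server fills one buffer during a round while the client drains the other, and the roles swap at each round boundary.

```python
class DoubleBuffer:
    """Per-stream double buffering: the server writes into the
    producing buffer while the client reads the consuming buffer;
    at each round boundary the freshly filled buffer becomes the
    one the client reads."""

    def __init__(self):
        self.producing = []   # being filled by the server this round
        self.consuming = []   # being drained by the client this round

    def server_fill(self, blocks):
        self.producing = list(blocks)

    def round_boundary(self):
        # swap roles: last round's production is this round's consumption
        self.consuming = self.producing
        self.producing = []

    def client_read(self):
        return self.consuming
```

If a server round overruns (the situation in Fig. 11), `round_boundary()` fires before `server_fill()` has completed, and the client sees stale or empty data; this is the round-based synchronization that non-realtime kernels cannot guarantee.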
There are some explicit limitations in existing operating systems that make it hard to adopt the CTL model as the basic scheduling scheme of a file system. First, freely available operating systems do not guarantee that client processes are scheduled periodically, and they do not consider each client's deadline. Moreover, the CPU scheduler gives a lower priority to a process that accesses a file at a higher data rate: existing operating systems, including 4.4BSD, give lower priority to processes that use more CPU and sleep less, a bias toward interactive programs. Therefore, to implement a CTL-based buffer manager, we would have to implement another CPU scheduler suitable for a multimedia system, such as a Rate Monotonic or EDF scheduler.

Table 1 Comparison with related work.

                                      Easy to     Disk I/O               Buffer           Admission
                                      Implement   Scheduling             Allocation       Control
  Early Works [1], [9]                No          by Slack Time          Static           by Scheduling
  Interval Cache [10], [11], [15]     Yes         by Cache Miss          Shared globally  N/A
  CTL Model [2], [4], [6], [7],       No          using metadata         Dynamic          by Spare Buffers and
    [12], [15]                                                                            Disk Bandwidth
  Ours                                Yes         by Cluster Read-ahead  Dynamic          by Spare Buffers

Furthermore, the operating system does not guarantee that the server finishes its disk I/O within a round, because the disk device driver of existing operating systems, including 4.4BSD, has only a single request queue and schedules requests with the SCAN or C-SCAN method. Moreover, the CTL model assumes that the blocks accessed in a round are stored contiguously on disk, which existing operating systems do not support. Therefore, to implement a CTL-based buffer manager precisely, we would have to implement another disk scheduler, such as SCAN-EDF, together with contiguous placement of the blocks accessed in a round. More importantly, a CTL server requires metadata describing the media to be played, e.g., the amount of data accessed in each round. Although some studies make use of a metadata generator [6], [16], it is impractical for a server administrator to have such tools for every multimedia file format to be serviced. This paper did not propose a time-driven scheme based on the CTL model, but a remaining-buffer-driven scheme based on cluster read-ahead, which is already adopted by most modern operating systems. This enables the proposed scheme to be implemented easily on existing operating systems with only a slight modification.

6. Conclusion

Since multimedia data has distinctive referencing characteristics, such as realtime playback, the file system of a multimedia server should be designed to meet these requirements. This paper proposes a new buffer cache management scheme and implements it on a FreeBSD system to show its usefulness. The scheme includes a private read-ahead buffer allocation with a FIFO replacement algorithm, dynamic buffer adjustment, and an admission control scheme based on the availability of the next cluster.
Extensive experiments on a FreeBSD system show that the proposed buffer management scheme can produce a higher hit ratio using even fewer buffers than the FreeBSD scheme, by removing buffer contention and adjusting the number of allocated buffers dynamically. This higher hit ratio helps to reduce jitter when playing multimedia files, since it reduces disk I/O time and keeps processes from waiting for I/O. We also compare the performance of the proposed scheme with the CTL-based model. From the analysis of the experimental results, we can argue that the recently proposed CTL-based model is very hard to implement on existing operating systems, including FreeBSD, because of their lack of realtime scheduling capabilities for CPU and disk. On the other hand, since our proposed scheme requires only a slight modification of the file system kernel, it can be implemented easily on a freely available operating system with comparable performance. Furthermore, since the proposed scheme does not require metadata for each multimedia file, it is more applicable to building a real running multimedia server. Since private buffers are allocated to each multimedia file and replaced locally, the proposed scheme has the drawback that a data block in a buffer cannot be shared among processes when the same file is opened and referenced by several processes. However, since the probability that several processes reference the same data blocks of a multimedia file within a given period of time may be very low, this should not be a critical problem in real applications. Furthermore, since the replacement algorithm of the proposed scheme is FIFO, the simplest one, other overheads of the proposed scheme, such as the time to adjust the number of buffers dynamically, can be offset.

References

[1] D.P. Anderson, Y. Osawa, and R. Govindan, A file system for continuous media, ACM Trans. Computer Systems, vol.10, no.4, pp , Nov
[2] N.H. Balkir and G.
Ozsoyoglu, Multimedia presentation servers: Buffer management and admission control, Proceedings of the International Workshop on Multimedia Database Management Systems, IEEE, pp , June
[3] M.M. Buddhikot, X.J. Chen, D. Wu, and G.M. Parulkar, Enhancements to 4.4BSD UNIX for efficient networked multimedia in project MARS, Proceedings of the International Conference on Multimedia Computing and Systems, IEEE, pp , June
[4] E. Chang and A. Zakhor, Cost analyses for VBR video servers, IEEE Multimedia, vol.3, no.4, pp.56–71,
[5] D.J. Gemmell, H.M. Vin, D.D. Kandlur, P.V. Rangan, and L.A. Rowe, Multimedia storage servers: A tutorial, IEEE Computer, vol.28, no.5, pp.40–49, May
[6] I.H. Kim, J.W. Kim, S.W. Lee, and K.D. Chung, VBR video data scheduling using window-based prefetching, Proceedings of the International Conference on Multimedia Computing and Systems, IEEE, vol.1, pp , June
[7] K.O. Lee, J.B. Kwon, and H.Y. Yeom, Exploiting caching for realtime multimedia systems, Proceedings of the International Conference on Multimedia Computing and Systems, IEEE, vol.1, pp , June
[8] S.J. Leffler, M.K. McKusick, M.J. Karels, and J.S. Quarterman, The Design and Implementation of the 4.4BSD UNIX Operating System, Addison-Wesley,
[9] P. Lougher and D. Shepherd, The design of a storage server for continuous media, The Computer Journal, Feb
[10] B.T. Ng and J. Yang, An analysis of buffer sharing and prefetching techniques for multimedia systems, Multimedia Systems, vol.4, no.2, pp.55–69, Springer-Verlag, April
[11] B. Ozden, R. Rastogi, and A. Silberschatz, Buffer replacement algorithm for multimedia storage systems, Proceedings of the International Conference on Multimedia Computing and Systems, IEEE, pp , June
[12] H. Pan, L.H. Ngoh, and A.A. Lazar, A buffer-inventory-based dynamic scheduling algorithm for multimedia-on-demand servers, Multimedia Systems, vol.6, no.2, pp , Springer-Verlag, March
[13] P.J. Shenoy, P. Goyal, S.S. Rao, and H.M. Vin, Symphony:

An integrated multimedia file system, Technical Report TR-97-09, Department of Computer Science, Univ. of Texas at Austin, March
[14] R. Steinmetz and K. Nahrstedt, Multimedia: Computing, Communications and Applications, Prentice Hall,
[15] W.J. Tsai and S.Y. Lee, Dynamic buffer management for near video-on-demand systems, Multimedia Tools and Applications, vol.6, no.1, pp.61–83,
[16] S.R. Yeon and K. Koh, A dynamic buffer management technique for minimizing the necessary buffer space in a continuous media server, Proceedings of the International Conference on Multimedia Computing and Systems, IEEE, pp , June

Jongho Nang is an associate professor in the Department of Computer Science and Engineering at Sogang University. He received his B.S. degree from Sogang University, Korea, in 1986, and his M.S. and Ph.D. degrees from KAIST in 1988 and 1992, respectively. His research interests are in the fields of multimedia systems, digital video libraries, and Internet technologies. He is a member of KISS, ACM, and IEEE.

Sungkwan Heo received his B.S. degree from Dankook University, Seoul, Korea, in 1995, and his M.S. degree from Sogang University, Seoul, Korea. In 1997, he joined Turbotek Corporation. His research interests include operating systems, multimedia systems, real-time embedded systems, and computer networks.


More information

Chapter 8 Virtual Memory

Chapter 8 Virtual Memory Chapter 8 Virtual Memory Contents Hardware and control structures Operating system software Unix and Solaris memory management Linux memory management Windows 2000 memory management Characteristics of

More information

Frank Miller, George Apostolopoulos, and Satish Tripathi. University of Maryland. College Park, MD ffwmiller, georgeap,

Frank Miller, George Apostolopoulos, and Satish Tripathi. University of Maryland. College Park, MD ffwmiller, georgeap, Simple Input/Output Streaming in the Operating System Frank Miller, George Apostolopoulos, and Satish Tripathi Mobile Computing and Multimedia Laboratory Department of Computer Science University of Maryland

More information

P2FS: supporting atomic writes for reliable file system design in PCM storage

P2FS: supporting atomic writes for reliable file system design in PCM storage LETTER IEICE Electronics Express, Vol.11, No.13, 1 6 P2FS: supporting atomic writes for reliable file system design in PCM storage Eunji Lee 1, Kern Koh 2, and Hyokyung Bahn 2a) 1 Department of Software,

More information

Research Article A Two-Level Cache for Distributed Information Retrieval in Search Engines

Research Article A Two-Level Cache for Distributed Information Retrieval in Search Engines The Scientific World Journal Volume 2013, Article ID 596724, 6 pages http://dx.doi.org/10.1155/2013/596724 Research Article A Two-Level Cache for Distributed Information Retrieval in Search Engines Weizhe

More information

Main Points of the Computer Organization and System Software Module

Main Points of the Computer Organization and System Software Module Main Points of the Computer Organization and System Software Module You can find below the topics we have covered during the COSS module. Reading the relevant parts of the textbooks is essential for a

More information

Maximizing the Number of Users in an Interactive Video-on-Demand System

Maximizing the Number of Users in an Interactive Video-on-Demand System IEEE TRANSACTIONS ON BROADCASTING, VOL. 48, NO. 4, DECEMBER 2002 281 Maximizing the Number of Users in an Interactive Video-on-Demand System Spiridon Bakiras, Member, IEEE and Victor O. K. Li, Fellow,

More information

Efficient Media Synchronization Method for Video Telephony System

Efficient Media Synchronization Method for Video Telephony System IEICE TRANS. INF. & SYST., VOL.E89 D, NO.6 JUNE 2006 1901 LETTER Special Section on Human Communication II Efficient Media Synchronization Method for Video Telephony System Chanwoo KIM, Nonmember, Kwang-DeokSEO

More information

Operating System Concepts

Operating System Concepts Chapter 9: Virtual-Memory Management 9.1 Silberschatz, Galvin and Gagne 2005 Chapter 9: Virtual Memory Background Demand Paging Copy-on-Write Page Replacement Allocation of Frames Thrashing Memory-Mapped

More information

Study of Load Balancing Schemes over a Video on Demand System

Study of Load Balancing Schemes over a Video on Demand System Study of Load Balancing Schemes over a Video on Demand System Priyank Singhal Ashish Chhabria Nupur Bansal Nataasha Raul Research Scholar, Computer Department Abstract: Load balancing algorithms on Video

More information

Addresses in the source program are generally symbolic. A compiler will typically bind these symbolic addresses to re-locatable addresses.

Addresses in the source program are generally symbolic. A compiler will typically bind these symbolic addresses to re-locatable addresses. 1 Memory Management Address Binding The normal procedures is to select one of the processes in the input queue and to load that process into memory. As the process executed, it accesses instructions and

More information

Chapter 8: Main Memory. Operating System Concepts 9 th Edition

Chapter 8: Main Memory. Operating System Concepts 9 th Edition Chapter 8: Main Memory Silberschatz, Galvin and Gagne 2013 Chapter 8: Memory Management Background Swapping Contiguous Memory Allocation Segmentation Paging Structure of the Page Table Example: The Intel

More information

Data Placement for Continuous Media in Multimedia DBMS

Data Placement for Continuous Media in Multimedia DBMS Data Placement for Continuous Media in Multimedia DBMS Taeck-Geun Kwon 1;2 Sukho Lee 2 1 R&D Center 2 Dept. of Computer Engineering LG Info. & Communications, Ltd., Seoul National University Anyang 430-080,

More information

Virtual Memory Management

Virtual Memory Management Virtual Memory Management CS-3013 Operating Systems Hugh C. Lauer (Slides include materials from Slides include materials from Modern Operating Systems, 3 rd ed., by Andrew Tanenbaum and from Operating

More information

Chapter 8: Virtual Memory. Operating System Concepts

Chapter 8: Virtual Memory. Operating System Concepts Chapter 8: Virtual Memory Silberschatz, Galvin and Gagne 2009 Chapter 8: Virtual Memory Background Demand Paging Copy-on-Write Page Replacement Allocation of Frames Thrashing Memory-Mapped Files Allocating

More information

Stretch-Optimal Scheduling for On-Demand Data Broadcasts

Stretch-Optimal Scheduling for On-Demand Data Broadcasts Stretch-Optimal Scheduling for On-Demand Data roadcasts Yiqiong Wu and Guohong Cao Department of Computer Science & Engineering The Pennsylvania State University, University Park, PA 6 E-mail: fywu,gcaog@cse.psu.edu

More information

COOCHING: Cooperative Prefetching Strategy for P2P Video-on-Demand System

COOCHING: Cooperative Prefetching Strategy for P2P Video-on-Demand System COOCHING: Cooperative Prefetching Strategy for P2P Video-on-Demand System Ubaid Abbasi and Toufik Ahmed CNRS abri ab. University of Bordeaux 1 351 Cours de la ibération, Talence Cedex 33405 France {abbasi,

More information

Distributed Video Systems Chapter 5 Issues in Video Storage and Retrieval Part I - The Single-Disk Case

Distributed Video Systems Chapter 5 Issues in Video Storage and Retrieval Part I - The Single-Disk Case Distributed Video Systems Chapter 5 Issues in Video Storage and Retrieval Part I - he Single-Disk Case Jack Yiu-bun Lee Department of Information Engineering he Chinese University of Hong Kong Contents

More information

IN recent years, the amount of traffic has rapidly increased

IN recent years, the amount of traffic has rapidly increased , March 15-17, 2017, Hong Kong Content Download Method with Distributed Cache Management Masamitsu Iio, Kouji Hirata, and Miki Yamamoto Abstract This paper proposes a content download method with distributed

More information

Chapter 8: Memory- Management Strategies. Operating System Concepts 9 th Edition

Chapter 8: Memory- Management Strategies. Operating System Concepts 9 th Edition Chapter 8: Memory- Management Strategies Operating System Concepts 9 th Edition Silberschatz, Galvin and Gagne 2013 Chapter 8: Memory Management Strategies Background Swapping Contiguous Memory Allocation

More information

Chapter 8: Memory- Management Strategies

Chapter 8: Memory- Management Strategies Chapter 8: Memory Management Strategies Chapter 8: Memory- Management Strategies Background Swapping Contiguous Memory Allocation Segmentation Paging Structure of the Page Table Example: The Intel 32 and

More information

Outlook. Background Swapping Contiguous Memory Allocation Paging Structure of the Page Table Segmentation Example: The Intel Pentium

Outlook. Background Swapping Contiguous Memory Allocation Paging Structure of the Page Table Segmentation Example: The Intel Pentium Main Memory Outlook Background Swapping Contiguous Memory Allocation Paging Structure of the Page Table Segmentation Example: The Intel Pentium 2 Backgound Background So far we considered how to share

More information

Real-Time Protocol (RTP)

Real-Time Protocol (RTP) Real-Time Protocol (RTP) Provides standard packet format for real-time application Typically runs over UDP Specifies header fields below Payload Type: 7 bits, providing 128 possible different types of

More information

Lecture 14 Page Replacement Policies

Lecture 14 Page Replacement Policies CS 423 Operating Systems Design Lecture 14 Page Replacement Policies Klara Nahrstedt Fall 2011 Based on slides by YY Zhou and Andrew S. Tanenbaum Overview Administrative Issues Page Replacement Policies

More information

CSE 120 PRACTICE FINAL EXAM, WINTER 2013

CSE 120 PRACTICE FINAL EXAM, WINTER 2013 CSE 120 PRACTICE FINAL EXAM, WINTER 2013 For each question, select the best choice. In the space provided below each question, justify your choice by providing a succinct (one sentence) explanation. 1.

More information

Temporal Protection in Real-Time Operating Systems. Abstract

Temporal Protection in Real-Time Operating Systems. Abstract Temporal Protection in Real-Time Operating Systems Cliff Mercer 1, Ragunathan Rajkumar 2 and Jim Zelenka 1 1 Department of Computer Science 2 Software Engineering Institute Carnegie Mellon University Pittsburgh,

More information

Symphony: An Integrated Multimedia File System æ

Symphony: An Integrated Multimedia File System æ Symphony: An Integrated Multimedia File System æ Prashant J. Shenoy, Pawan Goyal, Sriram S. Rao, and Harrick M. Vin Distributed Multimedia Computing Laboratory Department of Computer Sciences, University

More information

Prefetch Threads for Database Operations on a Simultaneous Multi-threaded Processor

Prefetch Threads for Database Operations on a Simultaneous Multi-threaded Processor Prefetch Threads for Database Operations on a Simultaneous Multi-threaded Processor Kostas Papadopoulos December 11, 2005 Abstract Simultaneous Multi-threading (SMT) has been developed to increase instruction

More information

Performance Study of a QoS scheduling algorithm over wireless networks

Performance Study of a QoS scheduling algorithm over wireless networks 1 Performance Study of a QoS scheduling algorithm over wireless networks Siew HuiPei Joanna, Student Member, IEEE Abstract With the proliferation of wireless networks, consumers are increasingly aware

More information

Chapter 8: Memory-Management Strategies

Chapter 8: Memory-Management Strategies Chapter 8: Memory-Management Strategies Chapter 8: Memory Management Strategies Background Swapping Contiguous Memory Allocation Segmentation Paging Structure of the Page Table Example: The Intel 32 and

More information

The Case for Reexamining Multimedia File System Design

The Case for Reexamining Multimedia File System Design The Case for Reexamining Multimedia File System Design Position Statement Prashant Shenoy Department of Computer Science, University of Massachusetts, Amherst, MA 01003. shenoy@cs.umass.edu Research in

More information

Efficient support for interactive operations in multi-resolution video servers

Efficient support for interactive operations in multi-resolution video servers Multimedia Systems 7: 241 253 (1999) Multimedia Systems c Springer-Verlag 1999 Efficient support for interactive operations in multi-resolution video servers Prashant J. Shenoy, Harrick M. Vin Distributed

More information

Volume 3, Issue 9, September 2013 International Journal of Advanced Research in Computer Science and Software Engineering

Volume 3, Issue 9, September 2013 International Journal of Advanced Research in Computer Science and Software Engineering Volume 3, Issue 9, September 2013 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Optimal Round

More information

Chapter 7: Main Memory. Operating System Concepts Essentials 8 th Edition

Chapter 7: Main Memory. Operating System Concepts Essentials 8 th Edition Chapter 7: Main Memory Operating System Concepts Essentials 8 th Edition Silberschatz, Galvin and Gagne 2011 Chapter 7: Memory Management Background Swapping Contiguous Memory Allocation Paging Structure

More information

Chapter 9: Virtual Memory

Chapter 9: Virtual Memory Chapter 9: Virtual Memory Silberschatz, Galvin and Gagne 2013 Chapter 9: Virtual Memory Background Demand Paging Copy-on-Write Page Replacement Allocation of Frames Thrashing Memory-Mapped Files Allocating

More information

Periodic Thread A. Deadline Handling Thread. Periodic Thread B. Periodic Thread C. Rate Change. Deadline Notification Port

Periodic Thread A. Deadline Handling Thread. Periodic Thread B. Periodic Thread C. Rate Change. Deadline Notification Port A Continuous Media Application supporting Dynamic QOS Control on Real-Time Mach Tatsuo Nakajima Hiroshi Tezuka Japan Advanced Institute of Science and Technology 15 Asahidai, Tatsunokuchi, Ishikawa, 923-12

More information

Temporal Protection in Real-Time Operating Systems. Abstract

Temporal Protection in Real-Time Operating Systems. Abstract Temporal Protection in Real-Time Operating Systems Cliff Mercer*, Ragunathan Rajkumar+ and Jim Zelenka* *Department of Computer Science +Software Engineering Institute Carnegie Mellon University Pittsburgh,

More information

Lecture 18 File Systems and their Management and Optimization

Lecture 18 File Systems and their Management and Optimization CS 423 Operating Systems Design Lecture 18 File Systems and their Management and Optimization Klara Nahrstedt Fall 2011 Based on slides by YY Zhou and Andrew S. Tanenbaum Overview Administrative announcements

More information

Why Study Multimedia? Operating Systems. Multimedia Resource Requirements. Continuous Media. Influences on Quality. An End-To-End Problem

Why Study Multimedia? Operating Systems. Multimedia Resource Requirements. Continuous Media. Influences on Quality. An End-To-End Problem Why Study Multimedia? Operating Systems Operating System Support for Multimedia Improvements: Telecommunications Environments Communication Fun Outgrowth from industry telecommunications consumer electronics

More information

PAPER Practical Issues Related to Disk Scheduling for Video-On-Demand Services

PAPER Practical Issues Related to Disk Scheduling for Video-On-Demand Services 2156 PAPER Practical Issues Related to Disk Scheduling for Video-On-Demand Services Ilhoon SHIN a), Student Member,KernKOH, and Youjip WON, Nonmembers SUMMARY This paper discusses several practical issues

More information

G Robert Grimm New York University

G Robert Grimm New York University G22.3250-001 Receiver Livelock Robert Grimm New York University Altogether Now: The Three Questions What is the problem? What is new or different? What are the contributions and limitations? Motivation

More information

Chapter 9: Virtual-Memory

Chapter 9: Virtual-Memory Chapter 9: Virtual-Memory Management Chapter 9: Virtual-Memory Management Background Demand Paging Page Replacement Allocation of Frames Thrashing Other Considerations Silberschatz, Galvin and Gagne 2013

More information

Computer System Overview. Chapter 1

Computer System Overview. Chapter 1 Computer System Overview Chapter 1 Operating System Exploits the hardware resources of one or more processors Provides a set of services to system users Manages secondary memory and I/O devices Basic Elements

More information

Memory management. Last modified: Adaptation of Silberschatz, Galvin, Gagne slides for the textbook Applied Operating Systems Concepts

Memory management. Last modified: Adaptation of Silberschatz, Galvin, Gagne slides for the textbook Applied Operating Systems Concepts Memory management Last modified: 26.04.2016 1 Contents Background Logical and physical address spaces; address binding Overlaying, swapping Contiguous Memory Allocation Segmentation Paging Structure of

More information

Feedback-based dynamic proportion allocation for disk I

Feedback-based dynamic proportion allocation for disk I Oregon Health & Science University OHSU Digital Commons CSETech December 1998 Feedback-based dynamic proportion allocation for disk I Dan Revel Dylan McNamee Calton Pu David Steere Jonathan Walpole Follow

More information

Chapter 8: Main Memory

Chapter 8: Main Memory Chapter 8: Main Memory Silberschatz, Galvin and Gagne 2013 Chapter 8: Memory Management Background Swapping Contiguous Memory Allocation Segmentation Paging Structure of the Page Table Example: The Intel

More information

UBC: An Efficient Unified I/O and Memory Caching Subsystem for NetBSD

UBC: An Efficient Unified I/O and Memory Caching Subsystem for NetBSD UBC: An Efficient Unified I/O and Memory Caching Subsystem for NetBSD Chuck Silvers The NetBSD Project chuq@chuq.com, http://www.netbsd.org/ Abstract This paper introduces UBC ( Unified Buffer Cache ),

More information

Introduction. Application Performance in the QLinux Multimedia Operating System. Solution: QLinux. Introduction. Outline. QLinux Design Principles

Introduction. Application Performance in the QLinux Multimedia Operating System. Solution: QLinux. Introduction. Outline. QLinux Design Principles Application Performance in the QLinux Multimedia Operating System Sundaram, A. Chandra, P. Goyal, P. Shenoy, J. Sahni and H. Vin Umass Amherst, U of Texas Austin ACM Multimedia, 2000 Introduction General

More information

CHAPTER 8: MEMORY MANAGEMENT. By I-Chen Lin Textbook: Operating System Concepts 9th Ed.

CHAPTER 8: MEMORY MANAGEMENT. By I-Chen Lin Textbook: Operating System Concepts 9th Ed. CHAPTER 8: MEMORY MANAGEMENT By I-Chen Lin Textbook: Operating System Concepts 9th Ed. Chapter 8: Memory Management Background Swapping Contiguous Memory Allocation Segmentation Paging Structure of the

More information

CHAPTER 8 - MEMORY MANAGEMENT STRATEGIES

CHAPTER 8 - MEMORY MANAGEMENT STRATEGIES CHAPTER 8 - MEMORY MANAGEMENT STRATEGIES OBJECTIVES Detailed description of various ways of organizing memory hardware Various memory-management techniques, including paging and segmentation To provide

More information

Lecture: Cache Hierarchies. Topics: cache innovations (Sections B.1-B.3, 2.1)

Lecture: Cache Hierarchies. Topics: cache innovations (Sections B.1-B.3, 2.1) Lecture: Cache Hierarchies Topics: cache innovations (Sections B.1-B.3, 2.1) 1 Types of Cache Misses Compulsory misses: happens the first time a memory word is accessed the misses for an infinite cache

More information

Nowadays data-intensive applications play a

Nowadays data-intensive applications play a Journal of Advances in Computer Engineering and Technology, 3(2) 2017 Data Replication-Based Scheduling in Cloud Computing Environment Bahareh Rahmati 1, Amir Masoud Rahmani 2 Received (2016-02-02) Accepted

More information

Memory. Objectives. Introduction. 6.2 Types of Memory

Memory. Objectives. Introduction. 6.2 Types of Memory Memory Objectives Master the concepts of hierarchical memory organization. Understand how each level of memory contributes to system performance, and how the performance is measured. Master the concepts

More information

A Fault Tolerant Video Server Using Combined Raid 5 and Mirroring

A Fault Tolerant Video Server Using Combined Raid 5 and Mirroring Proceedings of Multimedia Computing and Networking 1997 (MMCN97), San Jose, CA, February 1997 A Fault Tolerant Video Server Using Combined Raid 5 and Mirroring Ernst W. BIERSACK, Christoph BERNHARDT Institut

More information

Parallelizing Inline Data Reduction Operations for Primary Storage Systems

Parallelizing Inline Data Reduction Operations for Primary Storage Systems Parallelizing Inline Data Reduction Operations for Primary Storage Systems Jeonghyeon Ma ( ) and Chanik Park Department of Computer Science and Engineering, POSTECH, Pohang, South Korea {doitnow0415,cipark}@postech.ac.kr

More information

The Memory System. Components of the Memory System. Problems with the Memory System. A Solution

The Memory System. Components of the Memory System. Problems with the Memory System. A Solution Datorarkitektur Fö 2-1 Datorarkitektur Fö 2-2 Components of the Memory System The Memory System 1. Components of the Memory System Main : fast, random access, expensive, located close (but not inside)

More information

Paging algorithms. CS 241 February 10, Copyright : University of Illinois CS 241 Staff 1

Paging algorithms. CS 241 February 10, Copyright : University of Illinois CS 241 Staff 1 Paging algorithms CS 241 February 10, 2012 Copyright : University of Illinois CS 241 Staff 1 Announcements MP2 due Tuesday Fabulous Prizes Wednesday! 2 Paging On heavily-loaded systems, memory can fill

More information

Chapter 13: I/O Systems

Chapter 13: I/O Systems Chapter 13: I/O Systems Chapter 13: I/O Systems I/O Hardware Application I/O Interface Kernel I/O Subsystem Transforming I/O Requests to Hardware Operations Streams Performance 13.2 Silberschatz, Galvin

More information