BiHOP: A Bidirectional Highly Optimized Pipelining Technique for Large-Scale Multimedia Servers


Kien A. Hua, James Z. Wang, Simon Sheu
Department of Computer Science, University of Central Florida, Orlando, FL, U.S.A.

Abstract

We present a technique, called Bidirectional Highly Optimized Pipelining (BiHOP), for managing disks as a buffer for the tertiary storage of a multimedia server. We implemented a simulator to compare its performance to that of a recently proposed scheme called SEP. The results show that BiHOP performs significantly better. Its superior performance is attributed to a novel caching approach which caches every other data fragment of a multimedia file, rather than caching consecutive fragments as in traditional practice. This new approach allows us to use tiny staging buffers for pipelining, which can be implemented in memory to conserve disk bandwidth. Furthermore, the whole disk space can be dedicated to caching in order to improve the hit ratio. Another important advantage of BiHOP is its ability to pipeline in either the forward or the reverse direction with the same efficiency. This unique feature, not possible with existing schemes, makes it natural for implementing VCR functions.

1. Introduction

The high storage cost of video files is a major concern for many potential multimedia applications. Even with the most sophisticated compression, video has a voracious appetite for storage. For example, to compress a 30-second ad with MPEG-2 running at 4 megabits per second, we would need 15 MBytes, and a 100-minute movie would take 3 GBytes. One way to reduce the storage cost is to organize the storage subsystem as a hierarchy, in which the magnetic disks are used as a cache for the tertiary storage devices (e.g., optical disk arrays). When a cache miss occurs in a hierarchical storage subsystem, the simplest way to deal with it is to materialize the whole object onto the disk before sending it to the display station.
This approach, however, will result in unacceptable latencies. A pipelining technique, called PIRATE, was proposed in [4] by Ghandeharizadeh and Shahabi to address this problem. In their scheme, a video file is divided into a sequence of slices S0, S1, ..., Sn such that the time of displaying Si eclipses the time required to materialize (i.e., load onto disk) Si+1, where 0 ≤ i < n. This strategy ensures a continuous display while reducing the latency time, because the system can initiate the display of an object as soon as a fraction of the object (i.e., S0) is disk resident. In this paper, S0 is referred to as the HEAD, and the following slices (i.e., S1, ..., Sn) are collectively referred to as the TAIL of the object. A drawback of this scheme is that disk space large enough to contain the entire video file must be reserved before the pipelining mechanism can take place. Waiting for the availability of such a large disk space can lengthen the access latency. The demand for the large buffer space will also flush out much potentially useful data. To address the aforementioned issues, SEP (Space Efficient Pipelining), proposed by Wang, Hua and Young in [8], pipelines the slices in the TAIL through a staging buffer equal to the size of S1. As soon as the pipelining is completed, the space occupied by this buffer is immediately returned to the buffer pool. To further improve the performance, three additional features were used in [8]: buffer shrinking, space stealing and object pinning. Although we showed in [8] that SEP significantly improved the long latency times of PIRATE, its space requirement is still very significant. Another disadvantage of this scheme is that pipelining can only be done in the forward direction. This drawback makes it unsuitable for implementing VCR functions. In this paper, we propose a different pipelining approach, called BiHOP (Bidirectional Highly Optimized Pipelining). Our design is motivated by the following two factors:

1. Reducing Pipelining Cost: The admission cost for a request under SEP is still quite expensive. Even when the HEAD of an object is already disk resident, SEP needs to reserve a staging buffer about the size of the second slice of the object before pipelining can take place. We should be able to reduce this size to only a few disk blocks. If this goal is achieved, we will be able to free up a lot of disk space to support a larger number of concurrent users; the lower admission cost will also help to reduce the access latencies.

2. Supporting VCR functions: It is highly desirable to provide common VCR functions such as fast-forward and fast-reverse. These features are not well supported by PIRATE or SEP. We need to design a bidirectional pipelining strategy. Such a symmetric approach will allow the user to scan in either direction with the same efficiency. Preferably, the caching strategy can effectively support fast-forward and fast-reverse without having to involve the tertiary devices, because their bandwidths are very limited.

Thus our ambition is to address both the performance and the functionality issues. Both of these objectives were achieved in BiHOP, which is bidirectional in functionality and optimal in space utilization. The remainder of this paper is organized as follows. BiHOP and its benefits are discussed in Section 2. In Section 3, we describe our simulation model. The results of the performance study are examined in Section 4. In Section 5, we focus on the implementation of VCR functions. Finally, we give the conclusions and discuss our future research in Section 6.

2. The BiHOP Approach

We describe the pipelining strategy and the data replacement policy used by BiHOP in the following subsections. Discussions of the VCR functions are deferred until Section 5.

2.1. Intelligent Pipelining

In BiHOP, we divide the whole object into two categories of fragments. One category is called disk-resident fragments (or D-fragments) and the other is called tertiary-device-resident fragments (or T-fragments).
The D- and T-fragments interleave in the data file, as illustrated in Figure 1. With this file organization, the pipelining is performed as follows. As the system displays the first D-fragment D0, it materializes the next T-fragment T1 from the tertiary devices. For i ≥ 1, it materializes T_{i+1} while displaying T_i and D_i. Obviously, to maintain a continuous display, the elapsed time of displaying D0 should be equal to the elapsed time of materializing T1, and the elapsed time of displaying T_i and D_i should be equal to the elapsed time of materializing T_{i+1} for i ≥ 1.

Figure 1. The BiHOP pipelining technique (D-fragments D0, D1, ..., Dn are cached; T-fragments T1, T2, ..., Tn are loaded on demand).

Mathematically, we can express the above requirements as follows:

    |D0| / R_d = |T1| / R_m                                       (1)
    (|T_i| + |D_i|) / R_d = |T_{i+1}| / R_m,   for i ≥ 1          (2)

where R_m and R_d denote the materialize rate of an object from the tertiary devices to the disks and the display rate of data from the disks to a station, respectively. We make the sizes of the fragments more uniform by letting |T_i| = T and |D_i| = D for i ≥ 1, and |D0| = D0. Substituting these values into Equation (2), we have:

    D = T · (1 − PCR) / PCR

where PCR (Production Consumption Rate) is defined as the ratio of R_m to R_d. The size S of the entire object can then be computed in terms of T, D and D0 as follows:

    S = D0 + n · (T + D)

where n is the number of T-fragments. Let S_D and S_T represent the accumulated size of all the D-fragments and the accumulated size of all the T-fragments, respectively. S_D and S_T can be computed as follows:

    S_D = D0 + n · D
    S_T = n · T

If n ≫ 1, we have the following approximation:

    S_D ≈ S · (1 − PCR)                                           (3)
    S_T ≈ S · PCR

The equations derived above serve as the foundation for the subsequent discussions. We note that, unlike the data fragments in PIRATE and SEP, which are monotonically decreasing in size, there are only two sizes of fragments in BiHOP. This regular design makes it possible to pipeline in either the forward or the reverse direction. We will discuss the implementation of VCR functions in detail later.
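To make these relations concrete, the following sketch (Python; the function and symbol names are ours, not the paper's) computes the fragment sizes from Equations (1)-(2) and checks the approximation of Equation (3):

```python
# Sketch of the BiHOP fragment-size relations (Equations (1)-(3)).
# R_m = materialize rate, R_d = display rate, PCR = R_m / R_d.

def bihop_fragments(T, pcr, n):
    """Given a T-fragment size T, the PCR, and the number n of
    T-fragments, return (D, D0, S, S_D, S_T) per Equations (1)-(2)."""
    D = T * (1 - pcr) / pcr      # from Eq. (2): (T + D)/R_d = T/R_m
    D0 = T / pcr                 # from Eq. (1): D0/R_d = T/R_m
    S = D0 + n * (T + D)         # total object size
    S_D = D0 + n * D             # accumulated D-fragments (disk resident)
    S_T = n * T                  # accumulated T-fragments (on tertiary)
    return D, D0, S, S_D, S_T

D, D0, S, S_D, S_T = bihop_fragments(T=3, pcr=0.6, n=10_000)
# For large n, Equation (3) holds: S_D ≈ S(1 - PCR) and S_T ≈ S·PCR.
assert abs(S_D / S - (1 - 0.6)) < 1e-3
assert abs(S_T / S - 0.6) < 1e-3
```

Note that T and D are constants here, in contrast to PIRATE's geometrically shrinking slices; this uniformity is what makes reverse pipelining possible.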

2.2. Space Optimization

With the new pipelining scheme, we must load all the D-fragments into the disk system before the display can start. We note that the size of S_D is equal to the size of the HEAD (i.e., S0) in PIRATE and SEP for a given file. In BiHOP, the D-fragments (disk-resident fragments) are kept in the disk buffer for as long as possible. We will discuss the replacement policy in the next subsection. For the moment, let us focus on the space required by the staging buffers to retrieve the T-fragments from the tertiary devices. There are two ways to implement the staging buffers:

Double Buffer: We maintain two buffers, one for reading and one for writing. While the data of a T-fragment is being transferred to the station from one buffer (the consumption buffer), the tertiary device writes into the other buffer (the production buffer). These two buffers switch roles when the current consumption buffer is exhausted. Obviously, the size of each of these two buffers is T, and the total space required by this scheme is 2T.

Single Buffer: This approach uses a circular buffer shared by both the consumption and production procedures, as illustrated in Figure 2. The space requirement for this approach is T.

Figure 2. Circular staging buffer (the tertiary device loads data into the free part of the memory buffer while the used part is consumed for display).

With either approach, we should minimize T to keep the size of the staging buffer minimal. We can reduce the fraction PCR = R_m / R_d to its irreducible form a/b, such that a is relatively prime to b. Thus, the minimum size for the T-fragments is a blocks, where a block is an efficient unit for I/O operations. Accordingly, the size of the D-fragments, except the first one, should be b − a blocks, and the size of the first D-fragment should be b blocks. For instance, if PCR = 0.6, we have a = 3 and b = 5. The sizes of the D-fragments and T-fragments, therefore, are 2 blocks and 3 blocks, respectively. This example is illustrated in Figure 1. Let a block be 4 KBytes.
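Since PCR must be reduced to lowest terms, the block-level sizing reduces to a gcd computation. A small sketch (ours; the rates 60 and 100 blocks/sec are hypothetical values chosen only to give PCR = 0.6):

```python
from fractions import Fraction

def fragment_sizes_in_blocks(r_m, r_d):
    """Reduce PCR = R_m/R_d to its irreducible form a/b; a T-fragment
    is then a blocks, a D-fragment b - a blocks, and D0 is b blocks."""
    pcr = Fraction(r_m, r_d)          # automatically in lowest terms
    a, b = pcr.numerator, pcr.denominator
    return {"T": a, "D": b - a, "D0": b}

sizes = fragment_sizes_in_blocks(r_m=60, r_d=100)   # PCR = 0.6 = 3/5
assert sizes == {"T": 3, "D": 2, "D0": 5}

BLOCK = 4 * 1024                        # 4-KByte blocks
double_buffer = 2 * sizes["T"] * BLOCK  # two T-sized buffers
single_buffer = sizes["T"] * BLOCK      # one circular T-sized buffer
assert double_buffer == 24 * 1024
assert single_buffer == 12 * 1024
```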
The size of the staging buffer is then 2T = 6 blocks, or 24 KBytes, if double buffering is used, and T = 3 blocks, or 12 KBytes, if circular buffering is used. SEP would have required a staging buffer as large as 363 MBytes. PIRATE does not use a staging buffer; in this case, it would have required a disk space of 907 MBytes in order to retrieve the TAIL of the object. Obviously, the savings due to BiHOP are tremendous. The tiny size of the staging buffers used in BiHOP offers many benefits:

- Since the size is so small, the staging buffer can be implemented in memory. This approach allows the pipelining to bypass the disk subsystem, leaving all the disk bandwidth to the replacement activities of the D-fragments.
- Since the staging buffers require no disk space, the disk space saved can be used to support more users, and therefore to improve the throughput of the system.
- A smaller staging buffer translates into a lower admission cost. Users, therefore, will experience better access latencies.

We note that one can consider using the technique proposed in [7] to manage the in-memory staging buffers. This scheme takes advantage of the fact that each staging buffer shrinks as its data are forwarded to the station. Since the storage subsystem must multiplex its bandwidth to refresh these buffers in a round-robin fashion, the space released by the shrinking buffers can be given to the ones which are being refreshed. Since the staging buffers take turns using the same memory space, a reduction in the memory requirement is possible. The performance study in [7] shows that up to 50% savings in memory space are achievable.

2.3. Replacement Policy

As we have mentioned, pipelining is done through the memory system, bypassing the disk units. The disk buffer is used exclusively for caching the D-fragments. The replacement policy for the D-fragments is presented in Figure 3. The following notations are used in the algorithm:

Size(X): size of the requested object X, in blocks.
Cached(X): size of the disk-resident portion of object X.
Heat(X): access frequency of object X.
Ω: the set of disk-resident objects not currently displayed.

Algorithm RESERVE is based on Equation (3). For each video object requested, it computes the additional amount of disk space required to load the D-fragments not currently in the disk buffer. Once this has been determined, Algorithm REPLACE tries to satisfy this requirement by using as much of the free disk space as possible. If there is not enough free disk space, it casts out as many objects as necessary to make room for the request. We note that the unit of replacement is a D-fragment, not a whole object. Thus, some of the disk-resident objects may have some, but not all, of their D-fragments in the buffer. We note that an LRU policy can be used to select victims for replacement. Alternatively, the access frequencies of video objects are usually known beforehand [2]; this information can also be used to select victims. Without loss of generality, the latter approach is used in the presentation of Algorithm REPLACE.

Algorithm RESERVE(X):
    if no D-fragment of X is disk resident then
        needed ← S_D(X)                          // from Equation (3)
    else
        needed ← S_D(X) − Cached(X)
    return needed

Algorithm REPLACE(X):
    needed ← RESERVE(X) − free disk space
    if needed > 0 then
        repeat
            victim ← the object in Ω with the lowest Heat
            if victim is null then return failure
            if Cached(victim) > needed then
                free the last needed blocks of victim's space
                Cached(victim) ← Cached(victim) − needed
                needed ← 0
            else
                displace victim to make room for X
                remove victim from Ω
                needed ← needed − Cached(victim)
        until needed ≤ 0
    allocate RESERVE(X) amount of free disk space to object X
    return success

Figure 3. The BiHOP replacement algorithm.

3. Simulation Model

In the previous sections, we have analyzed the advantages of BiHOP in terms of disk-space utilization. Although this metric has the most direct impact on system performance, it is still worthwhile to investigate the ultimate performance metrics, namely access delay and system throughput. To do this, we decided to use a simulation model, since the analysis becomes too complex to carry out analytically. The simulation environment is presented in the following; we will examine the simulation results in the next section.
Our simulation model is similar to the one used in [8]. The Request Generator generates requests for multimedia objects and submits them to the Waiting Queue. The Scheduler examines the requests in the queue in an FCFS manner. When bandwidth becomes available to serve the pending request at the head of the queue, the Scheduler forwards the request to the Serving Unit. The Serving Unit then allocates a playback stream to serve this request. The Serving Unit simulates a hierarchical storage system and the playback mechanism. The buffer manager was implemented using the replacement policy presented in Figure 3. We note that the requests arriving at the Waiting Queue can be viewed as coming from different users. Our simulator allows multiple requests to be served simultaneously by different playback streams. This model is different from the single-user environment modeled in [4], which does not allow concurrent playback of several video files.

In terms of the workloads, each user request is characterized by an interarrival time and a choice of object. User request interarrivals were modeled using a Poisson process. The access frequencies of the objects in the database follow a Zipf-like distribution [5, 6, 9]. Let N be the total number of requests for a simulation run. The number of requests for object i is determined as follows:

    N_i = N · (1 / i^θ) / ( Σ_{j=1}^{M} 1 / j^θ )

where M is the number of objects in the system, and 0 ≤ θ ≤ 1 is the skew factor. A larger θ corresponds to a more skewed condition, i.e., some objects are accessed considerably more frequently than other objects. When θ = 0, the distribution is uniform, i.e., all the objects have the same access frequency. This Zipf-like distribution is similar to the distribution used in [2]. Each workload consists of 20,000 requests. A workload, called a job sequence, is generated for each skew condition. For each simulation run, the same sequence is used for both BiHOP and SEP. Thus, the Request Generator does not really generate requests on the fly.
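The workload just described (Poisson arrivals, Zipf-like object popularity) can be sketched as follows; the function names are ours:

```python
import random

def zipf_request_counts(n_requests, n_objects, theta):
    """Expected number of requests per object under the Zipf-like law
    N_i ∝ 1 / i^theta (theta = 0 gives a uniform distribution)."""
    weights = [1.0 / (i ** theta) for i in range(1, n_objects + 1)]
    total = sum(weights)
    return [n_requests * w / total for w in weights]

def poisson_interarrival(rate_per_min):
    """Exponential interarrival time (in seconds) of a Poisson process."""
    return random.expovariate(rate_per_min / 60.0)

counts = zipf_request_counts(n_requests=20_000, n_objects=600, theta=0.7)
assert abs(sum(counts) - 20_000) < 1e-3
assert counts[0] > counts[-1]          # skew: object 1 is the most popular

uniform = zipf_request_counts(20_000, 600, theta=0.0)
assert all(abs(c - uniform[0]) < 1e-9 for c in uniform)
```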
Instead, it keeps a database of these request sequences. For each simulation run, it scans the appropriate sequence and appends the next request from the sequence to the Waiting Queue when the corresponding interarrival time is up. Without loss of generality, we assume that all client devices have the same display rate (i.e., R_d is constant); R_m is then determined from the PCR. The default values for the system and workload parameters are given in Table 1. In our experiments, many of these parameters were also varied to perform various sensitivity analyses. In this study, the system throughput is computed by dividing the number of requests in the job sequence (i.e., 20,000) by the total simulated time (i.e., the time it takes to serve the 20,000 requests). The average latency is computed as the mean of the 20,000 individual latencies. To avoid the buffer warm-up effect, we actually ran another short sequence of requests to fill up the disk buffer before the actual run takes place. The requests in the short sequence were randomly selected from the long sequence to ensure that the data initially cached in the buffer (to simulate the steady-state condition) were relevant and truthfully reflected the distribution of the requests in the workload (the long sequence).

    Block size:            4 KBytes
    Disk space:            … blocks
    Display rate R_d:      100 blocks/sec
    Materialize rate R_m:  80 blocks/sec (PCR = 0.8)
    Zipf factor:           0.7
    Requests per minute:   30 (average interarrival time is 2 sec)
    Number of objects:     600
    Minimum object size:   … blocks
    Maximum object size:   … blocks
    Number of requests:    20,000

    Table 1. Simulation parameters.

4. Simulation Results

We present the simulation results in the following subsections.

4.1. Effect of Request Rate

The effect of the request rate on BiHOP and SEP is plotted in Figures 4(a) and (b). In this experiment, the size of the disk buffer was set at 15% of the database size. We gradually increased the request rate from 20 requests/minute to 50 requests/minute, and observed how well the two schemes could sustain the faster request rates. Figure 4(a) shows that BiHOP consistently provides better average latency than SEP can. The savings range from 300% (under 20 requests per minute) to 1,000% (under 40 requests per minute). In terms of system throughput, although both schemes perform comparably under slow request rates (less than 30 requests per minute), only BiHOP can continue to extend its good performance beyond 30 requests/minute. When the request rate is 50 requests/minute, we see that BiHOP outperforms SEP by 54% in terms of system throughput. The improvement in system throughput is actually a lot more significant, because applications normally have requirements on the maximum access latency. Let us say that the required average access latency for some video-on-demand application is two minutes. Under this condition, SEP can handle no more than 20 requests/minute. This limits its performance to less than 1,200 services/hour. On the contrary, BiHOP can sustain request rates well beyond 40 requests/minute.
This allows BiHOP to offer substantially better system throughput. For instance, if we let BiHOP operate at 40 requests/minute while SEP is constrained to 20 requests/minute, the difference in system throughput is more than double. Obviously, much better throughput is achievable by further increasing the request rate for BiHOP. We note that the dramatic improvement due to BiHOP observed here is consistent with the analytical results discussed previously.

4.2. Effect of Space Ratio

We define the space ratio as the ratio of the disk size to the database size. In this experiment, we want to investigate the effect of this ratio on the performance of the two disk-buffer designs. A good technique should be able to achieve good performance using a reasonably small buffer. In other words, we want to keep the space ratio as small as possible without compromising too much performance. The results of this study are plotted in Figures 4(c) and (d). We varied the space ratio from 10% to 30%. Figure 4(c) shows that BiHOP consistently outperforms SEP by a significant margin for practical buffer sizes (i.e., a space ratio of less than 15%). For instance, the average latency of BiHOP is more than eight times better than that of SEP when the space ratio is 15%. In terms of system throughput, the performance difference is not significant under this workload, because the performance of BiHOP is unfairly constrained by the 30-requests/minute request rate. We decided not to run this experiment under a higher request rate, say 40, because the average latency for SEP would have been too high for most applications. This issue was discussed in the last subsection.

4.3. Effect of Access Skew

Although movie-on-demand and many multimedia applications are known to have a skew factor of around 0.7 (which is used in the above experiments), other applications can have very different access patterns (i.e., different skew factors). We investigate this effect on BiHOP and SEP in this subsection. The results of this study are shown in Figure 5.
We varied the skew factor between 0.0 (a uniform pattern) and 1.0 (a severe skew condition). Figure 5 shows that the performance of both BiHOP and SEP improves as we increase the skew factor. This behavior is due to the improvement in the temporal locality of reference, which causes the hit ratio of the disk buffer to improve. In comparison, we observe that BiHOP tremendously outperforms SEP in terms of average access latency. BiHOP is around 50% better than SEP in terms of system throughput when the workload is uniform. The differences in system throughput are not significant under the severely skewed workload, due to the same reasons explained in Section 4.2.

Figure 4. Performance comparison: (a) latency times for different request rates; (b) throughputs for different request rates; (c) latency times for different space ratios; (d) throughputs for different space ratios.

Figure 5. Skew effect on performance: (a) latency times for different Zipf factors; (b) throughputs for different Zipf factors.

5. Support for VCR Functions

Another important feature of the BiHOP approach is its efficient support for VCR functions, such as random access, fast-forward and fast-reverse. We discuss these features in the following subsections.

5.1. Random Access

Let us first examine PIRATE and SEP. Since both of these techniques cache the HEADs in the disk buffer, let us consider the case when the HEAD of the object being used, say X, is already disk resident. If the random access starts at some point in the HEAD, then the delay time for setting up the pipeline is equal to the time it takes to display the portion of the HEAD in front of the start point (see Figure 6). This is due to the fact that the duration of playing the HEAD starting from that point will not be long enough to eclipse the time it takes to materialize the entire TAIL. The time difference is the playback time of the skipped portion, so we have to spend that amount of time (i.e., the delay) before the pipelining can start. Hence the delay is computed as follows:

    Delay = x · Size(X) / R_d                                      (4)

where 0 ≤ x ≤ 1 denotes the fraction of the data file preceding the start point. For instance, x = 0.5 if one starts the playback at the middle of the file.

Figure 6. Random access in the HEAD under PIRATE and SEP.

Figure 7. Random access in the TAIL under PIRATE and SEP.

Now, let us consider the case of starting the playback at some random point in a non-HEAD fragment. As illustrated in Figure 7, the delay for setting up the pipeline is the time needed to materialize the part of the remaining (1 − x) · Size(X) of the file whose materialization cannot be hidden behind its own playback: to ensure a continuous playback, the materialization of the remaining portion must finish no later than its playback, and the delay makes up the difference between the two. Thus, the delay time can be computed as:

    Delay = x · Size(X) / R_d                         for 0 ≤ x ≤ 1 − PCR
    Delay = (1 − x) · Size(X) · (1/R_m − 1/R_d)       for 1 − PCR ≤ x ≤ 1      (5)

Let T_play be the total playback time of the entire video file, i.e., T_play = Size(X) / R_d. Substituting T_play into Equation (5), we have:

    Delay = x · T_play                                for 0 ≤ x ≤ 1 − PCR
    Delay = (1 − x) · T_play · (1 − PCR) / PCR        for 1 − PCR ≤ x ≤ 1      (6)

If every point in the object is equally likely to be accessed as the starting point, the average delay for random access can be computed as follows:

    Average Delay = ∫₀¹ Delay dx = T_play · (1 − PCR) / 2                      (7)

Assuming PCR = 0.6, the average delay is about 20% of the playback time of the whole file. Thus, if a movie is 1 hour long, the average delay for random access will be 12 minutes. This is certainly not tolerable by most viewers.

Let us now turn our attention to BiHOP. If the starting point is in a T-fragment, the delay for setting up the pipeline is the time to materialize areas 1 and 2, as illustrated in Figure 8. We need to materialize area 1 (the remainder of the current T-fragment) because the data is not yet in the disk buffer. After area 1 has been materialized, the duration of playing this area and the next D-fragment is not long enough to eclipse the time needed to materialize the next T-fragment; the time difference is the time to materialize area 2. Let the size of area 1 be (1 − t) · T, where t is the portion of the T-fragment excluded from the playback. To ensure a continuous playback, the following relationship must hold:

    (T − Area2) / R_m = ((1 − t) · T + D) / R_d

which gives Area2 = t · T · PCR. Thus the delay time is:

    Delay = ((1 − t) · T + t · T · PCR) / R_m = (T / R_m) · (1 − t · (1 − PCR))   (8)

That is, the maximum delay time cannot be longer than the time needed to materialize a T-fragment, which is only a few blocks. If the starting point is in a D-fragment, the delay time is the time to materialize a portion of the next T-fragment, as shown in Figure 9. Let d be the portion of the D-fragment excluded from the playback. To ensure a continuous playback, we must have:

    (T − Portion) / R_m = ((1 − d) · D) / R_d

which gives Portion = T · (PCR + d · (1 − PCR)). Thus the delay time is:

    Delay = (T / R_m) · (PCR + d · (1 − PCR))

Again, the maximum delay time cannot be longer than the time needed to materialize a T-fragment.

Figure 8. Starting the playback in some T-fragment.

Figure 9. Starting the playback in some D-fragment.

We have shown that, for BiHOP, the delay is only a few blocks' worth of materialization time, independent of where one wants to start the playback of a file. Such a tiny delay is generally unnoticeable.
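The two delay models can be compared numerically; a sketch (ours), where t_play = 3600 s assumes a 1-hour file, PCR = 0.6, and the materialize rate of 80 blocks/sec is taken from Table 1:

```python
def pirate_sep_delay(x, t_play, pcr):
    """Random-access delay under PIRATE/SEP, Equation (6).
    x is the fraction of the file preceding the start point."""
    if x <= 1 - pcr:                                # start in the HEAD
        return x * t_play
    return (1 - x) * t_play * (1 - pcr) / pcr       # start in the TAIL

def bihop_delay_bound(T_blocks, r_m):
    """BiHOP's worst-case delay: materializing one T-fragment, Eq. (8)."""
    return T_blocks / r_m

t_play = 3600.0      # 1-hour video
pcr = 0.6

# Average PIRATE/SEP delay over uniformly random start points:
n = 100_000
avg = sum(pirate_sep_delay(i / n, t_play, pcr) for i in range(n)) / n
assert abs(avg - t_play * (1 - pcr) / 2) < 1.0      # ≈ 720 s = 12 minutes

# BiHOP: at most one T-fragment (3 blocks at 80 blocks/sec) is fetched:
assert bihop_delay_bound(T_blocks=3, r_m=80) < 0.05   # under 0.05 second
```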

For instance, let us consider a 1-hour MPEG-2 video with PCR = 0.6 and a block size of 4 KBytes. The average delay, in this case, is less than 0.05 second. The average delay would have been 12 minutes if PIRATE or SEP were used.

5.2. Fast-Forward/Reverse

Although normal playback is the most important function for all multimedia applications, providing the user with VCR capabilities such as fast-forward and fast-reverse is also highly desirable. Several approaches have been proposed for implementing these special functions [3, 1]. The most straightforward technique is to retrieve and transmit the multimedia stream at a higher speed, say n times the normal playback rate. It is apparent that this simple scheme requires n times the system resources. Obviously, PIRATE, SEP and BiHOP are all unable to avoid these extra costs if this approach is used. Alternatively, the Loss scheme in [3] or the Frame Skipping scheme in [1] can be used to reduce the requirement on system resources. The idea behind these methods is to skip forward or backward through the video file, showing one out of every several blocks. This strategy is not suitable for PIRATE or SEP due to their unidirectional nature. For instance, neither PIRATE nor SEP can efficiently support a fast-reverse soon after a random access. In this case, some of the data needed might not be in the buffer, and reverse pipelining is not an option for these techniques. We note that this sequence of two operations is very commonly used to search in a video file. Although fast-reverse during a normal play is possible with PIRATE, this function is provided at the cost of retaining all the data in the buffer until the end of the session. If the normal play is resumed after a fast-forward, this sequence of operations has the same effect as a random access. The intolerable delay discussed in the last subsection is inevitable for either PIRATE or SEP.
On the contrary, the methods presented in [3, 1] are natural for BiHOP, because it also skips through the file and caches in the disk buffer only every other fragment, i.e., skipping the T-fragments and caching the D-fragments. Interestingly, when all the D-fragments are disk resident, the symmetrical nature of BiHOP allows fast-forward and fast-reverse to be done without even involving the tertiary devices. If one needs to resume the normal play after a fast-forward or fast-reverse, the delay is essentially unnoticeable, since this can be treated as a random access.

6. Conclusions and Future Studies

This study focuses on the disk-space management of hierarchical storage in multimedia systems. We have proposed a novel technique called Bidirectional Highly Optimized Pipelining (BiHOP). Our simulation results indicate that BiHOP significantly outperforms a recently proposed technique called SEP. This result can be attributed to our caching technique, which caches every other data fragment of the multimedia file, rather than caching consecutive fragments as in traditional practice. This new approach allows us to use tiny staging buffers for pipelining. Their small sizes allow them to be implemented in the server memory to conserve disk bandwidth. The whole disk space, therefore, can be dedicated to caching in order to improve the hit ratio. Another important benefit of BiHOP is its bidirectional nature. While the sizes of the data fragments are nonuniform in existing techniques, the symmetrical file organization (i.e., the uniform fragment pattern) of BiHOP allows it to pipeline in either the forward or the reverse direction with the same efficiency. This unique feature, not possible with other schemes, makes BiHOP natural for implementing VCR functions. In this study, we focused our attention on disk-space management. Other scarce resources include the transmission bandwidths between the different levels of the hierarchical storage. We are currently investigating techniques to make more efficient use of these resources.
Admission control is another issue that needs to be studied more carefully. Our simulator does not currently implement an admission-control policy; instead, each request is automatically accepted.

References

[1] M. Chen, D. Kandlur, and P. S. Yu. Support for fully interactive playout in a disk-array-based video server. In Proc. of ACM Multimedia, 1994.
[2] A. Dan, D. Sitaram, and P. Shahabuddin. Scheduling policies for an on-demand video server with batching. In Proc. of ACM Multimedia, pages 15-23, October 1994.
[3] J. K. Dey-Sircar et al. Providing VCR capabilities in large-scale video servers. In Proc. of ACM Multimedia, pages 25-32, October 1994.
[4] S. Ghandeharizadeh and C. Shahabi. On multimedia repositories, personal computers, and hierarchical storage systems. In Proc. of ACM Multimedia, October 1994.
[5] K. A. Hua, C. Lee, and C. M. Hua. Dynamic load balancing in multicomputer database systems using partition tuning. IEEE Trans. on Knowledge and Data Engineering, 7(6), December 1995.
[6] D. E. Knuth. The Art of Computer Programming, Volume 3: Sorting and Searching. Addison-Wesley, Reading, Massachusetts, 1973.
[7] R. T. Ng and J. Yang. Maximizing buffer and disk utilizations for news-on-demand. In Proc. of the 20th VLDB Conference, Santiago, Chile, 1994.
[8] J. Z. Wang, K. A. Hua, and H. C. Young. SEP: a space efficient pipelining technique for managing disk buffers in multimedia servers. In Proc. of the IEEE Int'l Conf. on Multimedia Computing and Systems, Hiroshima, Japan, June 1996.
[9] G. K. Zipf. Human Behavior and the Principle of Least Effort: An Introduction to Human Ecology. Addison-Wesley, Reading, Mass., 1949.

Lecture notes for CS Chapter 2, part 1 10/23/18 Chapter 2: Memory Hierarchy Design Part 2 Introduction (Section 2.1, Appendix B) Caches Review of basics (Section 2.1, Appendix B) Advanced methods (Section 2.3) Main Memory Virtual Memory Fundamental

More information

Chapter 8 Virtual Memory

Chapter 8 Virtual Memory Operating Systems: Internals and Design Principles Chapter 8 Virtual Memory Seventh Edition William Stallings Modified by Rana Forsati for CSE 410 Outline Principle of locality Paging - Effect of page

More information

Application DBMS. Media Server

Application DBMS. Media Server Scheduling and Optimization of the Delivery of Multimedia Streams Using Query Scripts Scott T. Campbell (scott@cc-campbell.com) Department of Computer Science and Systems Analysis, Miami University, Oxford,

More information

Towards Scalable Delivery of Video Streams to Heterogeneous Receivers

Towards Scalable Delivery of Video Streams to Heterogeneous Receivers Towards Scalable Delivery of Video Streams to Heterogeneous Receivers Bashar Qudah bqudah@wayne.edu Nabil J. Sarhan nabil@ece.eng.wayne.edu Department of Electrical and Computer Engineering Wayne State

More information

Department of Computer Engineering University of California at Santa Cruz. File Systems. Hai Tao

Department of Computer Engineering University of California at Santa Cruz. File Systems. Hai Tao File Systems Hai Tao File System File system is used to store sources, objects, libraries and executables, numeric data, text, video, audio, etc. The file system provide access and control function for

More information