Scheduling and Optimization of the Delivery of Multimedia Streams Using Query Scripts

Scott T. Campbell, Department of Computer Science and Systems Analysis, Miami University, Oxford, Ohio
Soon M. Chung *, Dept. of Computer Science and Engineering, Wright State University, Dayton, Ohio 45435, USA

Abstract. New techniques are necessary to satisfy the high bandwidth requirement and temporal relationships of multimedia data streams in a network environment. Clients can experience gaps between the multimedia data streams during presentations as the multimedia server services multiple clients. This variable delay occurs between the end of one multimedia stream and the beginning of the next because client requests are queued awaiting service, which leads to interruptions and discontinuities in the client's presentation. Special techniques are necessary to manage the temporal relationships between multimedia streams in distributed environments. In this paper we propose two scheduling algorithms for delivering multimedia streams by using the query script, which is a multimedia database interface for clients. A client can specify all the multimedia objects that make up a presentation, and their temporal relationships, in a query script. Once submitted, the information in the query script is used by the multimedia database system to schedule and optimize delivery. Using simulations we analyzed the performance of the proposed delivery scheduling algorithms and the predelivery optimization method. The simulation results show that the delivery scheduling algorithms satisfy the specified temporal relationships between multimedia streams while making better use of system resources.

Keywords: multimedia streams, delivery scheduling, optimization, simulation, query script

1. Introduction

Multimedia information systems require new data modeling and delivery capabilities to specify and guarantee temporal relationships between streams [7].
These capabilities are more critical in distributed computing systems due to multimedia data's high resource requirements. In this paper we propose the use of query scripts as the interface between clients and multimedia database systems for specifying the delivery of multimedia data. Query scripts allow clients to make a retrieval

* This research was supported in part by NCR, Lexis-Nexis, and NSF under Grant No. CDA. © 2001 Kluwer Academic Publishers. Printed in the Netherlands.

paper.tex; 25/05/2001; 16:29; p.1

request consisting of a set of multimedia objects and their temporal ordering to the multimedia database system. The information given in the query script enables the database system to reserve sufficient disk bandwidth, memory, and other system resources to meet the client's request. In our approach, a database management system (DBMS) is combined with a multimedia server. The DBMS provides the data modeling and manages key information about the multimedia data, while the media server stores and delivers multimedia data as atomic objects. For example, relational database systems can support multimedia through the use of binary large objects (BLOBs). A BLOB is an unstructured string of storage that the database system treats as an atomic object. The database schema manages access to the data in the BLOB by storing a pointer to the BLOB. Object-oriented database systems can support more accurate modeling and better integration of multimedia data, but they too eventually store each multimedia data object as a sequence of data. In both cases, in order to satisfy the delivery timing requirements of multimedia streams, specialized media servers usually handle the actual storage and delivery of multimedia objects. A media server is a shared storage facility that is capable of isochronous delivery of multimedia data. Isochronous delivery guarantees that a new packet of data is available at the client in time to present the next video frame or audio data. Additionally, the media server incorporates the server-push methodology, as opposed to the more traditional client-pull methodology, to minimize network traffic and extraneous client read requests [23]. In the client-pull case, the client makes a separate request for each block of data. With server-push, on the other hand, the client makes a single request for a stream delivery, and the server then continually transmits the data blocks of the stream.
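The two delivery models can be sketched as service loops. This is an illustrative Python sketch, not an actual media server API; all names (`serve_client_pull`, `disk.read`, `client.receive`, and so on) are our own.

```python
def serve_client_pull(disk, client):
    """Client-pull: the client issues a separate request for every
    data block, and the server answers one block at a time."""
    while True:
        request = client.next_block_request()   # one round trip per block
        if request is None:
            break
        client.receive(disk.read(request))

def serve_server_push(disk, client, stream):
    """Server-push: a single client request starts the stream; the
    server then transmits every block on its own schedule."""
    for block in disk.blocks_of(stream):        # no further client requests
        client.receive(block)
```

The server-push loop involves no per-block round trips, which is why it reduces network traffic and extraneous client read requests.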
Media servers extend existing file system capabilities by providing multimedia data placement strategies, bounded delivery timing, guaranteed buffer and media management, and special disk retrieval techniques. Through these techniques the media server is able to handle concurrent retrieval and transmission of multimedia streams and can support multiple clients. We claim that the relationship between a media server and a multimedia database system develops just as the relationship between file systems and traditional database systems has developed. Originally the file system was the central data management component and applications managed their own logical view of data. Gradually database systems emerged to provide common data models, catalogs, dictionaries, indices and other tools [8]. Thus the database system extended the file system's management capability. We feel that a similar architectural hierarchy exists between multimedia database systems and media servers, as shown in Figure 1. The media server provides basic storage and delivery functions for multimedia data while the database system adds the necessary modeling and management functions. Additionally, the media server can deliver information directly to the client application, bypassing the database system.

Figure 1. Hierarchy of database systems (traditional: application / DBMS / file system; multimedia: application / DBMS / media server / file system)

Specialized media servers are necessary to handle the delivery of multimedia data since audio and video data are continuous. Video consists of a continuous sequence of frames, and each frame needs retrieval, processing and delivery within a strict fixed time interval. Audio is a continuous sequence of samples that needs conversion within each sample interval. Delivery of multimedia data is different from the delivery of traditional data since multimedia data is inherently presentational. In other words, the purpose of multimedia data is usually presentation to users rather than additional computational processing. Since multimedia data is large (for example, 10 seconds of VCR-quality video is about 1.5 MB), the client cannot wait for the complete retrieval of the object before beginning the presentation. Instead, the client processes multimedia data in blocks as they arrive. This sequence of data blocks delivered in regular time intervals constitutes a multimedia stream. Delivery problems are related to many subsystems, but a major bottleneck is the disk [17]. Disk subsystems do not guarantee data delivery within the bounds necessary for the presentation of video and audio. This is because the disk orders the requests to minimize seek times, so the exact data delivery timing depends on the current set of requests and the locations of the corresponding disk blocks.
This leads to random delivery timing, which is unacceptable for multimedia data. Proper decoding of multimedia data requires that data always be present for decoding when needed. The most common solution is to retrieve one unit of data from the disk for each multimedia stream during each fixed disk service round, as in [9]. This scheme guarantees

timely delivery as long as the total number of requested streams is below a certain limit. The problem with the delivery scheme using a fixed disk service round is that it is characterized as best-effort delivery and can lead to a loss of the temporal ordering of the streams. With best-effort delivery the media server attempts to satisfy all accepted requests but offers no guarantees, due to the stochastic nature of the retrieval process. For example, if a client wants to display two streams simultaneously and makes two separate requests, the first can be accepted and the second rejected. This loses the desired temporal ordering between the two multimedia streams. The other major problem is large gaps between streams which are supposed to be sequential. A best-effort delivery system may accept the request for the first stream from the client and then immediately receive several requests from other clients which fill the media server's capacity. Then, when the second multimedia stream is needed by the client, the media server is busy and can not deliver it immediately. This results in an undesired gap between the two streams (called interstream latency), as shown in Figure 2.

Figure 2. Loss of temporal order

Some multimedia presentation systems address these temporal synchronization problems [4, 25], but they do not deal with network-based delivery systems. They assume local storage systems in which they can exert strict control over disk accesses. This assumption is not feasible in a network environment where multiple requests arrive at a media server from several clients. In the network environment, best-effort delivery systems can not provide the necessary synchronization between multimedia streams. Our approach enables the client to request a set of multimedia objects and specify their temporal order by using query scripts. Query scripts contain enough information for the multimedia database system to maintain the requested temporal delivery order. A query script specifies the entire set of multimedia objects and their temporal ordering in one request, so that the system can create a delivery schedule and ensure proper delivery. This delivery scheduling minimizes the unacceptable latencies experienced with best-effort delivery systems. In the remainder of this paper, we introduce the query script and then propose two delivery scheduling algorithms, named scan scheduling and group scheduling. The simulation results demonstrate that the proposed delivery scheduling algorithms satisfy the specified temporal ordering between streams while maintaining high system resource utilization. We also analyze the effect of predelivery optimization on the performance of the proposed scheduling algorithms.

2. Scheduling of Delivery Using Query Scripts

A query script has two parts: declaration and temporal ordering. The declaration part uses object identifiers to specify the multimedia data objects to be delivered to the client. The temporal ordering part specifies the timing relationships between the multimedia objects. There are three basic temporal ordering actions: initiate streams, wait for streams to complete, and terminate streams. Figure 3 shows an example of a query script that declares four multimedia data objects for delivery. Here video A plays to completion, then videos B and C play simultaneously. When video B completes, video C terminates even though it is not finished yet, and then video D plays. More details about query scripts are given in [5].
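As a concrete illustration, the example just described could be represented as a declaration list plus a sequence of ordering actions. This is only a sketch in Python data-literal form; the field and action names mirror the three basic temporal ordering actions but are not the paper's actual query script syntax, which is given in [5].

```python
# Illustrative encoding of the Figure 3 example; names are our own.
query_script = {
    "declare": ["A", "B", "C", "D"],     # object identifiers to deliver
    "ordering": [
        ("initiate", ["A"]),
        ("wait", ["A"]),                 # A plays to completion
        ("initiate", ["B", "C"]),        # B and C start simultaneously
        ("wait", ["B"]),                 # when B completes...
        ("terminate", ["C"]),            # ...C is cut off early
        ("initiate", ["D"]),
        ("wait", ["D"]),
    ],
}
```

Because the whole presentation is captured in one request, the server sees every stream and every ordering constraint before delivery begins.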
The query script can be used for the specification of synchronized presentation of multimedia objects. The temporal relationships between continuous media objects can be easily specified using the query script because it covers all the event-based synchronization requirements [3]. Compared to other synchronization specification schemes, the query script is quite easy to use, and it can be easily extended by adding other synchronization constructs. For example, timing operations can be added to cover interval-based synchronization. An extensive survey and comparison of different synchronization specification methods is given in [3].

Figure 3. Query script example

A query script does not support detailed temporal synchronization semantics because it is not a presentation language such as Firefly [4], MHEG [12, 19, 20], or HyTime [11]. We expect client applications to use these schemes to provide richer temporal and physical modeling semantics, and to use query scripts to make delivery requests that include temporal relationships. The client application manages the final presentation timing using the presentation language, while the multimedia delivery system uses the query script to schedule and manage the delivery timing. The information in the query script provides the database system with enough information to properly schedule the delivery. Scheduling ensures that the query script's temporal ordering is met, that no overcommitment of system resources occurs, and that system resources are used efficiently. Best-effort delivery systems can not perform scheduling since only the current set of requests is known and, once the system accepts the requests, there is a contract to maintain their delivery. This leads to random rejection of streams or lengthy gaps between multimedia streams. With query scripts, the delivery manager can create a delivery schedule since the entire delivery request of each client is specified. Then the delivery manager is able to schedule the delivery of multimedia objects and guarantee the desired performance by creating a feasible schedule and using the predelivery optimization technique. The predelivery optimization consists of prefetching disk blocks during periods of disk underutilization, so that some query scripts can start earlier.

2.1. Scan Scheduling

Delivery scheduling is the main reason for using query scripts. A delivery schedule is a consolidated list of service intervals for the streams of accepted query scripts. The delivery manager uses the information in the query script to schedule the delivery of its streams by integrating their service intervals into the current delivery schedule. A feasible schedule is one that has no overcommitment of system resources during the delivery of all the scheduled multimedia streams, while satisfying the temporal orderings specified in the query scripts. The media server uses this delivery schedule to control the delivery of multimedia streams to the client workstations. There are three major parts in the delivery manager of the media server: the parser, the scheduler and the retrieval manager, as shown in Figure 4. The scheduling process starts when a client sends a query script to the media server. The query script is parsed into a graph called the Script Realization Graph (SRG) [5]. The scheduler takes the request from the queue and finds a starting time that will allow all streams to be delivered according to the client's temporal ordering requirements. It does this by creating a series of test schedules in which it integrates the new request with the current delivery schedule. The scheduler selects a feasible test schedule to be the new delivery schedule. The retrieval manager reads the delivery schedule at every fixed disk service round to determine which streams to service. It then identifies a set of disk blocks to retrieve from the disk. Once these disk blocks are retrieved, the delivery manager sends them to the clients. Isochronous delivery is guaranteed for each client request because all the scheduled query scripts are served at every fixed disk service round. The main part of a schedule is the list of service intervals and the resource needs of each service interval.
A service interval represents a period of time during which the same set of streams is delivered. Therefore, the same level of resource utilization is required throughout the service interval. Figure 5(a) shows five service intervals for a delivery schedule. A new service interval begins with any change in the set of active multimedia streams. Figure 5(b) shows the service intervals of a new query script. Adding the new query script to the existing delivery schedule results in the set of service intervals in Figure 5(c). The list of service intervals in the delivery schedule maintains information about future resource needs in a form that can be examined quickly. We developed two scheduling algorithms, named scan scheduling and group scheduling. We present the scan scheduling algorithm and its simulation results before introducing group scheduling.
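The interval consolidation of Figure 5 can be sketched as follows. Each service interval is modeled as a `(start, end, load)` triple, where `load` abstracts the interval's resource needs (e.g. number of active streams); this representation and the function name are illustrative assumptions, not the paper's implementation.

```python
def merge_service_intervals(schedule, new_script):
    """Consolidate two lists of (start, end, load) service intervals.
    A new service interval begins wherever the set of active streams
    changes, i.e. at any start or end point of either list."""
    points = sorted({t for (s, e, _) in schedule + new_script
                       for t in (s, e)})
    merged = []
    for s, e in zip(points, points[1:]):
        # total load of all intervals fully covering the segment [s, e)
        load = sum(r for (a, b, r) in schedule + new_script
                     if a <= s and e <= b)
        if load > 0:
            merged.append((s, e, load))
    return merged
```

For example, merging a schedule interval (0, 10, 2) with a new interval (5, 15, 1) yields three service intervals, (0, 5, 2), (5, 10, 3), and (10, 15, 1), following the construction of Figure 5(c).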

Figure 4. Delivery manager

Figure 5. Service intervals

The scan scheduling algorithm schedules one query script at a time by progressively slipping the new query script's start time until it finds the first feasible schedule, hence the name scan scheduling. The algorithm creates a test schedule by integrating the service intervals of the new query script with the current schedule, as shown in Figure 6. This test schedule becomes the new delivery schedule if it is feasible. A feasible schedule is one in which all service intervals can receive the disk bandwidth, buffer memory, network bandwidth and other system resources they require. If the test schedule is not feasible, the algorithm repeatedly tries the next possible start time for the new query script until it obtains a feasible schedule. The algorithm can always find a start time that results in a feasible schedule since the media server

admits only a query script that is feasible by itself. This ensures that a feasible schedule exists, even though it may start the new query script after the end of all currently accepted query scripts.

Figure 6. Scan scheduling algorithm

A key to the efficient execution of this scan scheduling algorithm is limiting the number of possible test schedules. This requires finding a minimum set of possible start times for each new query script. Instead of sequentially incrementing the possible start time, the scan algorithm examines only the time instances where the current schedule's resource utilization level changes, i.e. the start and end points of the service intervals in the schedule. This significantly reduces the number of test schedules. However, since a single service interval requiring an excessive amount of resources can make a test schedule infeasible, we may have an unduly long delay in starting the new query script. Thus, we need to optimize the schedule to reduce the total service time for the query scripts.
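Under the same `(start, end, load)` abstraction, the scan over candidate start times can be sketched like this. Feasibility is reduced here to a single capacity number, whereas the real scheduler checks disk bandwidth, buffer memory, and network bandwidth per interval; all names are illustrative.

```python
def scan_schedule(current, script, capacity):
    """Slip the new script's start time over the change points of the
    current schedule until the merged test schedule is feasible.
    Intervals are (start, end, load) triples; 'capacity' stands in
    for all per-interval resource limits."""
    def merge(a, b):
        pts = sorted({t for (s, e, _) in a + b for t in (s, e)})
        return [(s, e, sum(r for (x, y, r) in a + b if x <= s and e <= y))
                for s, e in zip(pts, pts[1:])]

    # candidate starts: now, plus every resource-level change point
    candidates = sorted({0} | {t for (s, e, _) in current for t in (s, e)})
    for start in candidates:
        shifted = [(s + start, e + start, r) for (s, e, r) in script]
        test = merge(current, shifted)
        if all(load <= capacity for (_, _, load) in test):
            return start, test   # first feasible test schedule wins
    # unreachable if the script passed the admission test: starting
    # after all accepted scripts is always feasible
```

Note that only the change points are tried, never intermediate times, which is what keeps the number of test schedules small.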

2.2. Predelivery Optimization

Optimization of delivery scheduling is required to reduce the total delivery time for the query script while making better use of system resources. The delivery manager uses prefetching and buffering techniques to overcome the overcommitment of system resources and to provide earlier delivery of requests. Idle disk bandwidth and buffer memory are used to prefetch and buffer some of the multimedia streams so that there will be no overcommitted service intervals during the delivery of query scripts. The first step is to check the resource requirements of each service interval of the test schedule. If there is a problem, the algorithm calculates the total amount of memory space needed to correct the overcommitment. This amount is proportional to the length of the overcommitted service interval and the amount of overcommitted disk bandwidth. The algorithm then scans back through the test schedule's service intervals looking for available disk bandwidth and memory space. When it finds available resources, the test schedule is updated to include the prefetch operations. If all overcommitted service intervals are corrected, the updated test schedule is accepted. Otherwise a new test schedule is generated by selecting the next possible start time for the new query script. Is this media-server-based optimization necessary, given that the client workstation can perform local optimization for the delivery of its own requests? While the client workstation can perform local optimization, it can not perform global optimization because it is unaware of other client requests. Also, the client may not be fully aware of the length, bandwidth requirement, or composition of the requested multimedia objects. For complete scheduling and optimization, the scheduling algorithm needs to know the properties of all multimedia objects.
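The correction pass described above can be sketched under the same `(start, end, load)` abstraction: load above capacity in one interval is absorbed by prefetching during earlier, underutilized intervals, bounded by a memory budget. Names and units are illustrative assumptions; the real optimizer moves actual disk block retrievals, not abstract load units.

```python
def predeliver(intervals, capacity, mem_limit):
    """Try to correct overcommitted (start, end, load) intervals by
    prefetching during earlier idle periods; returns the corrected
    schedule, or None if the memory budget is insufficient."""
    ivals = [list(iv) for iv in intervals]
    mem = mem_limit
    for i in range(len(ivals)):
        s, e, load = ivals[i]
        excess = load - capacity
        if excess <= 0:
            continue
        need = excess * (e - s)        # prefetch volume ~ length * excess
        for j in range(i):             # scan back through earlier intervals
            js, je, jload = ivals[j]
            idle = (capacity - jload) * (je - js)
            take = min(idle, need, mem)
            if take > 0:
                ivals[j][2] += take / (je - js)   # extra prefetch load
                need -= take
                mem -= take
            if need <= 0:
                break
        if need > 0:
            return None                # give up; slip the start time instead
        ivals[i][2] = capacity
    return [tuple(iv) for iv in ivals]
```

When the budget is too small the function fails, which corresponds to generating a new test schedule at the next possible start time.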
While the client could obtain this information, the delivery manager, as part of the multimedia database system, can better perform the optimization since it has detailed knowledge of the actual multimedia content.

3. Simulation

To evaluate the performance of the scheduling and optimization algorithms based on the query script, we created a simulation environment. The simulation is implemented using CSIM [18], a discrete-event simulator, written in C++, that supports concurrent processes. CSIM primitives provide multitasking capabilities, handle interprocess communications, and manage data collection. The resulting simulation is very close to an actual implementation in C++.

For comparison, we also evaluate the cases where clients issue an individual request for each multimedia stream, which is called the baseline approach in this article.

3.1. Simulation Model

The simulation model includes three major components: clients, the delivery manager, and the disk manager. The client processes have three stages: request generation, request submission, and data reception. For our simulations, a client's request consists of four playback intervals, where the length of each interval is determined by the longest multimedia stream in that interval. All the streams in a playback interval start together at the beginning of the interval. Each playback interval has up to three multimedia streams, selected from a pool of 180 multimedia streams consisting of 120 small streams and 60 large streams. The length of each stream is randomly selected from a uniform distribution, ranging from 5 to 15 seconds for small streams and from 45 to 75 seconds for large streams. For the baseline cases, the client makes a request for each multimedia stream at the start of each playback interval. For the query script cases, each client submits a single query script for all the multimedia streams involved. In all the experiments, the clients wait and receive the multimedia data from the delivery manager, record the delivery times, and then initiate another cycle of requests. Both the delivery manager and the disk manager are parts of the media server. When the media server receives a query script, the delivery manager first parses the query script and then executes the delivery scheduling routine. The scheduling routine merges the query script into the current delivery schedule by using the proposed scan scheduling algorithm. The delivery manager generates a list of disk blocks to be retrieved according to the schedule, and sends it to the disk manager.
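The client workload described above can be sketched as a simple generator. The pool sizes and length distributions follow the text, while the function and variable names are our own.

```python
import random

# Pool of 180 streams: 120 small (5-15 s) and 60 large (45-75 s)
POOL = ([("small-%d" % i, random.uniform(5, 15)) for i in range(120)] +
        [("large-%d" % i, random.uniform(45, 75)) for i in range(60)])

def make_request(rng=random):
    """One client request: four playback intervals, each holding up to
    three streams that start together; an interval's length is that of
    its longest stream."""
    request = []
    for _ in range(4):
        streams = rng.sample(POOL, rng.randint(1, 3))
        length = max(duration for _, duration in streams)
        request.append((length, streams))
    return request
```

In the baseline cases each interval's streams become separate requests issued at the interval's start; in the query script cases the whole four-interval request becomes a single query script.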
The disk manager retrieves the data blocks into memory buffers, then transfers them to the client through the network. The simulation does not model network delay or network congestion, since an appropriate multimedia network should be able to handle the maximum number of 150 KB/sec streams that the current disk model supports. The disk manager simulates an HP disk following the model and methodology used in [24]. The disk model calculates seek time, rotational delay, head switch time and data transfer time for each disk request. A disk request consists of an entire track of data, which is 36 Kbytes. The model uses a piecewise linear approximation of the disk's

actual seek time, with seek distances determined by the placement of multimedia streams on the disk.

Table I. Disk drive simulation parameters

  Number of Cylinders               1962
  Number of Tracks per Cylinder     19
  Number of Data Sectors per Track  72
  Sector Size                       512 bytes
  Data Transfer Rate                2.3 MB/sec
  Rotation Speed                    4002 RPM
  Seek Time (ms)                    3.24 + 0.400 * sqrt(d)  for d < 383 cylinders
                                    8.00 + 0.008 * d        for d >= 383 cylinders
  Head Switch Time                  1.6 ms
  Controller Overhead               2.2 ms

In our simulation, to determine whether a schedule is feasible or not, the delivery manager uses the fixed maximum number of continuous multimedia streams that the disk model can deliver concurrently. The theoretical maximum number of streams that the disk can deliver is limited by the disk bandwidth. In practice, the actual number of concurrent streams that can be delivered is significantly lower than this due to the random distribution of seek delay. We experimentally determined the maximum number of streams that the disk can continuously support by increasing the number of streams being delivered while monitoring disk bandwidth utilization and the number of data underruns. In our simulation, each stream is stored as a sequence of randomly selected tracks. We select the maximum number of serviceable streams to be one less than the point where disk bandwidth utilization reaches 100% or a data underrun error occurs. Figure 7 shows the maximum and average disk bandwidth utilization recorded during a simulation run of 10,000 seconds. During the simulation, one track for each stream is delivered during each fixed disk service round. One disk service round is the period of time needed to play back the data in one buffer. In our simulation a buffer stores a whole disk track, and the playback rate is assumed to be 150 KB/sec. Disk head movement is based on the Grouped Sweeping Scheduling

(GSS) algorithm [27], where the number of groups is one. The amount of extra time left in each disk service round after delivering one track for each stream gives us the disk bandwidth utilization percentage. The average disk bandwidth utilization is the statistical average over all the disk service rounds, and the maximum disk bandwidth utilization is the largest disk bandwidth utilization of any disk service round during the simulation. From Figure 7 we selected eight as the maximum number of simultaneous streams that can be serviced continuously.

Figure 7. Determining the maximum number of serviceable streams

The disk model uses multiple buffers for retrieving each multimedia stream. One buffer is loaded with data from the disk while the data in another buffer is delivered to the client workstation. Multiple buffering results in efficient continuous data retrieval from the disk but forces a startup delay of at least one disk service round as the first buffer is filled for each stream. Startup delay is the amount of time between the client request and the delivery of the first data block to the client. Usually two buffers are allocated to each stream for double buffering, except when multiple tracks of a stream are prefetched for scheduling optimization. During the simulation we capture disk bandwidth utilization, startup delay, and interstream latency, which is the gap between two consecutive streams. For the experiments using the proposed predelivery optimization, we also capture memory usage. The resulting output data are collected after a fixed-length warm-up period to remove startup transients. We also perform 10 simulation runs of 10,000 seconds each to obtain average values, considering the stochastic nature of the simulation.
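The Table I parameters determine both the length of a disk service round and a worst-case per-request service time, from which a conservative admission bound follows. The sketch below is our own back-of-the-envelope check, assuming MB means 10^6 bytes and charging every request a maximal seek plus a full rotation; it yields a bound of four streams, below the empirical value of eight from Figure 7, precisely because actual seeks and rotational delays are usually much shorter than the worst case.

```python
import math

# Table I parameters (times in s, sizes in bytes; 1 MB = 10**6 B assumed)
TRACK = 72 * 512                  # one request retrieves a full 36 KB track
PLAYBACK = 150_000                # client consumption rate, bytes/sec
ROUND = TRACK / PLAYBACK          # disk service round, about 0.246 s
XFER = TRACK / 2_300_000          # media transfer time per track
ROTATION = 60.0 / 4002            # one full revolution (worst-case latency)
HEAD_SWITCH, CONTROLLER = 0.0016, 0.0022

def seek_time(d):
    """Piecewise seek-time model from Table I; d in cylinders, result in s."""
    if d <= 0:
        return 0.0
    ms = 3.24 + 0.400 * math.sqrt(d) if d < 383 else 8.00 + 0.008 * d
    return ms / 1000

def worst_case_bound(max_distance=1961):
    """How many one-track requests fit in a round if every request pays
    the maximum seek, a full rotation, and all fixed overheads."""
    per_request = (seek_time(max_distance) + ROTATION
                   + HEAD_SWITCH + CONTROLLER + XFER)
    return int(ROUND // per_request)
```

This gap between the worst-case bound and the measured limit is why the maximum number of serviceable streams is determined experimentally rather than analytically.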

Each run consists of a fixed number of clients, and each client makes a new request when its current request is completely served.

3.2. Performance of Scan Scheduling

The goal of using query scripts is to eliminate the gap between two consecutive streams that we may have with the baseline approach. This gap is a result of the client individually requesting each stream. The delivery manager queues the requests if the required disk bandwidth is not available, and streams are then serviced on a first-come-first-served basis. The length of this gap increases as the number of streams increases, because the system work load increases. Figure 8 shows the average and maximum latency between streams for the baseline experiments. The baseline approach has an average latency of 0.75 seconds for three to five clients. However, the average interstream latency increases rapidly as disk contention becomes higher, which occurs after six clients. At this higher load there are more requests than available resources, and the system queues the requests for streams. The interstream latency has the effect of creating gaps and problems in the delivery ordering.

Figure 8. Interstream latencies

With larger loads the interstream latency can increase to 30 to 65 seconds between every two streams. In this simulation, where the client request has four playback intervals, the total interstream latency can easily be several minutes long. With query scripts there are no gaps between streams, as shown in Figure 8. The scheduling algorithm ensures sufficient resources for the delivery of all the streams in the query scripts.

The benefit of contiguous presentation of each query script comes at the cost of increased startup delay. Figures 9 and 10 show the average and maximum startup delay of the baseline and scan scheduling experiments. The startup delay of the baseline experiments follows the pattern of the interstream latency since there is no scheduling: delivery begins whenever system resources become available. The scan scheduling experiments show an increase in startup delay since the system starts the delivery of a query script only when it can maintain a feasible schedule. The startup delay becomes high as the contention for system resources grows, because the scheduler needs to delay the start times of query scripts to manage contention.

Figure 9. Average startup delay

Figure 10. Maximum startup delay

Figure 11 illustrates the delivery characteristics of the baseline approach and the scan scheduling based on the query script. The baseline approach experiences both interstream latency and loss of synchronization due to the heavy system work load. The scan scheduling approach may experience a larger startup delay since the system creates a delivery schedule that ensures sufficient resources for the complete delivery of the query script. The baseline approach can start the delivery earlier since no such guarantee exists.

Figure 11. Delivery timing

Predelivery optimization lowers the startup delay, as can be seen from Figures 9 and 10. This benefit comes from prefetching data and thus being able to find a feasible schedule with earlier start times for query scripts. The optimizer uses memory to hold prefetched blocks of the multimedia streams. Figure 12 shows the memory utilization for the predelivery optimization of the scan scheduling algorithm. Memory usage grows quickly as the number of clients increases. Our next simulation experiment limited the amount of memory for predelivery to 20 MB. Figure 13 shows little change in the average and maximum startup delay with this memory limitation, because most of the infeasible schedules require only a small amount of predelivery to become feasible. Figure 14 shows the distribution of actual startup delays for a simulation run of scan scheduling with optimization. The distribution shows that a high percentage of requests start within a short period: with five clients, 65% of all requests start within 9 seconds, and with six clients, 65% start within 28 seconds. The next analysis examines the disk bandwidth utilization. We measure the disk bandwidth utilization as a function of the disk's idle time during each disk service round. In Figure 15 we can see that the average disk utilization is highest for the baseline cases.
Without scheduling, the system services requests as quickly as possible, which results in a higher disk utilization level.

Figure 12. Memory usage for the predelivery optimization
Figure 13. Startup delay with/without memory limitation

However, since these simulations were performed for a fixed continuous workload, a lower disk bandwidth utilization at the same delivery throughput indicates a more efficient scheduling methodology: more time remains in each disk service round for additional requests, including non-stream requests. Interleaving the access of continuous streams with non-continuous data allows the same disk system to service both multimedia objects and traditional file accesses. Figure 15 also shows that limiting the buffer memory size for predelivery to 20 MB has no effect on the disk bandwidth utilization.
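The utilization measure used here can be sketched in a few lines (a minimal illustration, not the paper's instrumentation; the function name and the exact formula, one minus the idle fraction of the round, are assumptions consistent with the description above):

```python
def disk_utilization(round_length, idle_time):
    """Fraction of a disk service round spent transferring data.

    Utilization is derived from the idle time left in the round: a
    lower value leaves more room in each round for additional
    (including non-stream) requests.
    """
    if not 0 <= idle_time <= round_length:
        raise ValueError("idle time must lie within the service round")
    return 1.0 - idle_time / round_length

# A 1.0 s service round with 0.25 s of idle time is 75% utilized.
print(disk_utilization(1.0, 0.25))  # 0.75
```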

Figure 14. Distribution of startup delay
Figure 15. Average disk bandwidth utilization

In this article, we considered constant-bit-rate (CBR) streams with a constant playback rate of 150 KB/sec. However, modern digital videos use variable-bit-rate (VBR) compression, such as MPEG [13, 15, 21, 22], where the data consumption rate during playback differs from frame to frame. For the retrieval of VBR streams, we can use Constant Time Length (CTL) retrieval or Constant Data Length (CDL) retrieval [6]. With CTL retrieval, a variable amount of data is retrieved for a stream during each disk service round, but the playback time during the next service round is the same. On the other hand, with CDL retrieval, a fixed amount of data is retrieved for a stream during

each disk service round, but the playback time during the next disk service round varies. If the buffer already has enough data for playback during the next disk service round, no data is retrieved during the current round [1]. If we use CDL retrieval, the scan scheduling algorithm can be applied as described above, because a fixed amount of data is retrieved for a stream during each disk service round. To use CTL retrieval, we can adopt the Generalized CTL (GCTL) proposed in [2]. In GCTL, the duration of the CTL retrieval round is an integer multiple of the duration of the disk service round, so that a fixed amount of data is retrieved for a stream during each disk service round. Thus, the scan scheduling algorithm can be used along with GCTL retrieval.

Networking is a critical issue in the delivery of multimedia streams, especially for VBR streams [10, 14, 26]. However, this article focuses on the retrieval scheduling of multimedia streams from a disk subsystem within a media server, and networking issues are beyond its scope.

As the main goal of the scan scheduling algorithm is to satisfy the interstream synchronization specifications, supporting video-like operations, such as rewind and fast-forward, on a specific stream is not easy. If a user initiates a video-like operation on a stream being presented, that stream is removed from the corresponding query script, and the system should change the current delivery schedule. The video-like operation should be regarded as a separate request, and its disk bandwidth requirement should be considered when generating the new schedule. If many query scripts are being serviced and not enough disk bandwidth is available, the start of the requested video-like operation may be delayed.
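The difference between CDL and GCTL retrieval described above can be illustrated with a small sketch (the function names, sizes, and byte-based bookkeeping are illustrative assumptions, not taken from [1] or [2]):

```python
def cdl_rounds(block_size, total_bytes):
    """Constant Data Length: a fixed amount of data per disk service
    round, so the playback time covered by each block varies."""
    rounds = []
    remaining = total_bytes
    while remaining > 0:
        rounds.append(min(block_size, remaining))
        remaining -= rounds[-1]
    return rounds

def gctl_rounds(ctl_bytes, k):
    """Generalized CTL: one CTL retrieval round spans k disk service
    rounds, so a fixed ctl_bytes / k is transferred in each of them,
    which is what lets the scan scheduling algorithm apply."""
    per_round = ctl_bytes / k
    return [per_round] * k

print(cdl_rounds(64, 200))  # [64, 64, 64, 8]
print(gctl_rounds(300, 3))  # [100.0, 100.0, 100.0]
```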
4. Group Scheduling

In this section we look at an additional optimization technique that extends the scan scheduling algorithm to improve system resource utilization. The scan scheduling algorithm does not perform any optimization after it finds a feasible schedule with the earliest start time for a new query script. However, if we find other feasible schedules, we can select one that makes better use of resources and reduces the overall query script startup delay. For example, if the first feasible schedule for a new query script happens to have a set of intervals with high disk bandwidth utilization late in the schedule, then all following query scripts must start after this point of high utilization. However, if the new query script's start time is slightly delayed, then a new schedule is

created that might avoid the intervals with high utilization levels, and hence allow other query scripts to start earlier.

Figure 16. Group scheduling algorithm: apply the scan scheduling algorithm to find a feasible schedule; for each start time in the time-window, integrate the new query script into a test schedule and, if the test schedule is feasible, include it in the set of candidate schedules; finally, apply the metrics to the candidate schedules and select the best one.

4.1. Creation of Candidate Schedules

To find multiple feasible candidate schedules for a new query script, we check all possible start times within a fixed time span from the earliest start time of the query script, which we obtained by applying the scan scheduling algorithm. This fixed time span is called the time-window, and it prevents us from considering too many candidate schedules with different start times for the new query script. Moreover, it is not desirable to delay the start time too much in favor of other performance metrics. Once we have a set of candidate schedules, we can select a schedule based on some performance metrics. This scheduling algorithm is named group scheduling and is shown in Figure 16.

The scan scheduling algorithm always minimizes the startup latency of a new query script, since it selects the first feasible start time. Group scheduling may increase the startup delay of the new query script by selecting a start time that makes better use of system resources. However, limiting this additional latency is important, so that the process of optimizing
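The candidate-generation loop of Figure 16 can be sketched as follows (a simplified illustration: the schedule representation, the `is_feasible` test, and the discrete list of start times are all assumptions; the real algorithm integrates the query script into a test schedule at each step):

```python
def candidate_schedules(existing, new_script, start_times, window, is_feasible):
    """Collect feasible start times within the time-window.

    `is_feasible(existing, new_script, t)` stands in for building a
    test schedule at start time t and checking its feasibility. The
    earliest feasible start time anchors the time-window, which
    bounds the extra startup delay a candidate may introduce.
    """
    earliest = min(t for t in start_times
                   if is_feasible(existing, new_script, t))
    candidates = []
    for t in sorted(start_times):
        if earliest <= t <= earliest + window:
            if is_feasible(existing, new_script, t):
                candidates.append(t)
    return candidates

# Toy feasibility test: the disk is saturated before time 5.
feasible_from_5 = lambda existing, script, t: t >= 5
print(candidate_schedules(None, "qs", [0, 5, 8, 20], 10, feasible_from_5))
# [5, 8]: 0 is infeasible, 20 falls outside the 10-second window
```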

the delivery schedule of multiple query scripts does not inadvertently make the new query script suffer an unduly long startup delay. Some metrics could select a start time that maximizes the startup delay of the new query script. For example, selecting a schedule with the lowest maximum disk bandwidth utilization level may seem desirable, so that we can accommodate more query scripts later. However, this strategy leads to starting the new query script at the end of all currently scheduled query scripts. Figure 17 shows a sample case where this situation occurs. Figure 17(a) depicts the existing schedule and the new query script. Immediately starting the new query script results in a delivery schedule with a maximum utilization of 8, as shown in Figure 17(b). Starting the new query script at the end of the current schedule, as in Figure 17(c), results in a maximum utilization of 5. The strategy will thus select the schedule in Figure 17(c), which maximally delays the start time of the new query script.

Figure 17. Increased startup delay problem: (a) existing delivery schedule and a new query script; (b) test schedule in which the new query script starts immediately; (c) test schedule in which the new query script starts at the end.

The time-window limits the additional startup delay of the new query script, because only feasible schedules with a start time falling within the time-window are allowed as candidate schedules. Varying the length of the time-window controls the maximum startup delay of the new query script. The trade-off is that a larger time-window provides more candidate schedules to select from, at the cost of potentially increasing the startup delay of the new query script. In our simulation we used a time-window of 10 seconds.
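The arithmetic behind Figure 17 can be reproduced by overlaying per-interval utilization levels (the two profiles below are illustrative stand-ins, chosen only to yield the maximum utilizations of 8 and 5 quoted above):

```python
def overlay(current, new, start):
    """Combine per-interval utilization levels when the new query
    script is placed at interval index `start`."""
    length = max(len(current), start + len(new))
    total = [0] * length
    for i, u in enumerate(current):
        total[i] += u
    for i, u in enumerate(new):
        total[start + i] += u
    return total

current = [5, 5, 2]  # hypothetical existing schedule's utilization levels
new = [3, 3]         # hypothetical new query script's utilization levels
print(max(overlay(current, new, 0)))  # 8: starting immediately, Fig. 17(b)
print(max(overlay(current, new, 3)))  # 5: starting at the end, Fig. 17(c)
```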

4.2. Selection Criteria

Once a set of candidate schedules is created, the next task is to select the best schedule. The selection algorithm ranks each candidate schedule based on a metric and then selects the schedule with the highest rank as the new delivery schedule. The metric identifies schedules that exhibit desirable properties with respect to the optimization goals. One optimization goal is to reduce the sum of all query scripts' startup delays. Another is to make better use of system resources such as disk bandwidth. We use disk bandwidth utilization since it measures the efficiency of data transfer between the disk drive and memory. Basically, we seek schedules that minimize spikes and other large changes in disk bandwidth utilization, so that the startup delay of later query scripts will be reduced.

The example in Figure 18 depicts how poor selection of a schedule impacts the query scripts submitted later. In Figure 18(b) the scheduler integrates the new query script into the existing schedule such that one service interval has a high utilization of 8. As a result, the start of the next query script is delayed until after that interval. However, if the scheduler integrates the new query script as in Figure 18(c) by delaying its start, then the maximum utilization level becomes 6 and the next query script may start earlier.

Figure 18. Optimizing query script integration: (a) existing delivery schedule and a new query script; (b) test schedule in which the new query script starts immediately; (c) starting the new query script later avoids a high utilization level.

With these basic optimization goals in mind, we identified the optimization strategies that are summarized in Table II and fully described below.

Table II. Summary of selection metrics

Smallest Startup Delay: scan scheduling algorithm
Biggest Soonest: highest utilization level early in the schedule
Monotonic Decreasing: consistently decreasing utilization level
Highest Floor: schedule with the highest minimum utilization level
Lowest Ceiling: schedule with the lowest maximum utilization level
Minimum Differential: schedule with the smallest difference between the maximum and minimum utilization levels
Time at Minimum: schedule with the largest total time period at the minimum utilization level

Smallest Startup Delay. The first strategy corresponds to the scan scheduling algorithm, where we select the first feasible schedule. The advantages of this methodology are that each query script starts as quickly as possible and the optimization overhead is low, because no other candidate schedules are considered.

Biggest Soonest. In this strategy, we rank the schedules based on the end time of the last interval with the highest disk bandwidth utilization. The scheduler then selects the schedule with the smallest such end time, which corresponds to the schedule whose highest utilization level ends sooner than in the other schedules. As a result, we can avoid cases where an interval late in the schedule delays other query scripts because of its high utilization level. Selecting schedules whose high utilization levels occur early allows other query scripts to start earlier.

Monotonic Decreasing. In this strategy, we record the earliest time within each schedule from which the bandwidth utilization remains the same or decreases. The earlier this time, the better the schedule. The biggest-soonest strategy is based only upon the last time of the highest bandwidth utilization, and thus ignores the behavior of all other intervals in the schedule. The monotonic decreasing strategy, on the other hand, measures the tendency

to have higher bandwidth utilization levels sooner and lower utilization levels later in the schedule. Selecting a schedule that exhibits decreased bandwidth utilization late in the schedule again makes it easier to schedule other query scripts with smaller startup latencies, and increases the near-term disk bandwidth utilization.

Highest Floor. This strategy looks at the minimum disk bandwidth utilization as a measure of the schedule's efficiency. The "floor" represents the lowest bandwidth utilization level of the schedule. Since most schedules have at least one interval with a low utilization level, the lowest floor is not considered; instead, in this strategy we select the schedule with the highest floor. The idea is that schedules with consistently high utilization levels can retrieve more data in the same period of time, and hence make better use of the resources.

Lowest Ceiling. The "ceiling" represents the highest bandwidth utilization level of the schedule, and this strategy selects the schedule with the lowest ceiling. This is an attempt to select a schedule with consistent disk bandwidth utilization. Selecting a schedule with a high ceiling does not make sense, because a single short interval with maximum utilization occurs frequently. Schedules with consistent utilization, as characterized by low ceilings, can allow other query scripts to be scheduled at the earliest possible time, which reduces their startup delay.

Minimum Differential. A logical combination of the highest-floor and lowest-ceiling strategies described above is to select the schedule that has the smallest difference between its ceiling and floor. Such schedules have consistent performance, which makes it easier to integrate other query scripts into them.

Time at Minimum. The final metric that we considered is the amount of time a schedule spends at the floor.
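The metrics of Table II can be expressed as small functions over a schedule's per-interval utilization profile (a sketch under the assumption that a schedule is summarized as a list of utilization levels; the names and the index-based stand-in for "end time" are hypothetical):

```python
def ceiling(profile):
    """Lowest Ceiling ranks schedules by this value, ascending."""
    return max(profile)

def floor(profile):
    """Highest Floor ranks schedules by this value, descending."""
    return min(profile)

def differential(profile):
    """Minimum Differential: ceiling minus floor, lower is better."""
    return max(profile) - min(profile)

def time_at_minimum(profile):
    """Time at Minimum: intervals spent at the floor, higher is better."""
    return profile.count(min(profile))

def biggest_soonest(profile):
    """Biggest Soonest: end index of the LAST interval at the ceiling,
    lower is better (the peak ends sooner)."""
    peak = max(profile)
    return max(i for i, u in enumerate(profile) if u == peak) + 1

def monotonic_from(profile):
    """Monotonic Decreasing: earliest index from which utilization
    never rises again, lower is better."""
    for i in range(len(profile)):
        tail = profile[i:]
        if all(a >= b for a, b in zip(tail, tail[1:])):
            return i
    return len(profile) - 1

profile = [2, 6, 4, 4, 1]  # illustrative utilization levels
print(ceiling(profile), floor(profile), differential(profile))  # 6 1 5
print(time_at_minimum(profile), biggest_soonest(profile))       # 1 2
print(monotonic_from(profile))                                  # 1
```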
The longer the schedule stays at the floor utilization, the more disk bandwidth is available to other query scripts.

4.3. Adding Predelivery Optimization to the Schedule Selection

Let's consider the role of predelivery optimization in creating candidate schedules. There are three basic approaches. The first is to perform

predelivery optimization only on the finally selected delivery schedule. The second is to perform predelivery optimization on all potential schedules, one for each possible start time of the new query script, to identify the feasible candidate schedules. The final approach is to make the predelivery overhead a part of the selection metric.

The first case, performing predelivery optimization only on the selected schedule, reduces the optimization time, but it decreases the possibility of finding the best schedule. Predelivery optimization makes more schedules feasible at the cost of adding more time to the group scheduling process. The additional time necessary for predelivery optimization is around 100 ms per schedule in our simulation. However, the simulation results show that we can reduce the average startup delay of query scripts by about 5 seconds if we apply predelivery optimization (with a time-window of 10 seconds) to all potential schedules. Thus, performing predelivery optimization on all potential schedules is beneficial. The last option, including the predelivery overhead in the metric, can be done simply by using the total amount of memory used by predelivery as a tie-breaker: when the selection metric values are equal, the memory usage for predelivery determines which schedule to select.

4.4. Performance of Group Scheduling

We use the same simulation environment and disk model for the simulation of group scheduling as were used in the simulation of scan scheduling. For each query script requested by a client, the group scheduling algorithm first creates a set of candidate schedules by selecting feasible start times for the new query script. As with the scan scheduling algorithm, only the start and end points of the service intervals in the current schedule are considered as potential start times of the new query script. This dramatically cuts down the number of candidate schedules to be evaluated.
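The tie-breaking rule just described can be sketched as follows (the schedule representation and both accessor functions are assumptions; for simplicity the sketch treats lower metric values as better, which matches metrics such as Lowest Ceiling):

```python
def select_schedule(candidates, metric, predelivery_memory):
    """Pick the candidate with the best (here: lowest) metric value;
    when metric values tie, prefer the candidate that needs less
    predelivery memory, as described above."""
    return min(candidates, key=lambda s: (metric(s), predelivery_memory(s)))

# Illustrative candidates: (name, metric value, predelivery memory in MB).
scheds = [("a", 5, 12.0), ("b", 5, 3.0), ("c", 7, 0.0)]
best = select_schedule(scheds,
                       metric=lambda s: s[1],
                       predelivery_memory=lambda s: s[2])
print(best[0])  # "b": ties with "a" on the metric but uses less memory
```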
The scheduler then creates a test schedule for each potential start time of the new query script. If a test schedule is not feasible, predelivery optimization is applied in an attempt to make it feasible. The collection of all feasible schedules found in this trial integration process becomes the set of candidate schedules, and one of them is then selected for the new query script based on the selection metric. In our simulations, we limited the buffer memory size for predelivery optimization to 20 MB.

First we compare the various selection strategies (using different metrics) in terms of the average startup delay of the scheduled query scripts. Figure 19 shows the results of experiments with four to ten clients. The scan algorithm has a slightly higher average startup delay in

many cases. With six clients, resource contention begins, and it is at this point that group scheduling begins to influence the results. However, since the average startup delays with the different selection strategies are quite similar, we can conclude that much of the improvement comes from the small delay added to the start time of the new query script. Selecting the best schedule from the candidate schedules with different start times provides a 5-7% improvement in the average startup delay.

Figure 19. Average startup delay
Figure 20. Maximum startup delay

A larger improvement is seen in the maximum startup delay, as shown in Figure 20. Compared to the scan scheduling, the group scheduling


More information

MobiLink Performance. A whitepaper from ianywhere Solutions, Inc., a subsidiary of Sybase, Inc.

MobiLink Performance. A whitepaper from ianywhere Solutions, Inc., a subsidiary of Sybase, Inc. MobiLink Performance A whitepaper from ianywhere Solutions, Inc., a subsidiary of Sybase, Inc. Contents Executive summary 2 Introduction 3 What are the time-consuming steps in MobiLink synchronization?

More information

A Study of the Performance Tradeoffs of a Tape Archive

A Study of the Performance Tradeoffs of a Tape Archive A Study of the Performance Tradeoffs of a Tape Archive Jason Xie (jasonxie@cs.wisc.edu) Naveen Prakash (naveen@cs.wisc.edu) Vishal Kathuria (vishal@cs.wisc.edu) Computer Sciences Department University

More information

Real-Time (Paradigms) (47)

Real-Time (Paradigms) (47) Real-Time (Paradigms) (47) Memory: Memory Access Protocols Tasks competing for exclusive memory access (critical sections, semaphores) become interdependent, a common phenomenon especially in distributed

More information

Appendix B. Standards-Track TCP Evaluation

Appendix B. Standards-Track TCP Evaluation 215 Appendix B Standards-Track TCP Evaluation In this appendix, I present the results of a study of standards-track TCP error recovery and queue management mechanisms. I consider standards-track TCP error

More information

DISTRIBUTED HIGH-SPEED COMPUTING OF MULTIMEDIA DATA

DISTRIBUTED HIGH-SPEED COMPUTING OF MULTIMEDIA DATA DISTRIBUTED HIGH-SPEED COMPUTING OF MULTIMEDIA DATA M. GAUS, G. R. JOUBERT, O. KAO, S. RIEDEL AND S. STAPEL Technical University of Clausthal, Department of Computer Science Julius-Albert-Str. 4, 38678

More information

Operating System Concepts

Operating System Concepts Chapter 9: Virtual-Memory Management 9.1 Silberschatz, Galvin and Gagne 2005 Chapter 9: Virtual Memory Background Demand Paging Copy-on-Write Page Replacement Allocation of Frames Thrashing Memory-Mapped

More information

ECE519 Advanced Operating Systems

ECE519 Advanced Operating Systems IT 540 Operating Systems ECE519 Advanced Operating Systems Prof. Dr. Hasan Hüseyin BALIK (8 th Week) (Advanced) Operating Systems 8. Virtual Memory 8. Outline Hardware and Control Structures Operating

More information

Multiprocessor and Real-Time Scheduling. Chapter 10

Multiprocessor and Real-Time Scheduling. Chapter 10 Multiprocessor and Real-Time Scheduling Chapter 10 1 Roadmap Multiprocessor Scheduling Real-Time Scheduling Linux Scheduling Unix SVR4 Scheduling Windows Scheduling Classifications of Multiprocessor Systems

More information

Advantages and disadvantages

Advantages and disadvantages Advantages and disadvantages Advantages Disadvantages Asynchronous transmission Simple, doesn't require synchronization of both communication sides Cheap, timing is not as critical as for synchronous transmission,

More information

Data Storage and Query Answering. Data Storage and Disk Structure (2)

Data Storage and Query Answering. Data Storage and Disk Structure (2) Data Storage and Query Answering Data Storage and Disk Structure (2) Review: The Memory Hierarchy Swapping, Main-memory DBMS s Tertiary Storage: Tape, Network Backup 3,200 MB/s (DDR-SDRAM @200MHz) 6,400

More information

!! What is virtual memory and when is it useful? !! What is demand paging? !! When should pages in memory be replaced?

!! What is virtual memory and when is it useful? !! What is demand paging? !! When should pages in memory be replaced? Chapter 10: Virtual Memory Questions? CSCI [4 6] 730 Operating Systems Virtual Memory!! What is virtual memory and when is it useful?!! What is demand paging?!! When should pages in memory be replaced?!!

More information

CHAPTER 5 ANT-FUZZY META HEURISTIC GENETIC SENSOR NETWORK SYSTEM FOR MULTI - SINK AGGREGATED DATA TRANSMISSION

CHAPTER 5 ANT-FUZZY META HEURISTIC GENETIC SENSOR NETWORK SYSTEM FOR MULTI - SINK AGGREGATED DATA TRANSMISSION CHAPTER 5 ANT-FUZZY META HEURISTIC GENETIC SENSOR NETWORK SYSTEM FOR MULTI - SINK AGGREGATED DATA TRANSMISSION 5.1 INTRODUCTION Generally, deployment of Wireless Sensor Network (WSN) is based on a many

More information

Introduction to Real-Time Communications. Real-Time and Embedded Systems (M) Lecture 15

Introduction to Real-Time Communications. Real-Time and Embedded Systems (M) Lecture 15 Introduction to Real-Time Communications Real-Time and Embedded Systems (M) Lecture 15 Lecture Outline Modelling real-time communications Traffic and network models Properties of networks Throughput, delay

More information

Achieving Distributed Buffering in Multi-path Routing using Fair Allocation

Achieving Distributed Buffering in Multi-path Routing using Fair Allocation Achieving Distributed Buffering in Multi-path Routing using Fair Allocation Ali Al-Dhaher, Tricha Anjali Department of Electrical and Computer Engineering Illinois Institute of Technology Chicago, Illinois

More information

CHAPTER 4 CALL ADMISSION CONTROL BASED ON BANDWIDTH ALLOCATION (CACBA)

CHAPTER 4 CALL ADMISSION CONTROL BASED ON BANDWIDTH ALLOCATION (CACBA) 92 CHAPTER 4 CALL ADMISSION CONTROL BASED ON BANDWIDTH ALLOCATION (CACBA) 4.1 INTRODUCTION In our previous work, we have presented a cross-layer based routing protocol with a power saving technique (CBRP-PS)

More information

Virtual Memory Outline

Virtual Memory Outline Virtual Memory Outline Background Demand Paging Copy-on-Write Page Replacement Allocation of Frames Thrashing Memory-Mapped Files Allocating Kernel Memory Other Considerations Operating-System Examples

More information

36 IEEE TRANSACTIONS ON BROADCASTING, VOL. 54, NO. 1, MARCH 2008

36 IEEE TRANSACTIONS ON BROADCASTING, VOL. 54, NO. 1, MARCH 2008 36 IEEE TRANSACTIONS ON BROADCASTING, VOL. 54, NO. 1, MARCH 2008 Continuous-Time Collaborative Prefetching of Continuous Media Soohyun Oh, Beshan Kulapala, Andréa W. Richa, and Martin Reisslein Abstract

More information

Configuring QoS. Finding Feature Information. Prerequisites for QoS. General QoS Guidelines

Configuring QoS. Finding Feature Information. Prerequisites for QoS. General QoS Guidelines Finding Feature Information, on page 1 Prerequisites for QoS, on page 1 Restrictions for QoS, on page 2 Information About QoS, on page 2 How to Configure QoS, on page 10 Monitoring Standard QoS, on page

More information

Chapter 9: Virtual Memory

Chapter 9: Virtual Memory Chapter 9: Virtual Memory Silberschatz, Galvin and Gagne 2013 Chapter 9: Virtual Memory Background Demand Paging Copy-on-Write Page Replacement Allocation of Frames Thrashing Memory-Mapped Files Allocating

More information

Lotus Sametime 3.x for iseries. Performance and Scaling

Lotus Sametime 3.x for iseries. Performance and Scaling Lotus Sametime 3.x for iseries Performance and Scaling Contents Introduction... 1 Sametime Workloads... 2 Instant messaging and awareness.. 3 emeeting (Data only)... 4 emeeting (Data plus A/V)... 8 Sametime

More information

Page 1. Magnetic Disk Purpose Long term, nonvolatile storage Lowest level in the memory hierarchy. Typical Disk Access Time

Page 1. Magnetic Disk Purpose Long term, nonvolatile storage Lowest level in the memory hierarchy. Typical Disk Access Time Review: Major Components of a Computer Processor Control Datapath Cache Memory Main Memory Secondary Memory (Disk) Devices Output Input Magnetic Disk Purpose Long term, nonvolatile storage Lowest level

More information

Virtual Memory COMPSCI 386

Virtual Memory COMPSCI 386 Virtual Memory COMPSCI 386 Motivation An instruction to be executed must be in physical memory, but there may not be enough space for all ready processes. Typically the entire program is not needed. Exception

More information

The Google File System

The Google File System October 13, 2010 Based on: S. Ghemawat, H. Gobioff, and S.-T. Leung: The Google file system, in Proceedings ACM SOSP 2003, Lake George, NY, USA, October 2003. 1 Assumptions Interface Architecture Single

More information

RAID SEMINAR REPORT /09/2004 Asha.P.M NO: 612 S7 ECE

RAID SEMINAR REPORT /09/2004 Asha.P.M NO: 612 S7 ECE RAID SEMINAR REPORT 2004 Submitted on: Submitted by: 24/09/2004 Asha.P.M NO: 612 S7 ECE CONTENTS 1. Introduction 1 2. The array and RAID controller concept 2 2.1. Mirroring 3 2.2. Parity 5 2.3. Error correcting

More information

Managing Caching Performance and Differentiated Services

Managing Caching Performance and Differentiated Services CHAPTER 10 Managing Caching Performance and Differentiated Services This chapter explains how to configure TCP stack parameters for increased performance ant throughput and how to configure Type of Service

More information

Outlines. Chapter 2 Storage Structure. Structure of a DBMS (with some simplification) Structure of a DBMS (with some simplification)

Outlines. Chapter 2 Storage Structure. Structure of a DBMS (with some simplification) Structure of a DBMS (with some simplification) Outlines Chapter 2 Storage Structure Instructor: Churee Techawut 1) Structure of a DBMS 2) The memory hierarchy 3) Magnetic tapes 4) Magnetic disks 5) RAID 6) Disk space management 7) Buffer management

More information

CSCI-GA Database Systems Lecture 8: Physical Schema: Storage

CSCI-GA Database Systems Lecture 8: Physical Schema: Storage CSCI-GA.2433-001 Database Systems Lecture 8: Physical Schema: Storage Mohamed Zahran (aka Z) mzahran@cs.nyu.edu http://www.mzahran.com View 1 View 2 View 3 Conceptual Schema Physical Schema 1. Create a

More information

Chapter 7 CONCLUSION

Chapter 7 CONCLUSION 97 Chapter 7 CONCLUSION 7.1. Introduction A Mobile Ad-hoc Network (MANET) could be considered as network of mobile nodes which communicate with each other without any fixed infrastructure. The nodes in

More information

Chapter 8: Virtual Memory. Operating System Concepts

Chapter 8: Virtual Memory. Operating System Concepts Chapter 8: Virtual Memory Silberschatz, Galvin and Gagne 2009 Chapter 8: Virtual Memory Background Demand Paging Copy-on-Write Page Replacement Allocation of Frames Thrashing Memory-Mapped Files Allocating

More information

I/O Buffering and Streaming

I/O Buffering and Streaming I/O Buffering and Streaming I/O Buffering and Caching I/O accesses are reads or writes (e.g., to files) Application access is arbitary (offset, len) Convert accesses to read/write of fixed-size blocks

More information

Database Systems. November 2, 2011 Lecture #7. topobo (mit)

Database Systems. November 2, 2011 Lecture #7. topobo (mit) Database Systems November 2, 2011 Lecture #7 1 topobo (mit) 1 Announcement Assignment #2 due today Assignment #3 out today & due on 11/16. Midterm exam in class next week. Cover Chapters 1, 2,

More information

Database Systems II. Secondary Storage

Database Systems II. Secondary Storage Database Systems II Secondary Storage CMPT 454, Simon Fraser University, Fall 2009, Martin Ester 29 The Memory Hierarchy Swapping, Main-memory DBMS s Tertiary Storage: Tape, Network Backup 3,200 MB/s (DDR-SDRAM

More information

Multimedia Streaming. Mike Zink

Multimedia Streaming. Mike Zink Multimedia Streaming Mike Zink Technical Challenges Servers (and proxy caches) storage continuous media streams, e.g.: 4000 movies * 90 minutes * 10 Mbps (DVD) = 27.0 TB 15 Mbps = 40.5 TB 36 Mbps (BluRay)=

More information

Disks & Files. Yanlei Diao UMass Amherst. Slides Courtesy of R. Ramakrishnan and J. Gehrke

Disks & Files. Yanlei Diao UMass Amherst. Slides Courtesy of R. Ramakrishnan and J. Gehrke Disks & Files Yanlei Diao UMass Amherst Slides Courtesy of R. Ramakrishnan and J. Gehrke DBMS Architecture Query Parser Query Rewriter Query Optimizer Query Executor Lock Manager for Concurrency Access

More information

Hash-Based Indexing 165

Hash-Based Indexing 165 Hash-Based Indexing 165 h 1 h 0 h 1 h 0 Next = 0 000 00 64 32 8 16 000 00 64 32 8 16 A 001 01 9 25 41 73 001 01 9 25 41 73 B 010 10 10 18 34 66 010 10 10 18 34 66 C Next = 3 011 11 11 19 D 011 11 11 19

More information

High Availability through Warm-Standby Support in Sybase Replication Server A Whitepaper from Sybase, Inc.

High Availability through Warm-Standby Support in Sybase Replication Server A Whitepaper from Sybase, Inc. High Availability through Warm-Standby Support in Sybase Replication Server A Whitepaper from Sybase, Inc. Table of Contents Section I: The Need for Warm Standby...2 The Business Problem...2 Section II:

More information

different problems from other networks ITU-T specified restricted initial set Limited number of overhead bits ATM forum Traffic Management

different problems from other networks ITU-T specified restricted initial set Limited number of overhead bits ATM forum Traffic Management Traffic and Congestion Management in ATM 3BA33 David Lewis 3BA33 D.Lewis 2007 1 Traffic Control Objectives Optimise usage of network resources Network is a shared resource Over-utilisation -> congestion

More information

Proxy Prefix Caching for Multimedia Streams

Proxy Prefix Caching for Multimedia Streams Proxy Prefix Caching for Multimedia Streams Subhabrata Seny, Jennifer Rexfordz, and Don Towsleyy ydept. of Computer Science znetworking & Distributed Systems University of Massachusetts AT&T Labs Research

More information

Cache Controller with Enhanced Features using Verilog HDL

Cache Controller with Enhanced Features using Verilog HDL Cache Controller with Enhanced Features using Verilog HDL Prof. V. B. Baru 1, Sweety Pinjani 2 Assistant Professor, Dept. of ECE, Sinhgad College of Engineering, Vadgaon (BK), Pune, India 1 PG Student

More information

Shaping Process Semantics

Shaping Process Semantics Shaping Process Semantics [Extended Abstract] Christoph M. Kirsch Harald Röck Department of Computer Sciences University of Salzburg, Austria {ck,hroeck}@cs.uni-salzburg.at Analysis. Composition of virtually

More information

4.1 Introduction 4.3 Datapath 4.4 Control 4.5 Pipeline overview 4.6 Pipeline control * 4.7 Data hazard & forwarding * 4.

4.1 Introduction 4.3 Datapath 4.4 Control 4.5 Pipeline overview 4.6 Pipeline control * 4.7 Data hazard & forwarding * 4. Chapter 4: CPU 4.1 Introduction 4.3 Datapath 4.4 Control 4.5 Pipeline overview 4.6 Pipeline control * 4.7 Data hazard & forwarding * 4.8 Control hazard 4.14 Concluding Rem marks Hazards Situations that

More information

Top-Level View of Computer Organization

Top-Level View of Computer Organization Top-Level View of Computer Organization Bởi: Hoang Lan Nguyen Computer Component Contemporary computer designs are based on concepts developed by John von Neumann at the Institute for Advanced Studies

More information

CERIAS Tech Report Autonomous Transaction Processing Using Data Dependency in Mobile Environments by I Chung, B Bhargava, M Mahoui, L Lilien

CERIAS Tech Report Autonomous Transaction Processing Using Data Dependency in Mobile Environments by I Chung, B Bhargava, M Mahoui, L Lilien CERIAS Tech Report 2003-56 Autonomous Transaction Processing Using Data Dependency in Mobile Environments by I Chung, B Bhargava, M Mahoui, L Lilien Center for Education and Research Information Assurance

More information

3. Evaluation of Selected Tree and Mesh based Routing Protocols

3. Evaluation of Selected Tree and Mesh based Routing Protocols 33 3. Evaluation of Selected Tree and Mesh based Routing Protocols 3.1 Introduction Construction of best possible multicast trees and maintaining the group connections in sequence is challenging even in

More information

CHAPTER 3 EFFECTIVE ADMISSION CONTROL MECHANISM IN WIRELESS MESH NETWORKS

CHAPTER 3 EFFECTIVE ADMISSION CONTROL MECHANISM IN WIRELESS MESH NETWORKS 28 CHAPTER 3 EFFECTIVE ADMISSION CONTROL MECHANISM IN WIRELESS MESH NETWORKS Introduction Measurement-based scheme, that constantly monitors the network, will incorporate the current network state in the

More information

I/O CANNOT BE IGNORED

I/O CANNOT BE IGNORED LECTURE 13 I/O I/O CANNOT BE IGNORED Assume a program requires 100 seconds, 90 seconds for main memory, 10 seconds for I/O. Assume main memory access improves by ~10% per year and I/O remains the same.

More information

SmartSaver: Turning Flash Drive into a Disk Energy Saver for Mobile Computers

SmartSaver: Turning Flash Drive into a Disk Energy Saver for Mobile Computers SmartSaver: Turning Flash Drive into a Disk Energy Saver for Mobile Computers Feng Chen 1 Song Jiang 2 Xiaodong Zhang 1 The Ohio State University, USA Wayne State University, USA Disks Cost High Energy

More information

Maximizing the Number of Users in an Interactive Video-on-Demand System

Maximizing the Number of Users in an Interactive Video-on-Demand System IEEE TRANSACTIONS ON BROADCASTING, VOL. 48, NO. 4, DECEMBER 2002 281 Maximizing the Number of Users in an Interactive Video-on-Demand System Spiridon Bakiras, Member, IEEE and Victor O. K. Li, Fellow,

More information

Multimedia Systems Project 3

Multimedia Systems Project 3 Effectiveness of TCP for Video Transport 1. Introduction In this project, we evaluate the effectiveness of TCP for transferring video applications that require real-time guarantees. Today s video applications

More information

Efficient support for interactive operations in multi-resolution video servers

Efficient support for interactive operations in multi-resolution video servers Multimedia Systems 7: 241 253 (1999) Multimedia Systems c Springer-Verlag 1999 Efficient support for interactive operations in multi-resolution video servers Prashant J. Shenoy, Harrick M. Vin Distributed

More information

A Routing Protocol for Utilizing Multiple Channels in Multi-Hop Wireless Networks with a Single Transceiver

A Routing Protocol for Utilizing Multiple Channels in Multi-Hop Wireless Networks with a Single Transceiver 1 A Routing Protocol for Utilizing Multiple Channels in Multi-Hop Wireless Networks with a Single Transceiver Jungmin So Dept. of Computer Science, and Coordinated Science Laboratory University of Illinois

More information

Virtual Memory. Chapter 8

Virtual Memory. Chapter 8 Virtual Memory 1 Chapter 8 Characteristics of Paging and Segmentation Memory references are dynamically translated into physical addresses at run time E.g., process may be swapped in and out of main memory

More information