
SLA-Aware Adaptive Data Broadcasting in Wireless Environments

by

Adrian Daniel Popescu

A thesis submitted in conformity with the requirements for the degree of Master of Applied Science
Graduate Department of Electrical and Computer Engineering
University of Toronto

Copyright © 2009 by Adrian Daniel Popescu

Abstract

SLA-Aware Adaptive Data Broadcasting in Wireless Environments

Adrian Daniel Popescu
Master of Applied Science
Graduate Department of Electrical and Computer Engineering
University of Toronto
2009

In mobile and wireless networks, data broadcasting for popular data items enables the efficient utilization of the limited wireless bandwidth. However, efficient data scheduling schemes are needed to fully exploit the benefits of data broadcasting. Towards this goal, several broadcast scheduling policies have been proposed. These existing schemes have mostly focused on minimizing either response time or drop rate, when requests are associated with hard deadlines. The inherent inaccuracy of hard deadlines in a dynamic mobile environment motivated us to use Service Level Agreements (SLAs), where a user specifies the utility of data as a function of its arrival time. Moreover, SLAs provide the mobile user with a quality of service specification already familiar from wired environments. Hence, in this dissertation, we propose SAAB, an SLA-aware adaptive data broadcast scheduling policy for maximizing the system utility under SLA-based performance measures. To achieve this goal, SAAB considers both the characteristics of the disseminated data objects and the SLAs associated with them. Additionally, SAAB automatically adjusts to the system workload conditions, which enables it to consistently outperform existing broadcast scheduling policies.

Acknowledgements

First of all, I would like to thank my supervisor, Professor Cristiana Amza. Without her support and guidance none of this work would have been possible. Secondly, I would like to thank Mohamed Sharaf for his involvement in this project. His knowledge in the field of broadcast scheduling, together with his guidance, contributed significantly to the outcome of this work. Thirdly, I would like to thank Daniel Lupei for his insightful comments and suggestions, which significantly contributed to the improvement of this work.

Contents

1 Introduction
2 Background
   Broadcasting Models
      On-demand Data Broadcasting
      Push-based Data Broadcasting
      Summary: On-demand vs. Push-based
   Scheduling Policies in Data Broadcast Systems
      Basic Scheduling Policies
      Specialized Scheduling Policies
3 SLA-Aware Adaptive Broadcast Scheduling
   SAAB with Tardiness-Aware Scheduling
      The Tardiness Metric
      Tardiness-Aware SAAB
   SAAB with Utility-Aware Scheduling
      The Utility Metric
      Tardiness vs. Utility
      Utility-Aware SAAB
4 Experimental Setup

5 Experiments
   Push-based Data Broadcasting
      Tardiness - Default Settings
      Tardiness - Impact of Slack Factor
      Tardiness - Impact of Skewness in Item Size Distribution
      Tardiness - Impact of Item Size Bounds
      Tardiness - Impact of Deadlines
      Utility - Default Settings
      Utility - Impact of Correlation between Popularity and Size Distributions
      Utility - Impact of Skewness in Utility Distribution
   On-demand Data Broadcasting
      Tardiness - Default Settings
      Tardiness - Impact of Slack Factor
      Tardiness - Impact of Item Size Bounds
      Tardiness - Impact of Deadlines
      Utility - Default Settings
      Utility - Impact of Correlation between Popularity and Size Distributions
   Push vs. Pull: Summary
6 Related Work
   On-demand Broadcasting
      Minimizing Response Time
      Minimizing Drop Rate
   Push-based Broadcasting
   Hybrid Approaches
   Utility and Soft Deadlines

7 Conclusions
Bibliography

List of Tables

2.1 Notations in On-demand Data Broadcasting
Workload Parameters

List of Figures

2.1 On-demand Data Broadcasting
2.2 Example of the Scheduling Queue
2.3 Notations associated with request R_i
2.4 Push-based Data Broadcasting
3.1 Sample utility function
3.2 Tardiness expressed as utility function
5.1 Tardiness for Default Settings (SF_max = 12)
5.2 Impact of Slack Factor on Tardiness (SF_max = 18)
5.3 Impact of Slack Factor on Tardiness (SF_max = 6)
5.4 Impact of Skewness in Size Distribution on Tardiness (θ_size = 2)
5.5 Impact of Item Size Bounds on Tardiness (s_max = 250K)
5.6 Impact of Item Size Bounds on Tardiness (s_max = 500K)
5.7 Impact of Deadlines on Tardiness (utilization = 0.6)
5.8 Utility for Default Settings
5.9 Impact of DEC Correlation on Utility
5.10 Impact of INC Correlation on Utility
5.11 Impact of Skewness in Utility Distribution on Utility
5.12 Tardiness for Default Settings (SF_max = 4)
5.13 Impact of Slack Factor on Tardiness (SF_max = 6)

5.14 Impact of Slack Factor on Tardiness (SF_max = 2)
5.15 Impact of Item Size Bounds on Tardiness (s_max = 250K)
5.16 Impact of Item Size Bounds on Tardiness (s_max = 500K)
5.17 Impact of Deadlines on Tardiness (utilization = 0.9)
5.18 Utility for Default Settings
5.19 Impact of DEC Correlation on Utility
5.20 Impact of INC Correlation on Utility

Chapter 1

Introduction

With the continuous dependence on wireless data access, it is only natural that mobile users would expect the same Service Level Agreements (SLAs) currently provided to counterpart applications running on stationary platforms in a wired environment. SLAs provide users with the flexibility to define the utility of delayed data. They also provide a concrete measure of system performance based on the users' perceived overall utility [21]. In this dissertation, we argue that SLAs are also a natural fit for expressing QoS requirements of mobile applications accessing time-critical data, including traffic and weather conditions, news headlines, stock quotes, and RSS feeds. Providing SLA guarantees to such applications, together with mechanisms to enforce those SLAs, ensures the usability of mobile applications. Indeed, SLA-based policies and mechanisms provide end users with useful information within their pre-specified tolerated delays. Enforcing SLAs for mobile applications requires re-thinking the design of current data dissemination techniques used in wireless networks, especially data broadcasting. Examples of data broadcasting systems include Multimedia Broadcast Multicast Services (MBMS) [28] and Terrestrial Digital Broadcasting (ISDB-T) [26]. Towards exploiting these broadcasting capabilities, several data broadcasting schemes have been developed to reduce the delays of data retrieval (e.g., [1, 34, 20, 4, 5, 6]). Based on the data transfer initiator, these schemes can be broadly classified into on-demand (i.e., pull-based) [4, 5, 6] and push-based [1, 34, 20].

Independently of the broadcasting method adopted, an efficient request scheduler is needed in order to decide the best dissemination order of requested data with the goal of optimizing a certain performance metric. Previous work on on-demand data broadcasting has focused mainly on optimizing the user-perceived response time incurred in data retrieval (e.g., [4, 5, 6, 31]). However, optimizing response time is not sufficient to maximize data usability since it overlooks the user's requirements and expectations. For instance, a user posing a request for available restaurants off an upcoming highway exit would have more stringent time requirements than a user posing a request for the evening movie schedules. As an alternative to response time, recent research on wireless data broadcasting has also looked at optimizing drop rate (also known as miss rate) (e.g., [36, 11, 24]). Under such a measure, each request is assigned a hard deadline and the system strives to minimize the number of requests missing their deadlines. However, one drawback of this approach is the complexity and inaccuracy entailed in setting those hard deadlines. Another problem with that approach is that it assumes data to be useless past the pre-specified hard deadline, hence it drops those requests which are prone to missing their deadlines. In this dissertation, we introduce a novel scheduler for data broadcasting that optimizes performance under SLA-based measures. This is particularly important in mobile environments where data is accessed under certain time constraints. Examples of such constraints include the battery lifetime of the mobile device [11] or the expected interval of data usability in location-based services, where data is valid only within a local area [36]. Additionally, the SLA requirement is often imposed by the data provider itself so as to maintain a certain performance target. Moreover, in a mobile environment, SLAs are inherently based on soft deadlines, rather than hard deadlines, due to the following reasons. First, the user is likely to set request deadlines based on conservative time estimates (e.g., the time required for reaching a particular target, such as an exit on a highway) or based on remaining battery lifetime.

Second, if the deadline is not met, the requested data is likely still useful and should be delivered; however, in this case, the service provider may be expected to pay a penalty due to the potential inconvenience of the delay for the user. In the simplest form of SLA, the utility of data is maximum if it arrives before the soft deadline, while decreasing linearly with time after the deadline has passed. This simple form of SLA is known as tardiness [32, 15]. In the more general form, the utility can be expressed using any monotonic function. Towards maximizing the utility of data broadcasting, we propose an SLA-Aware Adaptive Broadcast scheduling policy called SAAB. In particular, we first propose SAAB for optimizing the simple form of SLA defined in terms of tardiness. Further, we extend SAAB to optimize the general form of SLA defined by monotonic utility functions. In both versions of SAAB, one important challenge is the impact of request aggregation on the scheduling problem. Request aggregation is efficient since it allows for fewer data broadcasts, which in turn saves bandwidth and reduces delays. However, in the presence of SLAs, different requests for the same data item might have different SLA specifications, which could conflict. SAAB exploits such variability in SLAs in order to maximize the overall user satisfaction as defined in terms of SLAs. Moreover, our proposed SAAB policy is adaptive to workload conditions, which allows it to provide the best achievable performance in most cases. In contrast, we show that existing schedulers for optimizing response time and drop rate are sometimes also successful in optimizing performance under SLAs for some workload conditions, but not all. In fact, such schedulers might perform the best under a certain workload but perform the worst under another. In our experimental evaluation, we consider on-demand and aperiodic push-based broadcast methods. These methods are appropriate for disseminating time-critical information for mobile applications (such as traffic/weather conditions or stock quote updates), but present different tradeoffs between performance and ease of specification of client preferences at the server.

We show that SAAB achieves significant performance improvements in both cases when compared with state-of-the-art scheduling policies used in broadcast systems. In on-demand broadcasting, clients explicitly send requests for data items of interest to the broadcasting server. The server aggregates requests for the same data item together, and schedules them for dissemination only once (e.g., [4, 5, 6]). On the other hand, in aperiodic push-based broadcasting, the server initiates the data transfer without client intervention each time an event, such as a data item update, occurs at the server (e.g., [23]). Hence, in this case the server is aware of client data interests through data subscriptions registered on the server. The main advantage of on-demand broadcasting is that the client makes the server aware of its data interests directly. The disadvantage is that the server performance degrades when the number of concurrent requests issued to the server grows significantly. On the other hand, the advantage of push-based broadcasting is that it scales well with an increasing number of clients listening to the broadcast channel. On the downside, the server is not always up to date with the client data preferences, and hence may broadcast useless data on the channel. The contributions of this work are summarized as follows:

1. A new SLA-Aware Adaptive Broadcast scheduling policy called SAAB for optimizing the tardiness incurred in data dissemination in wireless environments (Chapter 3).

2. An extended version of SAAB for maximizing the utility of data broadcast servers (Chapter 3).

3. An extensive experimental evaluation of SAAB and related scheduling policies for both on-demand and push-based broadcasting, which shows that SAAB significantly outperforms existing policies for most workload conditions (Chapter 5).

The rest of the dissertation is organized as follows. Chapter 2 presents the background material necessary to place our scheduling technique in context. Chapter 3 introduces SAAB, our scheduling technique for optimizing data broadcasting. Chapter 4 describes the experimental setup. Chapter 5 presents our experimental results. Related work is presented in Chapter 6. Finally, Chapter 7 concludes the dissertation.

Chapter 2

Background

In this chapter we present the background material necessary to introduce our scheduling technique. First, we introduce the data broadcasting models considered in our work. Then, we continue with the state-of-the-art scheduling policies used in data broadcasting.

2.1 Broadcasting Models

In the following, we describe the broadcasting models, specifically the on-demand and the push-based models. We present their functionality, the parameters of interest, and our assumptions. Finally, we provide a summary which reinforces the differences between these two models.

2.1.1 On-demand Data Broadcasting

In this model we assume a typical on-demand data broadcasting environment where there is a single broadcast server providing information to multiple mobile clients. The communication between the server and the clients is established through both an uplink and a downlink data channel. Specifically, clients send requests for data items on the uplink channel, whereas the server disseminates the corresponding data items on the downlink (broadcast) channel.

Figure 2.1: On-demand Data Broadcasting

A typical on-demand broadcasting server is illustrated in Figure 2.1. Here, we can also notice the server's main components: the database, the request queue, and the scheduler. We discuss each of these components in more detail next. The server maintains a database of N data items <I_1, I_2, ..., I_N>, where each data item I_x has a unique size L_x. We consider a dynamic database where a data item is updated in the database each time its corresponding data source changes. Moreover, we assume that the size of a data item remains the same across updates. However, in this broadcasting model the server disseminates only the current version of the data item to the requesting client (i.e., the version at broadcast time). A request R_i issued by a client for a certain data item is characterized by the following parameters:

Figure 2.2: Example of the Scheduling Queue

1. Data Item (I_i): is the data item corresponding to request R_i,

2. Arrival Time (A_i): is the point in time when request R_i is issued, and

3. Deadline (D_i): is the soft deadline associated with request R_i.

There are multiple alternatives for setting the soft deadline D_i, which can be broadly categorized as either static or dynamic. In the static case, D_i is set according to a given quality contract established between the user and the service provider. For instance, the contract might state that a stock quote retrieval request should be satisfied within one second. In the dynamic case, D_i is set according to the user's current need for data. For instance, the time when the user is expected to reach a highway exit would act as a deadline for the exit-related information. As data requests arrive at the server, they are queued internally in a scheduling queue. In the scheduling queue, multiple requests for the same data item I_j are aggregated together. In particular, each entry in the scheduling queue corresponds to a single data item in the database, where each entry is represented as follows:

1. Data Item (I_j): is the requested data item,

2. Service Time (C_j): is the time required for transmitting data item I_j on the downlink channel,

Figure 2.3: Notations associated with request R_i

3. Popularity (P_j): is the number of pending requests for data item I_j, and

4. Request Vector (R_j): is a request vector of length P_j, where each entry in R_j corresponds to one of the P_j pending requests for item I_j (i.e., R_j = <R_{j,1}, R_{j,2}, ..., R_{j,P_j}>).

Figure 2.2 shows a sample scheduling queue which contains 3 data items with pending requests. For instance, the service time of data item I_5 is 10 time units and it has 5 pending requests <R_{5,1}, ..., R_{5,5}>. The time when request R_i is satisfied is denoted as its finish time (F_i). That is the time when the user finishes downloading the data item I_i requested in R_i. Ideally, the finish time F_i should be less than or equal to the deadline D_i. Equivalently, the ideal response time for request R_i is Ê_i = D_i − A_i. However, request R_i might experience queuing delays at the server, delaying the finish time of R_i beyond D_i and resulting in a perceived response time E_i = F_i − A_i (as shown in Figure 2.3). The delay beyond the request deadline, denoted as T_i in Figure 2.3, represents the tardiness of request R_i, and is formally defined as: T_i = 0 if F_i ≤ D_i, and T_i = F_i − D_i otherwise. All of the above parameters are also summarized in Table 2.1.
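To make the notation above concrete, the following is a minimal sketch (not part of the thesis) of how a client request, an aggregated scheduling-queue entry, and the response-time and tardiness definitions could be represented in a simulator; the class and function names are hypothetical.

```python
# Hypothetical sketch of the on-demand notation: a request (I_i, A_i, D_i), an
# aggregated queue entry (I_j, C_j, P_j, R_j), and the definitions
# E_i = F_i - A_i and T_i = max(0, F_i - D_i).
from dataclasses import dataclass, field
from typing import List


@dataclass
class Request:
    item_id: int       # I_i: requested data item
    arrival: float     # A_i: time the request is issued
    deadline: float    # D_i: soft deadline


@dataclass
class QueueEntry:
    item_id: int                                           # I_j: data item
    service_time: float                                    # C_j: transmission time
    requests: List[Request] = field(default_factory=list)  # R_j, so P_j = len(requests)

    @property
    def popularity(self) -> int:                           # P_j
        return len(self.requests)


def response_time(finish_time: float, req: Request) -> float:
    """E_i = F_i - A_i."""
    return finish_time - req.arrival


def tardiness(finish_time: float, req: Request) -> float:
    """T_i = 0 if F_i <= D_i, and T_i = F_i - D_i otherwise."""
    return max(0.0, finish_time - req.deadline)
```

Later sketches in this chapter and the next reuse these two structures when illustrating the priority functions.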

Table 2.1: Notations in On-demand Data Broadcasting

Notation   Description
R_i        Request
I_i        Data item requested by R_i
A_i        Arrival time of request R_i
D_i        Deadline of request R_i
E_i        Response time of request R_i
Ê_i        Ideal response time of request R_i
F_i        Finish time of request R_i
T_i        Tardiness of request R_i
N          Database size
C_i        Service time of item I_i
P_i        Popularity of item I_i
R_i        Request vector of item I_i

2.1.2 Push-based Data Broadcasting

In our second model, we assume an aperiodic push-based data broadcasting environment where there is a single broadcast server providing information to multiple mobile clients. In contrast with the on-demand model, the clients communicate their data interests to the server through subscriptions stored in client profiles, while the server initiates a data transfer every time an event, such as a data item update, occurs at the database. Specifically, the server broadcasts a data item update on the downlink channel provided that at least one subscription for that data item exists on the server. Figure 2.4 illustrates such an aperiodic push-based broadcasting system. As can be noticed, there is no uplink data channel available from the clients to the server. The update is initiated by the data source, and the broadcast server uses the client profiles in order to match the data item updates that have to be broadcast to the mobile clients.

Figure 2.4: Push-based Data Broadcasting

The server maintains a database of N data items <I_1, I_2, ..., I_N>, where each data item I_x has a unique size L_x. Similar to the on-demand model, we consider a dynamic database, where a data item is updated in the database each time its corresponding source value changes. Moreover, we assume that the size of the data items remains the same across updates. As opposed to the on-demand model, in this broadcasting model each version of a data item is disseminated on the broadcast channel. In push-based broadcasting, the data item update replaces the client request from on-demand broadcasting. Hence, an update U_j for a particular data item is characterized by the following parameters:

1. Data Item (I_j): is the data item corresponding to the update U_j,

2. Update Arrival Time (Ua_j): is the point in time when the data item update becomes available to the database.

We assume that client profiles are registered/unregistered on the broadcast server through a given method (e.g., contacting the service provider through a web interface). Further, each of these profiles registered on the broadcast server contains several data subscriptions. A subscription Sub_k for a particular data item is characterized by the following parameters:

1. Data Item (I_k): is the data item corresponding to the subscription Sub_k.

2. Relative Deadline (D_k^rel): is the soft deadline associated with the subscription Sub_k, relative to each update arrival time of data item I_k. The relative deadline represents the time limit for disseminating each data item update to the subscriber, immediately after it becomes available to the database, without incurring any penalties.

In contrast with the on-demand model, in the push-based model the deadlines are mostly static, being set according to a quality contract established between the user and the service provider. For instance, the contract might state that a stock quote update should be disseminated to the subscribed user within one second. In order to use the same data structures as in Section 2.1.1, we can view each subscription Sub_k matching a data item update U_j as a data update-request R_i, which represents the implicit request made by the client subscriber to receive the update U_j. A similar notion of an implicit request R_i is presented in [23], in the context of multicasting a changing repository to subscribed clients. The data update-request R_i is characterized by the following parameters:

1. Data Item (I_i): is the data item corresponding to both the update and the subscription.

2. Arrival Time (A_i): is the same as the arrival time of the update U_j.

3. Deadline (D_i): is the deadline of the update-request. Formally, it is defined as the sum of the arrival time of the update U_j and the relative deadline of the subscription Sub_k: D_i = Ua_j + D_k^rel.

R_i is to some extent similar to the user request from the classical on-demand model; the difference is that the update-request is generated on the server, without user intervention, whenever an update for the subscribed data is available. Moreover, because all the current subscriptions for a particular data item are known at the time of a data item update, a number of update-requests equal to the number of subscriptions is generated, with their arrival times equal to the arrival time of the update. As data updates arrive at the server, they are queued internally in a scheduling queue in order to be broadcast to the subscribers. In particular, each entry in the scheduling queue is represented as follows:

1. Update (U_j): is the update for data item I_j.

2. Service Time (C_j): is the service time required for transmitting the data item update on the downlink channel,

3. Popularity (P_j): is the number of subscriptions for data item I_j, and

4. Update-Request Vector (U_j): is a vector of length P_j, where each entry in U_j represents one of the P_j update-requests corresponding to the update U_j (i.e., U_j = <R_{j,1}, R_{j,2}, ..., R_{j,P_j}>).

The time when data item update U_j is satisfied represents the update finish time Uf_j. Additionally, in the push-based model the finish times of all update-requests corresponding to the update U_j are the same and equal to the update finish time Uf_j.
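As an illustration of this mapping, here is a minimal sketch (not part of the thesis) that turns one data item update plus its matching subscriptions into implicit update-requests with A_i = Ua_j and D_i = Ua_j + D_k^rel; all names are hypothetical, and the Request structure mirrors the earlier on-demand sketch.

```python
# Hypothetical sketch: deriving implicit update-requests from an update U_j and
# the subscriptions registered for the same data item.
from dataclasses import dataclass
from typing import List


@dataclass
class Request:                 # same shape as in the on-demand sketch
    item_id: int
    arrival: float             # A_i
    deadline: float            # D_i


@dataclass
class Update:
    item_id: int               # I_j: updated data item
    arrival: float             # Ua_j: time the update reaches the database


@dataclass
class Subscription:
    item_id: int               # I_k: subscribed data item
    relative_deadline: float   # D_k^rel: allowed dissemination delay per update


def update_requests(update: Update, subscriptions: List[Subscription]) -> List[Request]:
    """One implicit update-request per subscription matching the updated item."""
    return [
        Request(
            item_id=update.item_id,
            arrival=update.arrival,                           # A_i = Ua_j
            deadline=update.arrival + sub.relative_deadline,  # D_i = Ua_j + D_k^rel
        )
        for sub in subscriptions
        if sub.item_id == update.item_id
    ]
```

Because all matching subscriptions are known when the update arrives, every generated request shares the same arrival time, which is exactly the aggregation property contrasted with the on-demand model in the summary below.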

Similar to the on-demand case, the finish time F_i of an update-request should be less than or equal to its deadline D_i. Equivalently, the ideal response time for an update-request R_i is Ê_i = D_i − A_i. However, the update-request might experience queuing delays at the server, delaying the finish time of the update-request beyond its deadline D_i and resulting in an actual response time E_i = F_i − A_i. Finally, in order to receive updates we assume that the interested clients actively listen to the broadcast channel. Moreover, new clients that have just tuned into the channel should use other methods (such as unicast) if they require the current version of a particular data item before a corresponding update is broadcast over the channel.

2.1.3 Summary: On-demand vs. Push-based

In summary, on-demand broadcasting differs from push-based broadcasting as follows:

- Data aggregation takes place incrementally in the on-demand model, while the data item entry is still in the scheduling queue waiting to be broadcast. On the contrary, in the push-based model all the current subscriptions for the updated data item are known at the time of the update. Hence, data aggregation is inherently included in the push-based model.

- The on-demand request is mapped to the update-request in the push-based model. The update-request was mainly introduced to reuse the same data structures as in the on-demand model. The arrival times of all the update-requests corresponding to an update U_i are the same and equal to the arrival time of the update, Ua_i. Similarly, their finish times are all the same and equal to the finish time of the update, Uf_i.

- The push-based model may be seen, to some extent, as a particular case of the on-demand model, where all the possible requests for a particular data item I_j are issued at the same time.

- The on-demand model disseminates only the current version of the requested data item to its clients, while the push-based model disseminates all the data item updates to its subscribers, as updates come in.

The scheduling algorithms we present in this work are independent of the broadcast model adopted. Hence, for simplicity, throughout the rest of the dissertation we use the terminology and notations corresponding to the on-demand model only.

2.2 Scheduling Policies in Data Broadcast Systems

In this section we present several broadcast scheduling policies employed in data broadcast systems. First, we discuss four traditional scheduling policies that prioritize data items based on one of their defining parameters (i.e., service time, deadline, etc.). Then, we present examples of more specialized scheduling policies which extend these policies in order to incorporate information about the popularity of data items and optimize performance in terms of response time or drop rate.

2.2.1 Basic Scheduling Policies

In a broadcast data server, a request scheduler decides the transmission order of requested data items with the goal of optimizing a certain performance metric. Priority scheduling is one class of schedulers popular in data broadcasting that decides the scheduling order of the data items based on some prioritizing policy. More specifically, each data item from the scheduling queue is assigned a priority according to certain criteria. Hence, whenever more than one data item is being requested, the data item with the highest priority is served first. Several request scheduling policies have been proposed in the literature. However, they all extend or integrate one or more of the following basic policies: 1) Shortest Job First (SJF), 2) Earliest Deadline First (EDF), 3) Most Requests First (MRF), and 4) First Come First Served (FCFS).

Under these policies, a scheduling point is reached whenever a data item is transmitted. At that point, the priority of each item with pending requests is computed according to the policy used, and the one with the highest priority is served first. Formally, we define the priority functions for these policies as follows:

1. SJF: Each data item I_i is assigned a priority equal to 1/C_i, where C_i is the service time (i.e., transmission cost) of I_i. Hence, SJF prioritizes the data item with the smallest service time.

2. EDF: Each data item I_i is assigned a priority equal to 1/D_i, where D_i is the minimum deadline among all the requests for I_i. Thus, EDF prioritizes the data item with the earliest deadline.

3. MRF: Each data item I_i is assigned a priority equal to P_i, where P_i is the number of pending requests for I_i. Hence, MRF prioritizes the data item with the highest number of pending requests.

4. FCFS: Each data item I_i is assigned a priority equal to W_i, where W_i is the waiting time of the oldest outstanding request for item I_i. Hence, FCFS prioritizes the data item with the oldest outstanding request.

2.2.2 Specialized Scheduling Policies

Previous work has extended the basic scheduling policies either to: 1) optimize response time, or 2) minimize drop rate. In the following, we present examples of such schedulers. For instance, with the goal of optimizing response time, Aksoy et al. propose the R×W policy, which is a combination of MRF and FCFS [5]. Similarly, a policy which combines the benefits of MRF and SJF has appeared in [25, 10]. In this work, we will refer to that policy as Weighted Shortest Job First (W-SJF). Formally, we define the priority functions of these two policies as follows:

R×W: Each data item I_i is assigned a priority equal to P_i × W_i, where P_i is the number of pending requests for I_i and W_i is the waiting time of the oldest outstanding request for item I_i. Thus, R×W prioritizes a data item either because it is highly popular or because it has a long outstanding request.

W-SJF: Each data item I_i is assigned a priority equal to P_i/C_i, where P_i is the number of pending requests for I_i and C_i is its service time. Hence, W-SJF prioritizes a data item either because it is highly popular or because it has a small service time.

Similarly, several policies have been proposed with the objective of minimizing drop rate. These policies try to satisfy requests within their corresponding hard deadlines and drop the requests which are prone to missing their deadlines. Examples of such policies include Slack time Inverse Number of pending requests (SIN) [36], which combines the EDF policy with MRF. Similarly, Multi Attribute Integration (MAI) [11] and Preemptive Request Deadline Size (PRDS) [24] combine the EDF policy with MRF and SJF. More specifically, the priority functions of these policies are defined as follows:

27 Chapter 2. Background 18 PRDS: Each data item I i is assigned a priority equal to P i /(D i C i ), with the parameters similar as above. Thus, PRDS prioritizes a data item either because it is highly popular, it has a small deadline or it has a small service time. The difference compared with MAI, stands in replacing the slack S i with the minimum deadline D i. However, none of the above scheduling policies targets to optimize performance under SLAs based on soft-deadlines. We show that some of these schedulers are sometimes also successful in optimizing performance under SLAs for some workload conditions but not all. Hence, in the following chapter we propose SAAB, an adaptive scheduling policy that optimizes performance of the broadcast schedule when requests are associated with soft-deadlines.

Chapter 3

SLA-Aware Adaptive Broadcast Scheduling

In this chapter we introduce SAAB, our SLA-Aware Adaptive Broadcast scheduling policy, which optimizes the performance of data broadcasting under SLA-based measures. First, we introduce our scheduling policy that minimizes the tardiness of the broadcast schedule. Then, we extend this policy with the goal of maximizing the system utility.

3.1 SAAB with Tardiness-Aware Scheduling

In the following, we first present the tardiness-based performance metric, then we discuss the problem of optimizing data broadcasting under this metric, and finally we present our SAAB request scheduling policy for optimizing tardiness.

3.1.1 The Tardiness Metric

Typically, the broadcast server strives to satisfy each request R_i before its deadline D_i. However, if R_i cannot meet its deadline, the system will still satisfy it, but it will incur some tardiness T_i which accounts for the amount of deviation from the pre-specified SLA (i.e., deadline).

Formally, the tardiness metric is defined as follows: T_i = 0 if F_i ≤ D_i, and T_i = F_i − D_i otherwise, where F_i is the finish time of request R_i (as illustrated in Figure 2.3). In contrast to the scheduling policies presented in Section 2.2, in our work we focus on soft-deadline-based metrics. This includes the tardiness metric defined above and the more general utility metric defined later in Section 3.2. Towards optimizing tardiness, it has been shown that EDF outperforms SJF at low system utilization, whereas at high utilization SJF significantly outperforms EDF [32, 15]. This has been observed in the context of transaction scheduling, where each transaction is submitted by a single user. However, in the data broadcast scheduling problem addressed in this work, a data item is typically requested by several users. This request aggregation is efficient since it allows for fewer data broadcasts, which saves bandwidth and reduces delays. However, in the presence of SLAs, different requests for the same data item might have different deadlines, which are often in conflict. SAAB exploits such variability in deadlines in order to maximize the overall performance, as explained next.

3.1.2 Tardiness-Aware SAAB

In the following we introduce the SAAB scheduling policy for optimizing tardiness. Towards explaining its functionality, we describe the steps followed in obtaining its priority function. First, we obtain the priority function for the particular case where there are only two data items in the scheduling queue. Then, we adapt this priority function to the general case where there are multiple data items in the scheduling queue. Finally, we present an analysis that clarifies the intuition underlying our proposed SAAB policy. In order to describe our SAAB scheduling policy, let us first assume an entry in the scheduling queue for data item I_x, where I_x has a transmission cost C_x. Further, assume there is some pending request R_x for that data item I_x with deadline D_x. Finally, let S_x be the current slack of request R_x if it is served immediately.

Formally, the slack S_x of request R_x at the current time t is the amount of time between R_x's deadline and its finish time if R_x is scheduled immediately (at the current time t). Hence, S_x = D_x − (t + C_x). However, in the general case data item I_x will have P_x pending requests. Hence, the slacks for these requests are <S_{x,1}, S_{x,2}, ..., S_{x,P_x}> and they are computed similarly. In order to decide if I_x should be scheduled for transmission, SAAB assigns I_x a priority value which is computed according to a certain priority function. To illustrate the priority function employed by SAAB, let us assume two data items I_1 and I_2 with transmission costs C_1 and C_2, respectively. Further, assume that the number of pending requests for item I_1 is P_1 and the number of pending requests for item I_2 is P_2. Hence, the slacks for I_1 are <S_{1,1}, S_{1,2}, ..., S_{1,P_1}>. Similarly, the slacks for I_2 are <S_{2,1}, S_{2,2}, ..., S_{2,P_2}>. Finally, assume a schedule X where I_1 is scheduled for transmission first and then I_2, and an alternative schedule Y where I_2 is scheduled first and then I_1. For X to be the schedule of choice, the tardiness provided by X should be less than that of Y. In the following, we show how to compute the tardiness for X and Y, and how to utilize that computation for designing our priority function. The tardiness of a request R_i depends on its current slack S_i as follows:

1. If S_i < 0, then R_i will pass its deadline even if it is served immediately, causing R_i to accumulate a tardiness T_i = −S_i.

2. If S_i ≥ 0, then R_i will meet its deadline if it is served immediately, causing R_i to finish S_i time units before its deadline (i.e., T_i = 0).

Under schedule X, where I_1 is followed by I_2, the total tardiness of requests for data item I_1 is simply the sum of the tardiness of all requests which currently have a negative slack. These are the requests which are tardy by default at the time when the scheduling decision is made. Let us assume that the number of these requests is P_{1,def}, where 0 ≤ P_{1,def} ≤ P_1.

Hence, the tardiness of requests for item I_1 under schedule X is:

T_{1,X} = Σ_{j=1}^{P_{1,def}} (−S_{1,j})

When I_1 is scheduled first, each request for I_2 is delayed by the amount of time needed to transmit I_1 (i.e., C_1). Hence, scheduling item I_1 first will affect the tardiness of the requests for item I_2 as follows:

1. Increasing the tardiness of requests which were already tardy by default at the time when the scheduling decision was made (i.e., P_{2,def}),

2. Rendering additional requests for I_2 tardy. These are the requests which had positive slack when the scheduling decision was made, but whose slack became negative while waiting for the transmission of I_1. Let us assume the number of those additional requests is P_{2,add}.

Accordingly, the total tardiness of requests for data item I_2 is simply the sum over all requests which will have a negative slack after transmitting I_1 (i.e., P_{2,def} + P_{2,add}), where 0 ≤ P_{2,def} + P_{2,add} ≤ P_2. Hence, the tardiness of requests for item I_2 under schedule X is:

T_{2,X} = Σ_{j=1}^{P_{2,def}+P_{2,add}} (C_1 − S_{2,j})

Accordingly, the total tardiness (T_X) under schedule X is computed as follows:

T_X = Σ_{j=1}^{P_{1,def}} (−S_{1,j}) + Σ_{j=1}^{P_{2,def}+P_{2,add}} (C_1 − S_{2,j})

In a similar manner, under schedule Y, where item I_2 is broadcast before item I_1, the total tardiness is computed as follows:

T_Y = Σ_{j=1}^{P_{2,def}} (−S_{2,j}) + Σ_{j=1}^{P_{1,def}+P_{1,add}} (C_2 − S_{1,j})

For T_X to be less than T_Y, after applying several computations we obtain that:

(1/C_1) × (P_{1,def} + P_{1,add} − Σ_{j=1}^{P_{1,add}} S_{1,j}/C_2)

has to be greater than:

(1/C_2) × (P_{2,def} + P_{2,add} − Σ_{j=1}^{P_{2,add}} S_{2,j}/C_1)

The above inequality allows us to compare the priorities of items I_1 and I_2. In general, in order to compare item I_i with any arbitrary item from the scheduling queue, we substitute C_2 with C̄, where C̄ is the average transmission cost of a data item. Hence, we obtain the priority:

Pr_i = (1/C_i) × (P_{i,def} + P_{i,add} − Σ_{j=1}^{P_{i,add}} S_{i,j}/C̄)

where P_{i,add} is the number of requests for item I_i where S_{i,j} − C̄ < 0. In the above priority function, C̄ represents the delay incurred by I_i if another single arbitrary data item is scheduled first. However, in the general case, we want to compare I_i against all the data items with pending requests. Hence, our general priority function is defined as:

Pr_i = (1/C_i) × (P_{i,def} + P_{i,add} − Σ_{j=1}^{P_{i,add}} S_{i,j}/((n − 1) × C̄))     (3.1)

where n is the number of data items with pending requests, (n − 1) × C̄ is the total delay incurred by requests for I_i if the other n − 1 data items from the queue are scheduled first, and P_{i,add} is the number of requests for item I_i where S_{i,j} − (n − 1) × C̄ < 0 (i.e., S_{i,j}/((n − 1) × C̄) < 1).
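The following is a minimal sketch (not part of the thesis) of how the priority of Eq. 3.1 could be computed over the QueueEntry structure from the Chapter 2 sketch; the function name, the handling of a single-item queue, and the use of the queue average as C̄ are assumptions.

```python
# Hypothetical sketch of the tardiness-aware SAAB priority of Eq. 3.1.
def saab_tardiness_priority(entry, queue, now):
    """Pr_i = (1/C_i) * (P_def + P_add - sum_{j in add} S_j / ((n-1) * C_mean))."""
    n = len(queue)                                    # items with pending requests
    c_mean = sum(e.service_time for e in queue) / n   # average transmission cost C-bar
    total_delay = max(n - 1, 1) * c_mean              # delay if all other items go first

    # Slack of each pending request if the item were transmitted right now:
    # S_{i,j} = D_{i,j} - (now + C_i).
    slacks = [r.deadline - (now + entry.service_time) for r in entry.requests]

    p_def = sum(1 for s in slacks if s < 0)                   # already tardy
    add_slacks = [s for s in slacks if 0 <= s < total_delay]  # would become tardy
    p_add = len(add_slacks)

    return (p_def + p_add - sum(add_slacks) / total_delay) / entry.service_time
```

In the two limiting cases discussed in the analysis below, this value reduces to P_i/C_i (all requests already tardy) and to 0 (enough slack for every other item to go first), matching the W-SJF-like and EDF-like behaviour of SAAB.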

Analysis: To understand the intuition underlying our proposed SAAB policy, consider the following two situations:

1. P_{i,def} = P_i: That is, all requests for item I_i are tardy by default (i.e., tardy at the scheduling point). In such a case, Pr_i = P_i/C_i, which is the maximum priority value assigned to I_i.

2. P_{i,def} + P_{i,add} = 0: That is, no requests for item I_i are tardy by default (i.e., P_{i,def} = 0) and no additional requests for I_i will become tardy if the other n − 1 items in the request queue are scheduled first (i.e., P_{i,add} = 0). In such a case, Pr_i = 0, which is the minimum priority value assigned to I_i.

Notice that in the first case above, the priority assigned to an item under SAAB is similar to that assigned under Weighted Shortest Job First (W-SJF), which is known to optimize tardiness under high utilization, where most requests miss their deadlines. In the second case, SAAB assigns a priority of zero to a data item I_i that has enough slack to allow all other items in the queue to be scheduled first without adding any tardiness to requests for I_i. In this case the SAAB policy has a similar effect as EDF, which is known to optimize tardiness under low utilization, where request slacks are large enough to achieve an optimal ordering of requests without accumulating any tardiness. In between those two extreme cases, SAAB assigns priority values according to the current slacks available for each item and according to the competition from other items in the queue, which in turn allows SAAB to dynamically adapt to all kinds of workload conditions.

3.2 SAAB with Utility-Aware Scheduling

In this section, we first present the utility-based performance metric, then we discuss the relationship between tardiness and utility as two forms of SLA, and finally we present our SAAB scheduling policy for maximizing utility.

3.2.1 The Utility Metric

This general form of SLA allows users to define the utility of requested data as a function of response time. Typically, such a utility function consists of two components: the first component specifies the maximum utility when data is received before its deadline, and the second component specifies a monotonically decreasing function which defines the utility of data received after its deadline. For instance, consider the utility function described in [21] and illustrated in Figure 3.1. For that utility function, the user expects the response time E_i for request R_i to be less than or equal to Ê_i, where Ê_i = D_i − A_i. Under such a function, the utility gained by the system when request R_i is satisfied is defined as:

u_i(E_i) = V_i                             if E_i ≤ Ê_i
u_i(E_i) = V_i × e^(α_i (E_i − Ê_i))       if E_i > Ê_i     (3.2)

Under such a utility function, the system gains the maximum value (i.e., V_i) as long as R_i is satisfied before its deadline D_i (or, equivalently, if E_i ≤ Ê_i). This gain decreases exponentially as E_i exceeds Ê_i. For instance, with α_i = ln 0.5/Ê_i, the utility for a request R_i decays by half for every Ê_i of increase in the response time.

3.2.2 Tardiness vs. Utility

The tardiness form of SLA presented in Section 3.1 represents a special case of the general utility form of SLA. In particular, under the tardiness metric, the broadcast system is not penalized as long as a request is satisfied before its deadline. However, if the data item is received after its deadline, the system is penalized, and that penalty increases linearly with the delay. For instance, consider a request R_i for data item I_i with arrival time A_i and deadline D_i. Hence, the expected response time for that request is Ê_i = D_i − A_i. The utility for such a request is illustrated in Figure 3.2 and formulated as:

Figure 3.1: Sample utility function

Figure 3.2: Tardiness expressed as utility function

u_i(E_i) = 0                     if E_i ≤ Ê_i
u_i(E_i) = −(E_i − Ê_i)          if E_i > Ê_i     (3.3)

As shown in Figure 3.2, accumulating tardiness resembles a negative utility value. In particular, the broadcast system gains the maximum utility (= 0) if the actual response time E_i of R_i is within the timeframe expected by the user (i.e., E_i ≤ Ê_i). However, the utility gained by the system decreases as E_i increases (i.e., the system is penalized). For instance, the system gains a utility value of −Ê_i if the request response time is 2Ê_i. Equivalently, the system is penalized Ê_i if the request response time is 2Ê_i. Establishing this relationship between tardiness and utility enables us to extend our SAAB scheduling policy towards the goal of optimizing utility, as explained in the next section.

3.2.3 Utility-Aware SAAB

In order to schedule for optimizing utility, we use a similar technique as in Section 3.1. In particular, let us assume two data items I_1 and I_2 with transmission costs C_1 and C_2, respectively. Further, let us assume that data item I_1 has P_1 pending requests (i.e., <R_{1,1}, R_{1,2}, ..., R_{1,P_1}>), where E_{1,j} is the response time of request R_{1,j} if I_1 is scheduled for broadcasting right now. Similarly, I_2 has P_2 pending requests (i.e., <R_{2,1}, R_{2,2}, ..., R_{2,P_2}>), where E_{2,j} is the response time of request R_{2,j} if I_2 is scheduled for broadcasting right now. Under a schedule X, where item I_1 is broadcast before item I_2, the response time of request R_{1,j} is E_{1,j}, since I_1 is scheduled immediately. However, each request for I_2 is delayed by the amount of time needed to transmit I_1 (i.e., C_1). Hence, the response time of each request for I_2 increases by C_1 time units. Accordingly, the total utility U_X under schedule X is computed as:

U_X = Σ_{j=1}^{P_1} u_{1,j}(E_{1,j}) + Σ_{j=1}^{P_2} u_{2,j}(C_1 + E_{2,j})

Similarly, under a schedule Y, where item I_2 is broadcast before item I_1, the total utility U_Y is computed as:

U_Y = Σ_{j=1}^{P_2} u_{2,j}(E_{2,j}) + Σ_{j=1}^{P_1} u_{1,j}(C_2 + E_{1,j})

For U_X to be greater than U_Y:

Σ_{j=1}^{P_1} u_{1,j}(E_{1,j}) − Σ_{j=1}^{P_1} u_{1,j}(C_2 + E_{1,j})

has to be greater than:

Σ_{j=1}^{P_2} u_{2,j}(E_{2,j}) − Σ_{j=1}^{P_2} u_{2,j}(C_1 + E_{2,j})

To achieve the general priority function, we extend the comparison to the general case where we compare an arbitrary item I_i with the other items in the scheduling queue, as explained earlier in Section 3.1. Accordingly, the general priority function for optimizing utility is defined as:

Pr_i = Σ_{j=1}^{P_i} u_{i,j}(E_{i,j}) − Σ_{j=1}^{P_i} u_{i,j}(E_{i,j} + (n − 1) × C̄)     (3.4)

where n is the number of data items with pending requests, and (n − 1) × C̄ is the total increase in response time incurred by each request for I_i if the other n − 1 data items in the queue are scheduled first. To understand the intuition underlying our SAAB policy for optimizing utility, notice that the priority function above (Eq. 3.4) computes the loss in utility incurred by I_i if all other data items in the queue are scheduled first. In particular, SAAB employs a greedy heuristic where it gives the highest priority to the data item which would be penalized the most if it were scheduled last.
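To illustrate Eq. 3.4, here is a minimal sketch (not part of the thesis) that pairs it with the exponential utility of Eq. 3.2; the names, the default V_i = 1, and the use of the queue average as C̄ are assumptions, and any monotonically decreasing utility function could be substituted.

```python
# Hypothetical sketch of the utility-aware SAAB priority of Eq. 3.4, using the
# exponential-decay utility of Eq. 3.2 and the QueueEntry/Request sketches above.
import math


def utility(resp_time, ideal_resp_time, v_max=1.0):
    """Eq. 3.2: full value V_i before the deadline, exponential decay after it,
    halving for every additional ideal_resp_time of delay."""
    if resp_time <= ideal_resp_time:
        return v_max
    alpha = math.log(0.5) / ideal_resp_time
    return v_max * math.exp(alpha * (resp_time - ideal_resp_time))


def saab_utility_priority(entry, queue, now):
    """Pr_i = sum_j u_ij(E_ij) - sum_j u_ij(E_ij + (n-1) * C_mean)."""
    n = len(queue)
    c_mean = sum(e.service_time for e in queue) / n
    extra_delay = max(n - 1, 1) * c_mean    # added delay if all other items go first

    loss = 0.0
    for r in entry.requests:
        resp_now = (now + entry.service_time) - r.arrival   # E_{i,j} if sent now
        ideal = r.deadline - r.arrival                      # ideal response time
        loss += utility(resp_now, ideal) - utility(resp_now + extra_delay, ideal)
    return loss
```

The entry with the largest loss, i.e., the one hurt most by waiting, would be broadcast next; substituting the linear penalty of Eq. 3.3 for `utility` makes the same greedy priority target the tardiness form of SLA instead.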

Chapter 4

Experimental Setup

We have created a simulator to model both an on-demand and a push-based data broadcasting server. In the following, we present the settings that we used in generating the workload, as well as the scheduling policies taken into consideration. Experimental results are presented in the next chapter.

Data items: We model a database of {10,000 or 100} data items where the size of each item is picked from the range [s_min, s_max] according to a Zipf distribution. The default size range is [50K, 150K] and the skewness parameter (θ_size) of the Zipf distribution is 1.0, resulting in more small-size items in the database. Additionally, the popularity of each item is set according to a Zipf distribution with a skewness parameter (θ_pop) of 0.5. By default, if not specified otherwise, there is no correlation between the popularity of a data item and its size.

Deadlines: As already specified in the model section, each request is associated with a soft deadline, where the deadline D_i for request R_i is set as:

D_i = A_i + (1 + SF_i) × C_i     (4.1)

where A_i is the arrival time of R_i, C_i is the service time of the data item requested by R_i, and SF_i is the slack factor parameter which determines the ratio between the initial


Scheduling Algorithms to Minimize Session Delays Scheduling Algorithms to Minimize Session Delays Nandita Dukkipati and David Gutierrez A Motivation I INTRODUCTION TCP flows constitute the majority of the traffic volume in the Internet today Most of

More information

Implementation Techniques

Implementation Techniques V Implementation Techniques 34 Efficient Evaluation of the Valid-Time Natural Join 35 Efficient Differential Timeslice Computation 36 R-Tree Based Indexing of Now-Relative Bitemporal Data 37 Light-Weight

More information

Device-to-Device Networking Meets Cellular via Network Coding

Device-to-Device Networking Meets Cellular via Network Coding Device-to-Device Networking Meets Cellular via Network Coding Yasaman Keshtkarjahromi, Student Member, IEEE, Hulya Seferoglu, Member, IEEE, Rashid Ansari, Fellow, IEEE, and Ashfaq Khokhar, Fellow, IEEE

More information

Dynamic Voltage Scaling of Periodic and Aperiodic Tasks in Priority-Driven Systems Λ

Dynamic Voltage Scaling of Periodic and Aperiodic Tasks in Priority-Driven Systems Λ Dynamic Voltage Scaling of Periodic and Aperiodic Tasks in Priority-Driven Systems Λ Dongkun Shin Jihong Kim School of CSE School of CSE Seoul National University Seoul National University Seoul, Korea

More information

Scheduling Real Time Parallel Structure on Cluster Computing with Possible Processor failures

Scheduling Real Time Parallel Structure on Cluster Computing with Possible Processor failures Scheduling Real Time Parallel Structure on Cluster Computing with Possible Processor failures Alaa Amin and Reda Ammar Computer Science and Eng. Dept. University of Connecticut Ayman El Dessouly Electronics

More information

Symphony: An Integrated Multimedia File System

Symphony: An Integrated Multimedia File System Symphony: An Integrated Multimedia File System Prashant J. Shenoy, Pawan Goyal, Sriram S. Rao, and Harrick M. Vin Distributed Multimedia Computing Laboratory Department of Computer Sciences, University

More information

A Simulation-Based Analysis of Scheduling Policies for Multimedia Servers

A Simulation-Based Analysis of Scheduling Policies for Multimedia Servers A Simulation-Based Analysis of Scheduling Policies for Multimedia Servers Nabil J. Sarhan Chita R. Das Department of Computer Science and Engineering The Pennsylvania State University University Park,

More information

On Improving the Performance of Cache Invalidation in Mobile Environments

On Improving the Performance of Cache Invalidation in Mobile Environments Mobile Networks and Applications 7, 291 303, 2002 2002 Kluwer Academic Publishers. Manufactured in The Netherlands. On Improving the Performance of Cache Invalidation in Mobile Environments GUOHONG CAO

More information

A Modified Maximum Urgency First Scheduling Algorithm for Real-Time Tasks

A Modified Maximum Urgency First Scheduling Algorithm for Real-Time Tasks Vol:, o:9, 2007 A Modified Maximum Urgency irst Scheduling Algorithm for Real-Time Tasks Vahid Salmani, Saman Taghavi Zargar, and Mahmoud aghibzadeh International Science Index, Computer and Information

More information

Computer Science 4500 Operating Systems

Computer Science 4500 Operating Systems Computer Science 4500 Operating Systems Module 6 Process Scheduling Methods Updated: September 25, 2014 2008 Stanley A. Wileman, Jr. Operating Systems Slide 1 1 In This Module Batch and interactive workloads

More information

Improving Internet Performance through Traffic Managers

Improving Internet Performance through Traffic Managers Improving Internet Performance through Traffic Managers Ibrahim Matta Computer Science Department Boston University Computer Science A Glimpse of Current Internet b b b b Alice c TCP b (Transmission Control

More information

Delay-minimal Transmission for Energy Constrained Wireless Communications

Delay-minimal Transmission for Energy Constrained Wireless Communications Delay-minimal Transmission for Energy Constrained Wireless Communications Jing Yang Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College Park, M0742 yangjing@umd.edu

More information

Precedence Graphs Revisited (Again)

Precedence Graphs Revisited (Again) Precedence Graphs Revisited (Again) [i,i+6) [i+6,i+12) T 2 [i,i+6) [i+6,i+12) T 3 [i,i+2) [i+2,i+4) [i+4,i+6) [i+6,i+8) T 4 [i,i+1) [i+1,i+2) [i+2,i+3) [i+3,i+4) [i+4,i+5) [i+5,i+6) [i+6,i+7) T 5 [i,i+1)

More information

CPU Scheduling. CSE 2431: Introduction to Operating Systems Reading: Chapter 6, [OSC] (except Sections )

CPU Scheduling. CSE 2431: Introduction to Operating Systems Reading: Chapter 6, [OSC] (except Sections ) CPU Scheduling CSE 2431: Introduction to Operating Systems Reading: Chapter 6, [OSC] (except Sections 6.7.2 6.8) 1 Contents Why Scheduling? Basic Concepts of Scheduling Scheduling Criteria A Basic Scheduling

More information

3 No-Wait Job Shops with Variable Processing Times

3 No-Wait Job Shops with Variable Processing Times 3 No-Wait Job Shops with Variable Processing Times In this chapter we assume that, on top of the classical no-wait job shop setting, we are given a set of processing times for each operation. We may select

More information

Telecommunication Services Engineering Lab. Roch H. Glitho

Telecommunication Services Engineering Lab. Roch H. Glitho 1 Quality of Services 1. Terminology 2. Technologies 2 Terminology Quality of service Ability to control network performance in order to meet application and/or end-user requirements Examples of parameters

More information

Overview Computer Networking What is QoS? Queuing discipline and scheduling. Traffic Enforcement. Integrated services

Overview Computer Networking What is QoS? Queuing discipline and scheduling. Traffic Enforcement. Integrated services Overview 15-441 15-441 Computer Networking 15-641 Lecture 19 Queue Management and Quality of Service Peter Steenkiste Fall 2016 www.cs.cmu.edu/~prs/15-441-f16 What is QoS? Queuing discipline and scheduling

More information

Efficient Dynamic Multilevel Priority Task Scheduling For Wireless Sensor Networks

Efficient Dynamic Multilevel Priority Task Scheduling For Wireless Sensor Networks Efficient Dynamic Multilevel Priority Task Scheduling For Wireless Sensor Networks Mrs.K.Indumathi 1, Mrs. M. Santhi 2 M.E II year, Department of CSE, Sri Subramanya College Of Engineering and Technology,

More information

Unit 3 : Process Management

Unit 3 : Process Management Unit : Process Management Processes are the most widely used units of computation in programming and systems, although object and threads are becoming more prominent in contemporary systems. Process management

More information

Deadline Guaranteed Service for Multi- Tenant Cloud Storage Guoxin Liu and Haiying Shen

Deadline Guaranteed Service for Multi- Tenant Cloud Storage Guoxin Liu and Haiying Shen Deadline Guaranteed Service for Multi- Tenant Cloud Storage Guoxin Liu and Haiying Shen Presenter: Haiying Shen Associate professor *Department of Electrical and Computer Engineering, Clemson University,

More information

Improving the Data Scheduling Efficiency of the IEEE (d) Mesh Network

Improving the Data Scheduling Efficiency of the IEEE (d) Mesh Network Improving the Data Scheduling Efficiency of the IEEE 802.16(d) Mesh Network Shie-Yuan Wang Email: shieyuan@csie.nctu.edu.tw Chih-Che Lin Email: jclin@csie.nctu.edu.tw Ku-Han Fang Email: khfang@csie.nctu.edu.tw

More information

Operating Systems. Scheduling

Operating Systems. Scheduling Operating Systems Scheduling Process States Blocking operation Running Exit Terminated (initiate I/O, down on semaphore, etc.) Waiting Preempted Picked by scheduler Event arrived (I/O complete, semaphore

More information

A NEW ALGORITHM FOR JITTER CONTROL IN WIRELESS NETWORKS FOR QUALITY OF SERVICE

A NEW ALGORITHM FOR JITTER CONTROL IN WIRELESS NETWORKS FOR QUALITY OF SERVICE Int. J. Elec&Electr.Eng&Telecoms. 2013 B Santosh Kumar et al., 2013 Research Paper ISSN 2319 2518 www.ijeetc.com Vol. 2, No. 2, April 2013 2013 IJEETC. All Rights Reserved A NEW ALGORITHM FOR JITTER CONTROL

More information

A Study of the Performance Tradeoffs of a Tape Archive

A Study of the Performance Tradeoffs of a Tape Archive A Study of the Performance Tradeoffs of a Tape Archive Jason Xie (jasonxie@cs.wisc.edu) Naveen Prakash (naveen@cs.wisc.edu) Vishal Kathuria (vishal@cs.wisc.edu) Computer Sciences Department University

More information

Fundamentals of Operations Research. Prof. G. Srinivasan. Department of Management Studies. Indian Institute of Technology, Madras. Lecture No.

Fundamentals of Operations Research. Prof. G. Srinivasan. Department of Management Studies. Indian Institute of Technology, Madras. Lecture No. Fundamentals of Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Lecture No. # 13 Transportation Problem, Methods for Initial Basic Feasible

More information

OVERHEADS ENHANCEMENT IN MUTIPLE PROCESSING SYSTEMS BY ANURAG REDDY GANKAT KARTHIK REDDY AKKATI

OVERHEADS ENHANCEMENT IN MUTIPLE PROCESSING SYSTEMS BY ANURAG REDDY GANKAT KARTHIK REDDY AKKATI CMPE 655- MULTIPLE PROCESSOR SYSTEMS OVERHEADS ENHANCEMENT IN MUTIPLE PROCESSING SYSTEMS BY ANURAG REDDY GANKAT KARTHIK REDDY AKKATI What is MULTI PROCESSING?? Multiprocessing is the coordinated processing

More information

Pull vs Push: A Quantitative Comparison for Data Broadcast

Pull vs Push: A Quantitative Comparison for Data Broadcast Pull vs Push: A Quantitative Comparison for Data Broadcast Demet Aksoy Mason Sin-Fai Leung Computer Science Department University of California, Davis (aksoy,leungm)@cs.ucdavis.edu Abstract Advances in

More information

Chapter 6: CPU Scheduling. Operating System Concepts 9 th Edition

Chapter 6: CPU Scheduling. Operating System Concepts 9 th Edition Chapter 6: CPU Scheduling Silberschatz, Galvin and Gagne 2013 Chapter 6: CPU Scheduling Basic Concepts Scheduling Criteria Scheduling Algorithms Thread Scheduling Multiple-Processor Scheduling Real-Time

More information

Event-Driven Scheduling. (closely following Jane Liu s Book)

Event-Driven Scheduling. (closely following Jane Liu s Book) Event-Driven Scheduling (closely following Jane Liu s Book) Real-Time Systems, 2006 Event-Driven Systems, 1 Principles Assign priorities to Jobs At events, jobs are scheduled according to their priorities

More information

Real-Time Scheduling. Dynamic Priority Servers

Real-Time Scheduling. Dynamic Priority Servers Real-Time Scheduling Dynamic Priority Servers Objectives Schedule soft aperiodic and hard periodic tasks Reduce average response time of aperiodic requests without compromising schedulability of periodic

More information

CERIAS Tech Report Autonomous Transaction Processing Using Data Dependency in Mobile Environments by I Chung, B Bhargava, M Mahoui, L Lilien

CERIAS Tech Report Autonomous Transaction Processing Using Data Dependency in Mobile Environments by I Chung, B Bhargava, M Mahoui, L Lilien CERIAS Tech Report 2003-56 Autonomous Transaction Processing Using Data Dependency in Mobile Environments by I Chung, B Bhargava, M Mahoui, L Lilien Center for Education and Research Information Assurance

More information

Fair, Effective, Efficient and Differentiated Scheduling in an Enterprise Data Warehouse

Fair, Effective, Efficient and Differentiated Scheduling in an Enterprise Data Warehouse Fair, Effective, Efficient and Differentiated Scheduling in an Enterprise Data Warehouse Chetan Gupta, Abhay Mehta, Song Wang, Umesh Dayal Hewlett-Packard Labs firstname.lastname@hp.com except for songw@hp.com

More information

OPERATING SYSTEM CONCEPTS UNDERSTAND!!! IMPLEMENT!!! ANALYZE!!!

OPERATING SYSTEM CONCEPTS UNDERSTAND!!! IMPLEMENT!!! ANALYZE!!! OPERATING SYSTEM CONCEPTS UNDERSTAND!!! IMPLEMENT!!! Processor Management Memory Management IO Management File Management Multiprogramming Protection and Security Network Management UNDERSTAND!!! IMPLEMENT!!!

More information

Performance and Waiting-Time Predictability Analysis of Design Options in Cost-Based Scheduling for Scalable Media Streaming

Performance and Waiting-Time Predictability Analysis of Design Options in Cost-Based Scheduling for Scalable Media Streaming Performance and Waiting-Time Predictability Analysis of Design Options in Cost-Based Scheduling for Scalable Media Streaming Mohammad A. Alsmirat and Nabil J. Sarhan Department of Electrical and Computer

More information

Resource Reservation & Resource Servers

Resource Reservation & Resource Servers Resource Reservation & Resource Servers Resource Reservation Application Hard real-time, Soft real-time, Others? Platform Hardware Resources: CPU cycles, memory blocks 1 Applications Hard-deadline tasks

More information

ANALYSIS OF THE CORRELATION BETWEEN PACKET LOSS AND NETWORK DELAY AND THEIR IMPACT IN THE PERFORMANCE OF SURGICAL TRAINING APPLICATIONS

ANALYSIS OF THE CORRELATION BETWEEN PACKET LOSS AND NETWORK DELAY AND THEIR IMPACT IN THE PERFORMANCE OF SURGICAL TRAINING APPLICATIONS ANALYSIS OF THE CORRELATION BETWEEN PACKET LOSS AND NETWORK DELAY AND THEIR IMPACT IN THE PERFORMANCE OF SURGICAL TRAINING APPLICATIONS JUAN CARLOS ARAGON SUMMIT STANFORD UNIVERSITY TABLE OF CONTENTS 1.

More information

Resource allocation in networks. Resource Allocation in Networks. Resource allocation

Resource allocation in networks. Resource Allocation in Networks. Resource allocation Resource allocation in networks Resource Allocation in Networks Very much like a resource allocation problem in operating systems How is it different? Resources and jobs are different Resources are buffers

More information

Maximization of Time-to-first-failure for Multicasting in Wireless Networks: Optimal Solution

Maximization of Time-to-first-failure for Multicasting in Wireless Networks: Optimal Solution Arindam K. Das, Mohamed El-Sharkawi, Robert J. Marks, Payman Arabshahi and Andrew Gray, "Maximization of Time-to-First-Failure for Multicasting in Wireless Networks : Optimal Solution", Military Communications

More information

Distributed minimum spanning tree problem

Distributed minimum spanning tree problem Distributed minimum spanning tree problem Juho-Kustaa Kangas 24th November 2012 Abstract Given a connected weighted undirected graph, the minimum spanning tree problem asks for a spanning subtree with

More information

Routing protocols in WSN

Routing protocols in WSN Routing protocols in WSN 1.1 WSN Routing Scheme Data collected by sensor nodes in a WSN is typically propagated toward a base station (gateway) that links the WSN with other networks where the data can

More information

On Distributed Algorithms for Maximizing the Network Lifetime in Wireless Sensor Networks

On Distributed Algorithms for Maximizing the Network Lifetime in Wireless Sensor Networks On Distributed Algorithms for Maximizing the Network Lifetime in Wireless Sensor Networks Akshaye Dhawan Georgia State University Atlanta, Ga 30303 akshaye@cs.gsu.edu Abstract A key challenge in Wireless

More information

CPU Scheduling. Operating Systems (Fall/Winter 2018) Yajin Zhou ( Zhejiang University

CPU Scheduling. Operating Systems (Fall/Winter 2018) Yajin Zhou (  Zhejiang University Operating Systems (Fall/Winter 2018) CPU Scheduling Yajin Zhou (http://yajin.org) Zhejiang University Acknowledgement: some pages are based on the slides from Zhi Wang(fsu). Review Motivation to use threads

More information

A Hierarchical Fair Service Curve Algorithm for Link-Sharing, Real-Time, and Priority Services

A Hierarchical Fair Service Curve Algorithm for Link-Sharing, Real-Time, and Priority Services IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 8, NO. 2, APRIL 2000 185 A Hierarchical Fair Service Curve Algorithm for Link-Sharing, Real-Time, and Priority Services Ion Stoica, Hui Zhang, Member, IEEE, and

More information

Lecture 9. Quality of Service in ad hoc wireless networks

Lecture 9. Quality of Service in ad hoc wireless networks Lecture 9 Quality of Service in ad hoc wireless networks Yevgeni Koucheryavy Department of Communications Engineering Tampere University of Technology yk@cs.tut.fi Lectured by Jakub Jakubiak QoS statement

More information

1158 IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 18, NO. 4, AUGUST Coding-oblivious routing implies that routing decisions are not made based

1158 IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 18, NO. 4, AUGUST Coding-oblivious routing implies that routing decisions are not made based 1158 IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 18, NO. 4, AUGUST 2010 Network Coding-Aware Routing in Wireless Networks Sudipta Sengupta, Senior Member, IEEE, Shravan Rayanchu, and Suman Banerjee, Member,

More information

Introduction to Embedded Systems

Introduction to Embedded Systems Introduction to Embedded Systems Sanjit A. Seshia UC Berkeley EECS 9/9A Fall 0 008-0: E. A. Lee, A. L. Sangiovanni-Vincentelli, S. A. Seshia. All rights reserved. Chapter : Operating Systems, Microkernels,

More information

ECE519 Advanced Operating Systems

ECE519 Advanced Operating Systems IT 540 Operating Systems ECE519 Advanced Operating Systems Prof. Dr. Hasan Hüseyin BALIK (10 th Week) (Advanced) Operating Systems 10. Multiprocessor, Multicore and Real-Time Scheduling 10. Outline Multiprocessor

More information

Some Optimization Trade-offs in Wireless Network Coding

Some Optimization Trade-offs in Wireless Network Coding Some Optimization Trade-offs in Wireless Network Coding Yalin Evren Sagduyu and Anthony Ephremides Electrical and Computer Engineering Department and Institute for Systems Research University of Maryland,

More information

CHAPTER 5 PROPAGATION DELAY

CHAPTER 5 PROPAGATION DELAY 98 CHAPTER 5 PROPAGATION DELAY Underwater wireless sensor networks deployed of sensor nodes with sensing, forwarding and processing abilities that operate in underwater. In this environment brought challenges,

More information

Pricing Intra-Datacenter Networks with

Pricing Intra-Datacenter Networks with Pricing Intra-Datacenter Networks with Over-Committed Bandwidth Guarantee Jian Guo 1, Fangming Liu 1, Tao Wang 1, and John C.S. Lui 2 1 Cloud Datacenter & Green Computing/Communications Research Group

More information

Insensitive Traffic Splitting in Data Networks

Insensitive Traffic Splitting in Data Networks Juha Leino and Jorma Virtamo. 2005. Insensitive traffic splitting in data networs. In: Proceedings of the 9th International Teletraffic Congress (ITC9). Beijing, China, 29 August 2 September 2005, pages

More information

Mixed Criticality Scheduling in Time-Triggered Legacy Systems

Mixed Criticality Scheduling in Time-Triggered Legacy Systems Mixed Criticality Scheduling in Time-Triggered Legacy Systems Jens Theis and Gerhard Fohler Technische Universität Kaiserslautern, Germany Email: {jtheis,fohler}@eit.uni-kl.de Abstract Research on mixed

More information

Branch-and-Bound Algorithms for Constrained Paths and Path Pairs and Their Application to Transparent WDM Networks

Branch-and-Bound Algorithms for Constrained Paths and Path Pairs and Their Application to Transparent WDM Networks Branch-and-Bound Algorithms for Constrained Paths and Path Pairs and Their Application to Transparent WDM Networks Franz Rambach Student of the TUM Telephone: 0049 89 12308564 Email: rambach@in.tum.de

More information

Building a Real-time Notification System

Building a Real-time Notification System Building a Real-time Notification System September 2015, Geneva Author: Jorge Vicente Cantero Supervisor: Jiri Kuncar CERN openlab Summer Student Report 2015 Project Specification Configurable Notification

More information

Concurrent activities in daily life. Real world exposed programs. Scheduling of programs. Tasks in engine system. Engine system

Concurrent activities in daily life. Real world exposed programs. Scheduling of programs. Tasks in engine system. Engine system Real world exposed programs Programs written to interact with the real world, outside the computer Programs handle input and output of data in pace matching the real world processes Necessitates ability

More information

Venice: Reliable Virtual Data Center Embedding in Clouds

Venice: Reliable Virtual Data Center Embedding in Clouds Venice: Reliable Virtual Data Center Embedding in Clouds Qi Zhang, Mohamed Faten Zhani, Maissa Jabri and Raouf Boutaba University of Waterloo IEEE INFOCOM Toronto, Ontario, Canada April 29, 2014 1 Introduction

More information

CPU Scheduling. The scheduling problem: When do we make decision? - Have K jobs ready to run - Have N 1 CPUs - Which jobs to assign to which CPU(s)

CPU Scheduling. The scheduling problem: When do we make decision? - Have K jobs ready to run - Have N 1 CPUs - Which jobs to assign to which CPU(s) 1/32 CPU Scheduling The scheduling problem: - Have K jobs ready to run - Have N 1 CPUs - Which jobs to assign to which CPU(s) When do we make decision? 2/32 CPU Scheduling Scheduling decisions may take

More information

QUERY PLANNING FOR CONTINUOUS AGGREGATION QUERIES USING DATA AGGREGATORS

QUERY PLANNING FOR CONTINUOUS AGGREGATION QUERIES USING DATA AGGREGATORS QUERY PLANNING FOR CONTINUOUS AGGREGATION QUERIES USING DATA AGGREGATORS A. SATEESH 1, D. ANIL 2, M. KIRANKUMAR 3 ABSTRACT: Continuous aggregation queries are used to monitor the changes in data with time

More information

Improving object cache performance through selective placement

Improving object cache performance through selective placement University of Wollongong Research Online Faculty of Informatics - Papers (Archive) Faculty of Engineering and Information Sciences 2006 Improving object cache performance through selective placement Saied

More information

Multimedia Systems 2011/2012

Multimedia Systems 2011/2012 Multimedia Systems 2011/2012 System Architecture Prof. Dr. Paul Müller University of Kaiserslautern Department of Computer Science Integrated Communication Systems ICSY http://www.icsy.de Sitemap 2 Hardware

More information

Chapter 9. Uniprocessor Scheduling

Chapter 9. Uniprocessor Scheduling Operating System Chapter 9. Uniprocessor Scheduling Lynn Choi School of Electrical Engineering Scheduling Processor Scheduling Assign system resource (CPU time, IO device, etc.) to processes/threads to

More information

(Preliminary Version 2 ) Jai-Hoon Kim Nitin H. Vaidya. Department of Computer Science. Texas A&M University. College Station, TX

(Preliminary Version 2 ) Jai-Hoon Kim Nitin H. Vaidya. Department of Computer Science. Texas A&M University. College Station, TX Towards an Adaptive Distributed Shared Memory (Preliminary Version ) Jai-Hoon Kim Nitin H. Vaidya Department of Computer Science Texas A&M University College Station, TX 77843-3 E-mail: fjhkim,vaidyag@cs.tamu.edu

More information

A Level-wise Priority Based Task Scheduling for Heterogeneous Systems

A Level-wise Priority Based Task Scheduling for Heterogeneous Systems International Journal of Information and Education Technology, Vol., No. 5, December A Level-wise Priority Based Task Scheduling for Heterogeneous Systems R. Eswari and S. Nickolas, Member IACSIT Abstract

More information

Uniprocessor Scheduling. Basic Concepts Scheduling Criteria Scheduling Algorithms. Three level scheduling

Uniprocessor Scheduling. Basic Concepts Scheduling Criteria Scheduling Algorithms. Three level scheduling Uniprocessor Scheduling Basic Concepts Scheduling Criteria Scheduling Algorithms Three level scheduling 2 1 Types of Scheduling 3 Long- and Medium-Term Schedulers Long-term scheduler Determines which programs

More information

Network Support for Multimedia

Network Support for Multimedia Network Support for Multimedia Daniel Zappala CS 460 Computer Networking Brigham Young University Network Support for Multimedia 2/33 make the best of best effort use application-level techniques use CDNs

More information

CISC 7310X. C05: CPU Scheduling. Hui Chen Department of Computer & Information Science CUNY Brooklyn College. 3/1/2018 CUNY Brooklyn College

CISC 7310X. C05: CPU Scheduling. Hui Chen Department of Computer & Information Science CUNY Brooklyn College. 3/1/2018 CUNY Brooklyn College CISC 7310X C05: CPU Scheduling Hui Chen Department of Computer & Information Science CUNY Brooklyn College 3/1/2018 CUNY Brooklyn College 1 Outline Recap & issues CPU Scheduling Concepts Goals and criteria

More information

The Pennsylvania State University. The Graduate School. Department of Computer Science and Engineering

The Pennsylvania State University. The Graduate School. Department of Computer Science and Engineering The Pennsylvania State University The Graduate School Department of Computer Science and Engineering QUALITY OF SERVICE BENEFITS OF FLASH IN A STORAGE SYSTEM A Thesis in Computer Science and Engineering

More information

Worst-case Ethernet Network Latency for Shaped Sources

Worst-case Ethernet Network Latency for Shaped Sources Worst-case Ethernet Network Latency for Shaped Sources Max Azarov, SMSC 7th October 2005 Contents For 802.3 ResE study group 1 Worst-case latency theorem 1 1.1 Assumptions.............................

More information

Computation of Multiple Node Disjoint Paths

Computation of Multiple Node Disjoint Paths Chapter 5 Computation of Multiple Node Disjoint Paths 5.1 Introduction In recent years, on demand routing protocols have attained more attention in mobile Ad Hoc networks as compared to other routing schemes

More information