Response Time Analysis of Asynchronous Real-Time Systems

Guillem Bernat
Real-Time Systems Research Group
Department of Computer Science
University of York
York, YO10 5DD, UK

Technical Report: YCS, January 2002

Abstract

In asynchronous real-time systems the times at which events occur cannot be predicted beforehand. Systems with sporadic tasks, systems whose tasks suffer release jitter, or systems that operate a protocol for sharing resources, such as the priority ceiling protocol, are all asynchronous real-time systems. In this paper we present a sufficient and efficient response-time-based analysis technique for computing R_i(k), the worst-case response time at each invocation of the periodic tasks of real-time asynchronous systems. In addition, an efficient idle time computation for asynchronous systems is presented. This analysis technique can be applied to the analysis of several process models, including weakly-hard real-time systems, and to slack management techniques like aperiodic servers and slack stealing algorithms. It is also shown that the pattern of response times of tasks in a hyperperiod is pseudoperiodic, and that the maximum response time instants tend to occur evenly separated within the hyperperiod.

1 Introduction

A real-time system is a system in which the time at which events occur is important. These systems are generally structured as a set of concurrent cooperating tasks. These tasks are usually periodic and have to finish within a well defined deadline. In hard real-time systems it is vital to prove that no single hard deadline is missed. However, not all systems or tasks in the system are always hard. For firm and soft tasks it is possible to miss some of the deadlines occasionally. Whereas for firm tasks it is generally assumed that no value is achieved by running the task beyond the deadline, for soft tasks there is still some value in running them even though they finish late. The

problem with such a definition is that it is generally not possible to determine how many deadlines can actually be missed in an arbitrary window of time. For this reason the notion of a weakly-hard task has been introduced [3]. A weakly-hard task is a periodic task for which the number of deadlines that can be missed in any window of m consecutive deadlines is precisely bounded. This is done, for instance, by specifying two parameters (n, m), meaning that in any m consecutive invocations, at least n deadlines have to be met. The schedulability analysis of such systems needs to determine the response time over arbitrary consecutive invocations.

There are two main categories of scheduling techniques: fixed priority scheduling and dynamic priority scheduling. A scheduler always runs the ready task with the highest priority first. With fixed priorities, this priority is assigned off-line and does not usually change during the execution of the system. Dynamic scheduling approaches recompute the priorities continuously. In this article we consider fixed priority scheduled systems only.

We further characterise real-time systems as being synchronous or asynchronous. A synchronous system is one for which all timing events of the system can be precisely known. An asynchronous system is, on the contrary, one for which some timing events may not be precisely known. The only two timing events we need to consider are task release times and task completion times. For a system to be synchronous, it is necessary to determine precisely the exact execution times and release times of the tasks at each invocation, as well as the sources of interference each task can suffer, so that completion times can be precisely determined. These sources of interference include the release times of higher priority tasks and their execution times. The analysis of synchronous systems is clearly easier to perform than the analysis of asynchronous systems.
Moreover, we argue that very few systems are indeed synchronous, because there are sources of asynchronicity that cannot be ignored. These include:

- Mixture of periodic and sporadic tasks. Although some tasks are periodic, there are tasks that do not have precise arrival times (albeit with a minimum separation between consecutive arrivals).
- Interference from the underlying operating system, which takes the form of release jitter or overheads due to interrupts.
- Usage of a protocol for sharing resources, such as the priority ceiling protocol, which introduces additional interference from lower priority tasks and does not allow the determination of exact interference, as the time when a potential priority promotion will occur is not known.
- Variation in execution times. Tasks may not always run for their worst-case execution time. This also affects the completion times of the rest of the tasks, as the interference pattern changes. However, it is proven that response times have a monotonic relationship with execution times, and that the maximum response time is obtained when tasks run for their maximum execution time.

There are several techniques for the analysis of the timing properties of real-time systems. Response time analysis is an effective, simple and flexible technique that allows the modelling of most aspects of fixed priority systems. For a detailed description of the technique and the process models to which it can be applied see [7]. For a description of the basic response time schedulability analysis for sporadic tasks, blocking factors and release jitter see [1]. With these techniques it is possible to compute the longest response time of any task; checking whether a deadline will be met then amounts to determining whether this worst-case response time is no longer than the deadline. Fortunately, variations of the technique exist to model the asynchronous systems depicted above. These techniques aim at the identification of the critical instant, the time at which a task may suffer its worst possible interference, and then obtain the worst-case response time at that instant. However, response time analysis is still not applicable to a wider set of real-time systems, which are the main focus of this article: we are interested in modelling the timing aspects of systems over multiple periodic task invocations. Other techniques exist for measuring the sensitivity of the tasks to variations in computation times; see for example [21]. They allow one to compute by how much the processor speed can be decreased while still guaranteeing that deadlines will be met.

We present in this paper an extension of the worst-case response time analysis technique that allows the computation of the worst-case response time at each invocation of the periodic tasks of an asynchronous system. For such systems it is necessary to compute the worst-case response time not only for a single invocation, but for a sequence of invocations.
The response time over multiple invocations may vary considerably, because the number of higher priority task releases that can interfere with the invocation under analysis changes. The main application of the formulation presented here is in the context of weakly-hard real-time systems. A weakly-hard task has to meet at least n deadlines in any m consecutive invocations (additional types of constraints can also be defined). This is very useful when an occasional loss of a deadline is tolerable but guarantees on the minimum separation of missed deadlines are also required. The analysis of such systems requires the computation of the worst-case response time at each invocation of the periodic tasks. As not all invocations of a task coincide with the critical instant, even though some invocations may miss the deadline, it is likely that most invocations do actually meet their deadlines. Weakly-hard real-time systems are analysed in detail in [3] and [4]. An application of the formulation developed in this paper to the analysis of the CAN bus with weakly-hard constraints on messages has been presented in [6].

In addition to the two scenarios presented above, it is often necessary to compute the amount of idle time available at a particular priority level in a window of time. The idle time is the time that can be used by lower priority tasks. This computation is essential in slack management algorithms like slack stealing [18] and in aperiodic servers like the deferrable server or the sporadic server. For a detailed analysis of aperiodic servers see [5] and [8]. These algorithms use the idle time as a measure of the amount of slack in the system. Most techniques can analyse synchronous systems only; the analysis of these features for asynchronous systems is very complex and computationally very expensive. The formulation presented in this article allows us to compute, effectively and efficiently, accurate upper bounds on the amount of idle time

available, even for asynchronous systems.

The problem of multiple invocation analysis and idle time computation is quite easy to solve in synchronous systems. A schedule of the system within one hyperperiod (the lcm of the periods of the tasks) can be simulated, and an analysis of the generated schedule, which is guaranteed to exhibit the worst-case values, is then straightforward. This is the technique used, for example, in [17]. However, it cannot be applied to asynchronous systems, as the worst case may not be captured by the simulation of the schedule: the conditions that lead to the worst-case response time at each task invocation are not known. The general problem of determining the schedulability of an asynchronous system is NP-hard [2]. A significant exception is the case of a single processor where all tasks are synchronous and released simultaneously and where deadlines are less than or equal to the periods [10]. As finding exact values is computationally intractable, we aim to provide effective and accurate (tight) upper bounds only.

The technique presented in this paper allows the identification of the variations in the response times of the invocations of a task within the hyperperiod. Even though a task may have a long response time at the critical instant, these conditions are not repeated at every invocation; in fact, the conditions defined at the critical instant seldom occur. Thus, the response time of a task in most invocations is considerably lower than the response time around the critical instant. We show that the invocations at which tasks suffer the longest response times are usually evenly spaced within the hyperperiod, and that the pattern of response times is periodic with a cycle of at most the hyperperiod and, moreover, pseudoperiodic within the hyperperiod.

In summary, the contributions presented in this paper are:

- Efficient computation of idle time at each priority level for asynchronous systems.
- Computation of the worst-case response time at each invocation, R_i(k), for fixed priority real-time systems, including blocking factors, release jitter, offsets and sporadic tasks.
- Analysis of the periodicity of execution patterns within the hyperperiod and across hyperperiods.

The rest of the article is structured as follows. We first describe in detail the formulation of the worst-case response time for multiple invocations for a basic synchronous process model. This is done in sections 2 and 2.2. The extensions of the process model that consider the sources of asynchronicity are analysed in section 3. Conclusions are presented in section 4.

2 Basic process model

We first define and analyse a synchronous process model made up of purely periodic tasks only. The basic process model, which is based on the original process model presented in [11], is made up of N purely periodic tasks:

τ_i = (C_i, T_i, D_i),  i = 1..N

where T_i is the period of the task. At every release, the task requests exactly C_i time units, and it is expected to finish within D_i time units after it has been released. We do not initially restrict the deadline to be smaller than or equal to the period. We also use the priority of a task, denoted by P_i, with 1 being the highest priority. The hypothesis of a fixed C_i, although common, is unrealistic, as there is always some pessimism in the computation of the worst-case execution time and also because tasks seldom run for their worst-case execution time every time. If tasks do not run for the same amount of time, C_i, at every invocation, then the hypothesis of synchronicity no longer holds. However, for the purpose of worst-case analysis, the worst scenario occurs when tasks request C_i time units at every invocation.

The following terms are used in the rest of the article:

- S_i(k) = (k-1)T_i is the start time of task τ_i at invocation k, for k ≥ 1.
- F_i(k) is the worst-case finalization time of task τ_i at invocation k ≥ 1.
- R_i is an upper bound on the worst-case response time at any invocation.
- R_i(k) = F_i(k) - S_i(k) is an upper bound on the worst-case response time of task τ_i at invocation k ≥ 1.

Note that we aim at obtaining upper bounds on the response time, as it is known that for some process models finding the exact value is NP-hard, and therefore safe and tight upper bounds suffice. For some (synchronous) process models we can indeed find the exact value.

Given a task τ_i it is useful to define the sets of tasks of higher and lower priority than this task. Formally, hp(i) is the set of tasks of higher priority than task τ_i, and lp(i) is the set of tasks of lower priority than τ_i. Similarly, hep(i) is the set of tasks of higher than or equal priority to τ_i, and lep(i) is the set of tasks of lower than or equal priority to τ_i. The whole pattern of task releases is repeated every hyperperiod, H = lcm{T_j : j = 1..N}.
In fact, from the point of view of task τ_i, which only suffers interference from higher priority tasks, we can define the hyperperiod at level i, H_i, given by H_i = lcm{T_j : τ_j ∈ hep(i)}. Although a task can suffer some interference from lower priority tasks (due to blocking factors), the notion of hyperperiod at level i still holds. A task is invoked H/T_i times in the hyperperiod, and H_i/T_i times in the hyperperiod at level i. Usually H_i ≤ H, and therefore analysing the task within the hyperperiod at level i requires much less computation time than analysing it within the whole hyperperiod. The utilization at level i, denoted by U_i, is the utilization of the tasks of priority equal to or higher than task τ_i. It is given by:

U_i = Σ_{j ∈ hep(i)} C_j / T_j    (1)

As a notational convenience, if A is a set of tasks, we write U_A to denote the utilization of the tasks in A, which is:

U_A = Σ_{j ∈ A} C_j / T_j    (2)

The utilization of the system, U, is therefore the utilization at the lowest priority level.

2.1 Closed systems

Tasks for which the pattern of response times repeats cyclically are of special interest. We call these tasks closed tasks. A closed system is one for which all tasks are closed. These tasks have the advantage that we can determine properties of the (possibly infinite) execution of the system by studying a finite sequence of invocations only. In fact, the majority of fixed priority models are closed, with cycle equal to the hyperperiod H or the hyperperiod at level i, H_i. We now formally introduce the definition of a closed task:

Definition 1 (Closed task) Given a task τ_i and its worst-case response time at invocation k, R_i(k), task τ_i is closed with cycle c if:

∃ k_0 ≥ 0, c > 0 : ∀ k ≥ k_0, ∀ r ≥ 0 : R_i(k) = R_i(k + rc)

We call k_0 the start of the cycle and c the cycle of the task.(1)

Definition 2 Given a closed task τ_i with cycle c, the closed worst-case response time at invocation k, denoted by R̂_i(k), 1 ≤ k ≤ c, is given by:

R̂_i(k) = R_i(k_0 + k),  1 ≤ k ≤ c    (3)

It has the following property:

∀ k, 1 ≤ k ≤ c, ∀ r ≥ 0 : R̂_i(k) = R_i(k_0 + k + rc)    (4)

2.2 Analysis of the basic process model

We now analyse the conditions under which the basic process model scheduled with fixed priorities is closed, and we present the formulation to compute the worst-case response time of a task, R_i, and the worst-case response time at invocation k, R_i(k).

2.2.1 Closed systems

We first need to determine the conditions under which the system is closed. For the basic process model, a task is closed if the total utilization of the tasks of equal or higher priority is less than or equal to 1. In order to prove that the system is closed, we introduce the concepts of bounded and overrun. Formally:

(1) Note that k, k_0, r and c denote invocation numbers, not time units.

Definition 3 (Bounded) A task τ_i is bounded if an upper bound exists on the response time of any invocation of the task; otherwise it is said to be unbounded. Formally: a task τ_i is bounded if ∃ B ≥ 0 : ∀ k ≥ 1, R_i(k) ≤ B. By extension, we say that a system is bounded if all its tasks are bounded.

Definition 4 (Overrun) A task τ_i overruns the hyperperiod at level i, H_i, if the last invocation of the task before H_i finishes after H_i.

If a task does not overrun the hyperperiod at level i, then the invocation at which the task suffers its worst-case response time is within the first hyperperiod at level i. If the task overruns the hyperperiod at level i, then the worst-case response time of the task may not be within the hyperperiod at level i, as the overrunning invocation could interfere with task invocations that start at or after H_i. Evidently, if a task does not overrun the hyperperiod at level i, it does not overrun the hyperperiod.

Lemma 1 For a task τ_i: τ_i is bounded ⟺ U_i ≤ 1 ⟺ τ_i does not overrun H_i.

Proof: We first prove that Bounded ⟹ U_i ≤ 1, then that U_i ≤ 1 ⟹ τ_i does not overrun, and then that Not overrun ⟹ Bounded.

- Bounded ⟹ U_i ≤ 1: if U_i > 1, the amount of requested computation time is greater than the available time, and therefore the task is not bounded. Therefore, if τ_i is bounded then U_i ≤ 1.
- U_i ≤ 1 ⟹ τ_i does not overrun: the amount of requested processing time in [0, t) is given by W_i(t) = Σ_{j ∈ hep(i)} ⌈t/T_j⌉ C_j. For t = H_i we have W_i(H_i) = Σ_{j ∈ hep(i)} ⌈H_i/T_j⌉ C_j; as T_j divides H_i for all tasks in hep(i), we have W_i(H_i) = Σ_{j ∈ hep(i)} (H_i/T_j) C_j = H_i U_i. Therefore, if U_i ≤ 1 then W_i(H_i) ≤ H_i, and all task invocations released in [0, H_i) finish within H_i. Consequently the task does not overrun the hyperperiod at level i.
- Not overrun ⟹ Bounded: the hyperperiod at level i is then an upper bound on the response time of any invocation. □

This result says that if the utilization at level i of a task is smaller than or equal to 1, then the last invocation of the task in the hyperperiod at level i finishes before the start of the next hyperperiod at level i.
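The quantities used in Lemma 1 are straightforward to compute. A minimal sketch in Python, for a hypothetical three-task set given as (C, T) pairs listed highest priority first (the task set and function names are illustrative, not from the paper):

```python
import math
from functools import reduce

# Hypothetical task set: (C, T) pairs, highest priority first.
tasks = [(1, 4), (2, 6), (3, 12)]

def level_hyperperiod(tasks, i):
    """H_i: lcm of the periods of the tasks in hep(i)."""
    return reduce(lambda a, b: a * b // math.gcd(a, b),
                  [T for (_, T) in tasks[: i + 1]], 1)

def level_utilization(tasks, i):
    """U_i = sum of C_j / T_j over hep(i)  (equation 1)."""
    return sum(C / T for (C, T) in tasks[: i + 1])

def demand(tasks, i, t):
    """W_i(t): processing time requested in [0, t) by tasks in hep(i)."""
    return sum(math.ceil(t / T) * C for (C, T) in tasks[: i + 1])
```

For this set, H_3 = 12, U_3 = 5/6 ≤ 1, and W_3(H_3) = 12 · 5/6 = 10 ≤ H_3, so τ_3 is bounded and does not overrun, as the lemma states.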
If the task does not overrun, then the pattern of worst-case response times is closed, repeating in each hyperperiod at level i, as the following theorem shows.

Theorem 1 (Closed task) A task τ_i is closed with cycle c = H_i/T_i and start of cycle k_0 = 0 iff U_i ≤ 1.

Proof: If the task is closed then it is also bounded, and therefore from Lemma 1, U_i ≤ 1. If U_i ≤ 1 then τ_i and all tasks of higher priority than τ_i do not overrun the hyperperiod at level i. Then, at t = H_i, there is no pending computation time. The amount of interference suffered by the task at invocation k ≥ 1 is therefore the same as the one suffered at invocation H_i/T_i + k. The cyclic pattern of response times is thus repeated every hyperperiod at level i, and starts at the first hyperperiod, k_0 = 0. □

If the task is closed, then the closed worst-case response time at invocation k is given by:

R̂_i(k) = R_i(k),  1 ≤ k ≤ H_i/T_i

In summary, if the utilization of the system is smaller than or equal to 1 then the system is analysable. As deadlines can be arbitrarily small, closed systems may not be schedulable. However, this does not mean that all deadlines are missed: we show that only a subset of the deadlines are actually missed in the hyperperiod, and that this pattern is also cyclic.

2.2.2 Worst-case response time analysis

We first present the computation of R_i, which is a reformulation of previously known results. It is based on the notion of the critical instant, the time instant at which a task suffers its maximum interference from other tasks. For the basic process model this occurs when all tasks are released together, at t = 0 (2) [11]. R_i can be computed with the following equation [1]: R_i is the smallest w ≥ 0 such that:

w = C_i + I_i(w)    (5)

where I_i(w) is the interference from tasks of higher priority than τ_i during the interval [0, w), and is given by:

I_i(w) = Σ_{j ∈ hp(i)} ⌈w/T_j⌉ C_j    (6)

Equation 5 has a solution if U_i ≤ 1. The solution can be computed with the following recurrence relation: w^0 = C_i, w^{n+1} = C_i + I_i(w^n). When w^{n+1} = w^n, the smallest solution to equation 5 has been found and R_i = w^n. This response time corresponds to the finalization time of the first invocation; therefore, we also write F_i(1) = R_i. Note that if w > T_i, then w does not give the worst-case response time, because the task may suffer interference from its own previous invocations.
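The recurrence w^{n+1} = C_i + I_i(w^n) can be sketched in a few lines; the task set here is hypothetical, given as (C, T) pairs listed highest priority first:

```python
import math

# Hypothetical task set: (C, T) pairs, highest priority first; U = 5/6.
tasks = [(1, 4), (2, 6), (3, 12)]

def response_time(tasks, i):
    """Smallest w solving w = C_i + I_i(w) (equation 5), found by
    iterating w^{n+1} = C_i + I_i(w^n) from w^0 = C_i."""
    C_i = tasks[i][0]
    w = C_i
    while True:
        w_next = C_i + sum(math.ceil(w / T) * C for (C, T) in tasks[:i])
        if w_next == w:
            return w  # fixed point reached: this is R_i = F_i(1)
        w = w_next
```

For this set the recurrence gives R values of 1, 3 and 10 for the three tasks; since U ≤ 1, the iteration is guaranteed to converge.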
Therefore, it is necessary to examine further task invocations: we compute the response times of the first k* invocations of the task (see section 2.2.5), where k* is the first invocation for which R_i(k*) ≤ T_i. Thus, we have:

(2) Note that there are other process models for which the critical instant does not coincide with t = 0.

R_i = max_{1 ≤ k ≤ k*} R_i(k)    (7)

This situation may occur in systems for which D_i > T_i. These results were originally published in [19] and [9].

2.2.3 Optimisations

There are some speed-ups that can be used to compute R_i [16]. We review some of them and propose some new ones here. The simplest one is to use an upper bound on the response time given by:

R̄_i = C_i + Σ_{j ∈ hp(i)} ⌈D_i/T_j⌉ C_j

Note that this equation is not a recurrence; if R̄_i ≤ D_i then the task is schedulable. Other optimisations that do obtain the exact value are based on the fact that the correct value of R_i is also reached if we choose as the starting value of the recurrence any value smaller than or equal to R_i. The simplest choice is w^0 = C_i, or w^0 = Σ_{j ∈ hep(i)} C_j. However, we may do better if we try to solve equation 5 directly. Note that ⌈x/y⌉ ≥ x/y. We can define a hot start point for w^0 by replacing the ceiling function in equation 5 with a plain division and solving for w. With some algebra, it is given by:

w^0 = C_i / (1 - U_hp(i))    (8)

If the priority of task τ_i is lower than the priority of task τ_j then R_i ≥ R_j; therefore, we can also use w^0 = R_j as the hot start point for the computation of R_i. In any case, recurrence 5 generally converges in a few iterations. Unless the number of tasks is very large, the cost of performing the floating point division in equation 8 may be more expensive than the iterations it saves.

2.2.4 Finishing time of the 2nd invocation

We now extend the formulation to compute the worst-case finalization time of task τ_i at the second invocation. The finishing time of the second invocation includes the time the task requires to perform its computation plus the interference from higher priority tasks. Note that this interference may be lower than the interference the task suffered at the critical instant, because the higher priority tasks are not released in the worst phasing. Between the finalization of the first invocation and the start of the second one there may be some time during which the processor is used by lower priority tasks. We call this time the idle time at level i.
Formally:

Definition 5 (δ_i(w)) The idle time at level i at time w is the amount of time the processor can be used by tasks of lower priority than τ_i during the period of time [0, w).

Table 1: Task set for computing F_2(2).

Figure 1: Schedule of the task set in Table 1.

The amount of idle time at the start of each task invocation is of special interest. Therefore, we also write:

δ_i(k) = δ_i(S_i(k))

From the point of view of task τ_i, only the total amount of processor time that can be used by lower priority tasks matters, not which tasks actually use it. The computation of the idle time is shown later, in section 2.2.6. We use an example to show how to compute F_2(2). Figure 1 shows the schedule of the task set shown in Table 1. Task τ_2 finishes its first invocation, and the lower priority task τ_3 then takes the processor for 2 time units. Note that after t = 10, when task τ_2 starts its second invocation, no lower priority task can run. The worst-case finishing time of the second invocation is equal to the time required to compute C_2 time units starting from t = 10, taking into account the interference from higher priority tasks. The computation of F_2(2) is based on the fact that this time is equivalent to the worst-case response time of a task at the same priority as τ_2 with execution time 2C_2 + δ_2(2). This is shown in the following theorem:

Theorem 2 The worst-case finalization time of the second invocation of task τ_i, F_i(2), is the smallest w ≥ 0 such that:

w = 2C_i + δ_i(S_i(2)) + I_i(w)    (9)

Proof: We first note that tasks of lower priority than τ_i cannot run after τ_i has been invoked. Therefore, for all t such that S_i(2) ≤ t < F_i(2), δ_i(t) = δ_i(S_i(2)). Hence, δ_i(S_i(2)) gives the amount of time free for lower priority tasks between F_i(1) and S_i(2). Now, consider replacing task τ_i by a task τ'_i with C'_i = 2C_i + δ_i(S_i(2)). The worst-case response time of task τ'_i is the worst-case finishing time of task τ_i at invocation 2, and it can be computed by equation 5. By substituting C'_i into equation 5 we obtain equation 9. □

The computation of δ_i(t) is described later in the paper. The solution to the previous equation can be found, as before, with a recurrence. Note that the second invocation will finish, at the earliest, C_i time units after its invocation time; therefore, w^0 = S_i(2) + C_i is a proper starting point. Thus we have:

w^0 = S_i(2) + C_i,  w^{n+1} = 2C_i + δ_i(S_i(2)) + I_i(w^n)    (10)

The recurrence converges if U_i ≤ 1, and the smallest w^n such that w^n = w^{n+1} is the solution to equation 9.

2.2.5 Finishing time of the k-th invocation

By extension of the formulation shown for F_i(2), the finishing time of the k-th invocation of the task is the time required for k invocations of the task, plus the time the processor is used by lower priority tasks, plus the interference due to higher priority tasks. F_i(k) is the minimum w ≥ 0 such that:

w = kC_i + δ_i(k) + I_i(w)    (11)

The proof follows the same argument as the proof for F_i(2). As before, the solution can be found using a recurrence. δ_i(k) = δ_i(S_i(k)) is constant and does not change at each iteration. The smallest value that can be a solution to equation 11 is S_i(k) + C_i. Thus we have:

w^0 = S_i(k) + C_i,  w^{n+1} = kC_i + δ_i(k) + I_i(w^n)    (12)

Again, the smallest w^n such that w^n = w^{n+1} is the solution to equation 11, and the recurrence converges if U_i ≤ 1. Note that the formulation also considers the case where a task can suffer interference from previous invocations of the same task. Therefore, it is also suitable for analysing task sets with arbitrary deadlines, i.e. D_i > T_i.
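A sketch of the recurrence of equation 12, again over a hypothetical (C, T) task set listed highest priority first; the idle time δ_i(k) is taken as an input, computed as described in section 2.2.6:

```python
import math

def finishing_time(tasks, i, k, delta_k):
    """F_i(k): smallest w with w = k*C_i + delta_i(k) + I_i(w) (eq. 11),
    iterated from w^0 = S_i(k) + C_i (eq. 12)."""
    C_i, T_i = tasks[i]
    w = (k - 1) * T_i + C_i          # S_i(k) + C_i
    while True:
        w_next = k * C_i + delta_k + sum(
            math.ceil(w / T) * C for (C, T) in tasks[:i])
        if w_next == w:
            return w
        w = w_next

tasks = [(1, 4), (2, 6), (3, 12)]
# For the lowest priority task (i = 2), the level idle time at S(2) = 12
# is 2, giving F(2) = 22 and hence R(2) = 22 - 12 = 10.
```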
Also note that δ_i(k) is constant and does not need to be recomputed at each iteration of equation 12.

2.2.6 Computing δ_i(t)

The computation of δ_i(t) is more complex because it cannot be computed directly; the approaches presented in the literature [14] are in fact upper bounds only. This calculation can be done in a similar way to the worst-case response time technique. The

computation is based on noting that the amount of idle time in [0, t) is equal to the maximum computation time a single task, running at lower priority than τ_i, could use during [0, t). To compute this value we assume that the task set is made up only of the tasks of higher than or equal priority to task τ_i, plus a virtual task, τ_v, of lower priority than τ_i. Task τ_v will consume all computation time left unused by the tasks of higher or equal priority than τ_i. To compute the amount of idle time at level i in [0, t), task τ_v has a period and deadline equal to the time t: T_v = D_v = t. The maximum time the processor can be used by tasks of lower priority than τ_i is the maximum computation time, C_v, that still allows task τ_v to meet its deadline. Formally:

δ_i(t) = max { C_v : τ_v is schedulable }    (13)

The schedulability test in (13) is done by solving the following equation for w and checking whether w ≤ t:

w = C_v + I'_i(w)    (14)

where the amount of interference at level i, now also considering task τ_i itself, denoted by I'_i(w), is given by:

I'_i(w) = Σ_{j ∈ hep(i)} ⌈w/T_j⌉ C_j    (15)

As before, it can be computed by a recurrence relation. If w^{n+1} = w^n and w^n ≤ t, then the idle time at level i is at least equal to C_v. If w^n > t, then the idle time is smaller than C_v. The possible values of C_v range from 0 to t, and the largest schedulable one can be found by a dichotomic search. The number of values to test is of the order O(log2 t). The computation of the amount of idle time is computationally more expensive than the actual computation of R_i; in fact, each of the values to test has a cost similar to that of computing R_i. For this reason it is very important to find more efficient ways of computing it. This is done in detail in the following subsection.

2.2.7 Computing δ_i(k)

For the computation of F_i(k) it is required to compute the amount of idle time at the release instant of each task invocation, S_i(k). This time instant has some properties that allow us to obtain tighter upper and lower bounds on the possible values of the amount of idle time, and therefore to compute it much more efficiently.
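The dichotomic search of equations 13 to 15 can be sketched as follows; the task set is hypothetical, and `fits` is the schedulability test of equation 14 for a candidate C_v:

```python
import math

def idle_time(tasks, i, t):
    """delta_i(t): largest C_v a virtual task of lower priority than tau_i,
    with T_v = D_v = t, can use while staying schedulable (eq. 13)."""
    def fits(c_v):
        # Solve w = c_v + I'_i(w) (eq. 14) and check w <= t.
        w = c_v
        while True:
            w_next = c_v + sum(math.ceil(w / T) * C
                               for (C, T) in tasks[: i + 1])
            if w_next > t:
                return False
            if w_next == w:
                return True
            w = w_next
    lo, hi = 0, t                    # delta_i(t) lies in [0, t]
    while lo < hi:                   # dichotomic search: O(log2 t) tests
        mid = (lo + hi + 1) // 2
        if fits(mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

tasks = [(1, 4), (2, 6), (3, 12)]
```

For this set, idle_time(tasks, 2, 12) returns 2: the level demand in [0, 12) is 10, leaving 2 units for lower priority work.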
By noting that δ_i(t) is monotonically nondecreasing, we have that the amount of idle time gained in [F_i(k-1), S_i(k)) ranges from 0 to S_i(k) - F_i(k-1); that is,

δ_i(k-1) ≤ δ_i(k) ≤ δ_i(k-1) + S_i(k) - F_i(k-1)    (16)

with the convention that δ_i(0) = 0 and F_i(0) = 0. We can obtain tighter bounds on these values if we consider the maximum and minimum amounts of computation that higher priority tasks can perform in the interval [F_i(k-1), S_i(k)). We note the following:

- At F_i(k-1) all higher priority tasks have finished their computation; therefore, there is no pending computation time from higher priority tasks. This is very easy to show: if there were pending computation time, task τ_i could not have finished.
- Given a higher priority task τ_j, for each full period of τ_j that lies within [F_i(k-1), S_i(k)), the task will require C_j time units. The amount of idle time is therefore reduced by the same amount.
- Given a higher priority task τ_j, the last invocation of τ_j before S_i(k) may use at most C_j time units.

Therefore, an upper bound on the maximum interference higher priority tasks can produce in the interval [F_i(k-1), S_i(k)) is given by the sum of all of these interferences. As we cannot make any assumption on the phasing of the tasks, a lower bound on the maximum interference corresponds to the interference a single task may produce. We now present the exact formulation. We define n_j(w) as the number of invocations of task τ_j that lie completely inside the interval [0, w]. It is given by:

n_j(w) = ⌊w/T_j⌋    (17)

Similarly, the number of invocations that intersect [0, w] is given by:

N_j(w) = ⌈w/T_j⌉    (18)

We will use them to obtain the number of full invocations of higher priority tasks in a given interval. For a given task τ_j of higher priority than task τ_i, we denote by A_j(k) the amount of computation task τ_j requires for the invocations that lie completely within the interval [F_i(k-1), S_i(k)). It is given by:

A_j(k) = ( n_j(S_i(k)) - N_j(F_i(k-1)) )_0 C_j    (19)

where (x)_0 is a shorthand for max(x, 0). If task τ_j is not released within [F_i(k-1), S_i(k)) then n_j(S_i(k)) ≤ N_j(F_i(k-1)) and therefore A_j(k) = 0. At the last invocation of τ_j before S_i(k), there may not be enough time for the task to perform all its computation.
If so, we know that the maximum amount of interference τ_j can produce is the minimum between its computation time and the size of the interval between the release time of that invocation of τ_j and the end of the interval, S_i(k). We denote this value by B_j(k). If task τ_j is not released within [F_i(k-1), S_i(k)), then B_j(k) is zero. B_j(k) is given by:

B_j(k) = 0,  if n_j(S_i(k)) T_j < F_i(k-1)
B_j(k) = min( C_j, S_i(k) - n_j(S_i(k)) T_j ),  otherwise    (20)

Figure 2: Computing A_j(k) and B_j(k).

Figure 2 shows the values N_j(F_i(k-1)) and n_j(S_i(k)) for a task τ_j over a periodic task τ_i. It can be seen that n_j(S_i(k)) - N_j(F_i(k-1)) = 2 gives the number of full invocations of the higher priority task τ_j within [F_i(k-1), S_i(k)); therefore, A_j(k) = 2C_j. As there is enough time for task τ_j to finish its last invocation before S_i(k), B_j(k) = C_j.

We can now define lower and upper bounds on the values of idle time. A lower bound on the amount of idle time gained within [F_i(k-1), S_i(k)), denoted by Δ^l_i(k), is given by the size of the interval minus an upper bound on the maximum interference. Formally:

Δ^l_i(k) = ( S_i(k) - F_i(k-1) - Σ_{j ∈ hp(i)} (A_j(k) + B_j(k)) )_0    (21)

In a similar way, an upper bound on the amount of idle time gained within [F_i(k-1), S_i(k)), denoted by Δ^u_i(k), is given by the size of the interval minus the minimum of the interferences. Note that we cannot make any stronger assumption: if two or more higher priority tasks are present, what determines the exact amount of interference is the interference pattern between them, and more exact computations of the minimum interference would require more complex computations, which is not worthwhile. Formally, it is given by:

Δ^u_i(k) = ( S_i(k) - F_i(k-1) - min_{j ∈ hp(i)} (A_j(k) + B_j(k)) )_0    (22)

Therefore, the amount of idle time at invocation k is constrained by:

δ_i(k-1) + Δ^l_i(k) ≤ δ_i(k) ≤ δ_i(k-1) + Δ^u_i(k)    (23)

These bounds are much tighter than the ones of equation 16. The lower bound is quite effective, as it considers the amount of interference produced by all higher priority tasks. We can see its effectiveness with the following example:

Figure 3: Upper and lower bounds of idle time computation. (The plot compares the exact idle time against the bounds u and l, the heuristic u', and the line y = x.)

example. Assume that the task under analysis, \tau_i, has a large period, say 1000 ticks, and that all higher priority tasks have short periods. The computation of l_i(k) caters for all the invocations of those tasks, and it may be a little pessimistic, as the last invocations of the tasks before S_i(k) may produce less interference than the amount accounted for in l_i(k) (recall that l_i(k) is a lower bound on the idle time, that is, an overestimation of the amount of interference). The computation of the upper bound is less effective, as the best we can guarantee is the interference of a single task. Nevertheless, we can use a heuristic for the upper bound on the idle time by computing a closer lower bound on the maximum interference. The heuristic is based on considering only the interference that corresponds to full task invocations (the term A_j(k)). Formally, the heuristic bound, denoted u'_i(k), is given by:

    u'_i(k) = ( S_i(k) - F_i(k-1) - \sum_{j \in hp(i)} A_j(k) )_0    (24)

As u'_i(k) is a heuristic, it may be the case that N_i(k) > N_i(k-1) + u'_i(k); therefore, we have to check that u'_i(k) is in fact an upper bound. The effectiveness of the bounds on the idle time can be better seen with the following example. Figure 3 shows the distribution of idle time in [0, 1000]

Table 2: Example task set (C_i, T_i, D_i and the worst case response time R_i of each task). Tasks 9 and 10 missed the deadline in the worst case.

for the task set in table 2. It also shows the upper and lower bounds on the amount of idle time and the heuristic upper bound: u, l and u'. In order to make the graph more representative, it plots the values of u(t), l(t) and u'(t) for different values of t = S_i(k), that is, 1 \le S_i(k) \le 1000. Note that the axes have different scales: the x axis ranges from 0 to 1000. The upper bound, u(t), is not very good, as it departs considerably from the exact value. However, it can be clearly seen that the lower bound and the heuristic upper bound are very close to the exact value. Additionally, on average, the difference u'(t) - l(t) is constant. This is very important, as it indicates that the number of possible idle time values to test remains constant.

A complete example for weakly hard real-time systems

One of the applications of the formulation presented above is the analysis of systems that can tolerate a bounded number of missed deadlines. If a task misses its deadline at the critical instant, it is not possible to determine how many deadlines will be missed unless R_i(k) is computed for all invocations in the hyperperiod. This number depends on the relative periods of the tasks and can be quite large if the periods are co-prime. However, it is considered good engineering practice to choose the periods of the tasks so that the hyperperiod (and the number of invocations of each task in the hyperperiod) is kept small. This can easily be done by choosing, whenever possible, periods that are multiples of each other or that share common prime factors, or by transforming the periods so that they do. In the worst case, we have to compute \sum_{i \in \Gamma} k_i task invocations, where k_i denotes the number of invocations of task \tau_i in its hyperperiod. However, we expect most of the tasks to

Figure 4: Response times in the hyperperiod at level i of tasks \tau_9 and \tau_10 of the task set in table 2.

meet their deadline in the worst case (the critical instant), and therefore it will be necessary to compute R_i(1) only. Note that the values of F_i(k) can also be computed by simulating the schedule of the tasks in [0, H). However, for non-trivial task sets this approach is impractical, because it requires computing F_i(k) for all of the tasks and for all of the invocations in the hyperperiod; the exact number of invocations to test is given by \sum_{i \in \Gamma} H / T_i, which is much larger than \sum_{i \in \Gamma} k_i. Moreover, as mentioned before, for more complex process models a simulation based tool would be required to simulate the tasks for multiple hyperperiods, which would require even more computation time. For example, table 2 shows a task set adapted from the one presented in [12], together with the worst case response times of the tasks. The hyperperiod of the task set is 11000 time units. If a simulation based tool were used to build the schedule and to compute F_i(k), it would have to process several thousand task invocations. With the formulation presented here, the computation of each F_i(k) is harder, but far fewer task invocations need to be computed. Moreover, the study shows that only two of the tasks (tasks 9 and 10) missed the deadline in the worst case; for all the others, the computation of R_i is enough. This reduces the number of invocations to test to just those of the two tasks that miss.

Pseudoperiodicity of response times

The analysis of the task sets examined in this section illustrates two essential properties of the schedules of real-time systems. First, tasks do not exhibit their worst case response time at every invocation; moreover, there is high variability in the response times of the tasks within the hyperperiod

(or the hyperperiod at level i). Second, the invocations at which a task suffers its maximum response time are regularly distributed within the hyperperiod. This is because the conditions that result in the task suffering the maximum interference (an almost critical instant) come from particular phasings of a subset of higher priority tasks, and these phasings are repeated within the hyperperiod. As a result, the pattern of response times is pseudoperiodic, and the same pattern is repeated every hyperperiod.

3 Extensions to the basic process model

Hitherto, we have described in detail the analysis of the basic synchronous process model, which is made up of purely periodic tasks only. However, few real systems adhere to such restrictions: interrupts occur at arbitrary points in time, tasks need to synchronize or to share data with other tasks, and the interference due to the real-time operating system needs to be taken into consideration. We now present the analysis of extensions to the basic process model.

Offsets: each periodic task may have an assigned offset, denoted O_i, which is the time delay before it is first invoked. This offset modifies the pattern of interference a task may suffer, as the critical instant does not happen at t = 0.

Blocking factors: mutual exclusion is ensured by the use of a protocol such as the priority ceiling protocol [15]. This introduces additional delays, called blocking factors, into the tasks' response times. These blocking factors may occur at any invocation and need to be accounted for.

Sporadic tasks: the system may have a mixture of periodic and sporadic tasks. Periodic tasks have well defined invocation times; sporadic tasks, on the contrary, have random arrival times. The worst case finalization time formulation is updated to cater for the maximum interference sporadic tasks may impose on other tasks. Interrupts can be modelled by sporadic tasks.
Release jitter: sporadic tasks may not be placed in a notional ready queue as soon as they arrive. This may be because task arrivals are polled by a tick scheduler, or because the task is awakened by messages that are themselves polled by a message server. The maximum such delay is called the release jitter and is denoted J_i. This time difference may increase the amount of interference that lower priority tasks suffer.

The first extension still applies to synchronous systems, but it is analysed here because the other extensions (which are asynchronous) rely on offset computation for their analysis. We now revise each of them in turn.
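The extensions below build on the counting functions of equations 17 to 20. As a minimal sketch (zero offsets, hypothetical task parameters, not the author's implementation):

```python
import math

def n_full(w, T):
    # n_j(w) = floor(w / T_j): invocations completely inside [0, w)   (eq. 17)
    return math.floor(w / T)

def n_intersect(w, T):
    # N_j(w) = ceil(w / T_j): invocations intersecting [0, w)         (eq. 18)
    return math.ceil(w / T)

def full_demand(S_k, F_prev, C, T):
    # A_j(k): demand of the full invocations of tau_j that lie within
    # [F_i(k-1), S_i(k)), clamped at zero                              (eq. 19)
    return max(n_full(S_k, T) - n_intersect(F_prev, T), 0) * C

def last_demand(S_k, F_prev, C, T):
    # B_j(k): demand of the last invocation of tau_j before S_i(k)    (eq. 20)
    release = n_full(S_k, T) * T
    if release < F_prev:
        return 0
    return min(C, S_k - release)
```

For example, with T_j = 5, C_j = 2, F_i(k-1) = 3 and S_i(k) = 14, one full invocation is charged (A_j(k) = 2) and the last invocation, released at t = 10, contributes B_j(k) = min(2, 14 - 10) = 2.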

3.1 Offsets

The process model can also be extended by considering that each task has an offset O_i. We consider that each task is periodic and is characterised by:

    ( C_i, T_i, D_i, O_i )

The offset, 0 \le O_i < T_i, of a task is the time instant after t = 0 at which the task is first invoked. The task is subsequently invoked with a separation of T_i time units from this instant. Therefore, the task is invoked at:

    S_i(k) = (k - 1) T_i + O_i

We note that the worst case finalization time formulation presented above also works for any offset greater than or equal to zero. This will be used later when analysing the process model of sporadic tasks with release jitter. A task should finish within D_i time units after its invocation, that is, by S_i(k) + D_i. The exact computation of the worst case response time of the task requires the analysis of all the invocations in the hyperperiod. However, the worst case response time formulation for the basic process model (equation 5) gives an upper bound. We now present the formulation for F_i(k) with offsets and the conditions required for the system to be closed.

Formulation

The main difference to the basic process model is that the tasks are not released together at t = 0, and therefore they may not suffer the maximum interference. In other words, the critical instant may never occur. The computation of R_i is performed as in the basic process model, and we have that R_i(k) \le R_i. The computation of F_i(k) is also done using equation 11. However, F_i(1) needs to be computed with equation 11 and not with equation 5, as N_i(S_i(1)) may be greater than zero. For the offset model we need to update the interference function only. It is given by:

    I_i(w) = \sum_{j \in hp(i)} \lceil (w - O_j) / T_j \rceil C_j    (25)

and, similarly,

    I'_i(w) = \sum_{j \in hep(i)} \lceil (w - O_j) / T_j \rceil C_j    (26)

This corresponds to a shift of O_j time units of each task's invocation times. The update also affects the computation of the amount of idle time: the net effect is that the interference is reduced and the amount of idle time increased by the same amount. The shortcuts for computing the bounds on the amount of idle time need to be updated too.
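As a minimal sketch (hypothetical parameters, not the author's implementation), the shifted invocation times and the offset-aware interference function of equation 25 can be written as:

```python
import math

def invocation_time(k, T, O):
    # S_i(k) = (k - 1) T_i + O_i, for k = 1, 2, ...
    return (k - 1) * T + O

def interference(w, hp):
    # I_i(w): interference of the higher priority tasks in [0, w),
    # each release pattern shifted by its offset (equation 25).
    # hp is a list of (C_j, T_j, O_j) triples.  Since 0 <= O_j < T_j,
    # each ceiling term is non-negative (ceil of a value > -1 is >= 0).
    return sum(math.ceil((w - O) / T) * C for (C, T, O) in hp)
```

For instance, a higher priority task with C_j = 2, T_j = 5 and O_j = 1 is released at t = 1, 6, 11, so its interference in [0, 12) is 3 \cdot C_j = 6.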
In this case, n_j(w) and N_j(w) are updated as follows:

    n_j(w) = ( \lfloor (w - O_j) / T_j \rfloor )_0    (27)

    N_j(w) = ( \lceil (w - O_j) / T_j \rceil )_0    (28)

The computation of A_j(k) in equation 19 does not change, and B_j(k), originally computed with equation 20, is updated as follows. If n_j(S_i(k)) T_j + O_j < F_i(k-1) then

    B_j(k) = 0    (29)

otherwise,

    B_j(k) = min( C_j, S_i(k) - n_j(S_i(k)) T_j - O_j )    (30)

Closed systems

Even though the formulation of F_i(k) is very easy to update, the inclusion of offsets requires a revision of the conditions under which the system is closed. We note that even though the tasks are closed and bounded, there may be an overrun over the hyperperiod at level i. This can be shown with a trivial task set made up of a single task whose offset plus computation time exceeds its period: the task starts executing at t = O_i and finishes at t = O_i + C_i, which is O_i + C_i - T_i time units after the hyperperiod. The task is, however, closed and bounded. Lemma 1 can only be proven for: Bounded iff U \le 1. However, if U \le 1 the task set is also closed, as the following theorem shows:

Theorem 3 Given a task set \Gamma with offsets, \Gamma is closed with cycle H and start of cycle H iff U \le 1.

Proof: If the task set is closed then it is also bounded and therefore, from lemma 1, U \le 1. If U \le 1 then, due to the presence of offsets, some computation time requested before H may finish after H (even though U \le 1). This is due to the fact that there may be some idle time in the first hyperperiod. Therefore, task invocations in the second hyperperiod may suffer interference from tasks invoked in the first hyperperiod, and therefore R_i(k_i + k) \ge R_i(k), k \ge 1, where k_i is the number of invocations of task \tau_i in a hyperperiod. At t = H, the amount of requested computation time is given by

    W(H) = \sum_{j \in hep(i)} \lceil (H - O_j) / T_j \rceil C_j

However, some of this computation may finish after H, due to the fact that a task cannot start until O_j. The amount of requested computation time at t = 2H is given by

    W(2H) = \sum_{j \in hep(i)} \lceil (2H - O_j) / T_j \rceil C_j
          = \sum_{j \in hep(i)} \lceil (H - O_j) / T_j \rceil C_j + \sum_{j \in hep(i)} (H / T_j) C_j
          = W(H) + H U

since H / T_j is an integer. The amount of requested computation time in the second hyperperiod is therefore H U; at the third hyperperiod, the amount of requested computation time is also H U,

the same as before. Therefore, as U \le 1, the amount of requested computation time in [H, 2H) is the same as in [2H, 3H) and, in general, in [rH, (r+1)H). By noting that the tasks have the same release pattern in every hyperperiod, we have that R_i(k_i + k) = R_i(r k_i + k_i + k), for k \ge 1, r \ge 0. Therefore, the task set is closed with cycle H and start of cycle H. □

The previous theorem suggests two ways of computing R_i(k): compute R_i(k) for the second hyperperiod only, or compute R_i(k) for the first hyperperiod and some invocations of the second. Formally:

1. Compute R_i(k_i + k) for k \ge 1. The drawback is having to compute the amount of idle time at level i from t = 0 to S_i(k_i + 1) = H + O_i. This computation can be quite expensive, as the amount of idle time in a whole hyperperiod can be quite large; however, by applying the upper and lower bounds, it can be done very efficiently. R_i(k) is then given by:

    R_i(k) = R_i(k_i + k),  k \ge 1

2. After t = H there may be some pending computation time from task invocations of the first hyperperiod. This pending computation time may interfere with some invocations of the second hyperperiod; therefore, for a given k, R_i(k_i + k) may be greater than the response time of the corresponding invocation of the first hyperperiod, R_i(k). However, the first k' such that R_i(k_i + k') = R_i(k') has the property that R_i(k_i + k) = R_i(k) for k \ge k'. This comes from the fact that, from this invocation on, the task does not suffer more interference than the corresponding invocation of the first hyperperiod. The computation can therefore be done as follows. First, compute R_i(k) for k \ge 1 in the first hyperperiod. Then, compute R_i(k_i + k) for some invocations of the second hyperperiod; the first k' \ge 1 such that R_i(k_i + k') = R_i(k') corresponds to the first invocation after which the response times in the second hyperperiod are the same as in the first one. The steady-state response time of invocation k is then given by R_i(k_i + k) for 1 \le k < k', and by R_i(k) for k \ge k'.

The first method is more straightforward, but it requires the computation of the amount of idle time in the whole first hyperperiod.
The second method is preferred because it is an extension of the previous formulation, and because it is uncommon for the tasks to overrun the hyperperiod, so in most cases k' will be zero or one. Note that, with offsets, R_i(1) does not give the worst case response time of the task. The exact worst case response time, R'_i, is given by:

    R'_i = \max_{k \ge 1} { R_i(k) }    (31)

The worst case response time R_i computed with equation 5 is only an upper bound on the task's response time.
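The second method can be sketched as a small driver; `response_time` is a hypothetical callback standing for the R_i(k) computation via F_i(k):

```python
def worst_case_response(response_time, k_H):
    """Compute R_i(k) over the first hyperperiod, then scan the second
    hyperperiod until the response times repeat.  response_time(k) is
    assumed to return R_i(k); k_H is the number of invocations of the
    task in its hyperperiod.  Returns (R'_i, k')."""
    first = [response_time(k) for k in range(1, k_H + 1)]
    k_prime = k_H + 1
    for k in range(1, k_H + 1):
        # first k' with R(k_H + k') = R(k'): from here on the second
        # hyperperiod repeats the first
        if response_time(k_H + k) == first[k - 1]:
            k_prime = k
            break
    steady = [response_time(k_H + k) if k < k_prime else first[k - 1]
              for k in range(1, k_H + 1)]
    # exact worst case response time R'_i (equation 31)
    return max(steady), k_prime
```

When no invocation overruns the hyperperiod, the very first comparison already matches (k' = 1) and the result reduces to the maximum over the first hyperperiod.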

Figure 5: Blocking at invocation k. (Task \tau_i suffers a blocking of B_i time units at the release of invocations k-1 and k, with idle time between F_i(k-1) and S_i(k).)

3.2 Blocking factors

The use of a protocol for mutual exclusion such as the priority ceiling protocol [15] implies that a task may be blocked by lower priority tasks for a blocking time, or blocking factor, denoted B_i. For this case the process model is updated as follows:

    ( C_i, T_i, D_i, B_i )

In this section we propose two different ways of computing F_i(k) for task sets with blocking factors, and we reformulate the conditions under which the task set is closed.

Formulation

The worst case response time formulation is the one presented earlier: R_i is the smallest w > 0 such that:

    w = C_i + B_i + I_i(w)    (32)

where I_i(w) is given by equation 6. We now extend the formulation to compute F_i(k). A task can only be blocked once at each invocation, for a duration of at most B_i time units. The worst case scenario at invocation k happens when the task suffers the maximum blocking at the release time of each invocation. This is shown in figure 5. Task \tau_i is invoked, at invocation k, at time S_i(k). Just before this instant, a lower priority task has locked a semaphore and has had its priority raised to the ceiling of the semaphore; task \tau_i is delayed by at most B_i time units. Also note that, for the computation of F_i(k), it is not relevant whether, between F_i(k-1) and S_i(k), other lower priority tasks have locked the semaphore and consequently had their priorities raised to its ceiling. The worst case scenario happens when \tau_i suffers a blocking exactly at its release instant. Therefore, F_i(k) is the minimum w \ge 0 such that:

    w = k C_i + k B_i + N_i(k) + I_i(w)    (33)

The amount of idle time in [0, S_i(k)) is computed as before, with the only consideration that the task has been blocked at each of the previous k-1 invocations.
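Equation 33 can be solved by the usual fixed-point iteration on w; the sketch below assumes zero offsets and takes the idle time N_i(k) as a precomputed input (a minimal illustration, not the author's implementation):

```python
import math

def finalization_time(k, C, B, N_k, hp, limit=10**6):
    # Smallest w >= 0 with w = k*C + k*B + N_i(k) + I_i(w)   (equation 33).
    # hp is a list of (C_j, T_j) pairs of the higher priority tasks;
    # the iteration starts at the constant part of the recurrence.
    w = k * (C + B) + N_k
    while w <= limit:
        demand = k * (C + B) + N_k + sum(
            math.ceil(w / T) * Cj for (Cj, T) in hp)
        if demand == w:
            return w
        w = demand
    return None  # did not converge within the limit
```

Since the right-hand side is monotone in w, the iteration converges to the least fixed point whenever the equivalent utilization is below one.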

The scheduling test of the virtual task used for computing N_i(k) needs to be updated too. N_i(k) is the maximum N such that the virtual task is schedulable. The schedulability test of the virtual task is given by:

    w = (k + 1) C_i + (k + 1) B_i + N + I_i(w)    (34)

However, this situation is very pessimistic, as the task will not suffer the maximum interference from lower priority tasks at every invocation. We note that, if all tasks always run for their worst case execution time, then a particular task may only be blocked at invocation k if a lower priority task can run between the finalization time of the previous invocation, F_i(k-1), and the invocation time of the current one, S_i(k); that is, if N_i(k) > N_i(k-1). Therefore, for k \ge 0, we denote by \beta_i(k) the maximum blocking time the task may have suffered up to invocation k. It is given by:

    \beta_i(0) = 0
    \beta_i(k) = \beta_i(k-1) + B_i    if N_i(k) > N_i(k-1)
    \beta_i(k) = \beta_i(k-1)          otherwise    (35)

The F_i(k) formulation is updated as follows: F_i(k) is the smallest w \ge 0 such that:

    w = k C_i + \beta_i(k) + N_i(k) + I_i(w)    (36)

The schedulability test for the virtual task shown in equation 34 is updated accordingly:

    w = (k + 1) C_i + \beta_i(k) + B_i + N + I_i(w)    (37)

The bounds used in the computation of N_i(k), namely l_i(k), u_i(k) and u'_i(k), also hold. This is because the worst case scenario for task \tau_i at invocation k happens when it suffers a blocking at S_i(k); we can therefore assume that between F_i(k-1) and S_i(k) no blocking takes place, and the amount of idle time is bounded by the l and u of equations 21 and 22. Also note that F_i(k) is an upper bound on the worst case finalization time: even though lower priority tasks may run between the finalization time of one invocation and the invocation time of the following one, the task that provokes the blocking may not get the processor, may not perform the lock operation on the semaphore, or may not lock it in a way that produces the largest interference on task \tau_i.
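The recurrence for \beta_i(k) in equation 35 follows directly from the idle time sequence; a sketch, assuming the N_i(k) values are already known:

```python
def cumulative_blocking(B, idle):
    # beta_i(k) per equation 35.  idle[k] holds N_i(k), with idle[0] = 0;
    # a blocking term B is charged only when the idle time grows, i.e.
    # when a lower priority task could run before invocation k's release.
    beta = [0]
    for k in range(1, len(idle)):
        if idle[k] > idle[k - 1]:
            beta.append(beta[-1] + B)
        else:
            beta.append(beta[-1])
    return beta
```

For example, with B_i = 2 and idle time sequence N_i = [0, 0, 3, 3, 5], blocking is charged only at the two invocations where the idle time increases.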
This last formulation can only be applied if the tasks run for exactly C_i time units; if a task runs for less than its worst case execution time, the assumptions no longer hold and the simple computation of the blocking factor needs to be used instead.

Closed system

The computation of F_i(k) in equation 33 is equivalent to the computation of F_i(k) for the basic process model in which C_i is replaced by C_i + B_i. We define the equivalent utilization at level i of this task set, denoted U^B_i, as:

    U^B_i = (C_i + B_i) / T_i + U_{hp(i)}    (38)
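Equation 38 simply charges the blocking factor as extra computation time; as a sketch:

```python
def equivalent_utilization(C, B, T, hp):
    # U^B_i = (C_i + B_i) / T_i + U_hp(i)   (equation 38);
    # hp is a list of (C_j, T_j) pairs of the higher priority tasks.
    return (C + B) / T + sum(Cj / Tj for (Cj, Tj) in hp)
```

For example, a task with C_i = 2, B_i = 1, T_i = 10 and higher priority tasks (1, 4) and (2, 8) has an equivalent utilization at level i of 0.3 + 0.25 + 0.25 = 0.8.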


More information

Computing optimal linear layouts of trees in linear time

Computing optimal linear layouts of trees in linear time Computing optimal linear layouts of trees in linear time Konstantin Skodinis University of Passau, 94030 Passau, Germany, e-mail: skodinis@fmi.uni-passau.de Abstract. We present a linear time algorithm

More information

An Improved Priority Ceiling Protocol to Reduce Context Switches in Task Synchronization 1

An Improved Priority Ceiling Protocol to Reduce Context Switches in Task Synchronization 1 An Improved Priority Ceiling Protocol to Reduce Context Switches in Task Synchronization 1 Albert M.K. Cheng and Fan Jiang Computer Science Department University of Houston Houston, TX, 77204, USA http://www.cs.uh.edu

More information

Implementing Sporadic Servers in Ada

Implementing Sporadic Servers in Ada Technical Report CMU/SEI-90-TR-6 ESD-90-TR-207 Implementing Sporadic Servers in Ada Brinkley Sprunt Lui Sha May 1990 Technical Report CMU/SEI-90-TR-6 ESD-90-TR-207 May 1990 Implementing Sporadic Servers

More information

The Encoding Complexity of Network Coding

The Encoding Complexity of Network Coding The Encoding Complexity of Network Coding Michael Langberg Alexander Sprintson Jehoshua Bruck California Institute of Technology Email: mikel,spalex,bruck @caltech.edu Abstract In the multicast network

More information

Chapter 15 Introduction to Linear Programming

Chapter 15 Introduction to Linear Programming Chapter 15 Introduction to Linear Programming An Introduction to Optimization Spring, 2015 Wei-Ta Chu 1 Brief History of Linear Programming The goal of linear programming is to determine the values of

More information

Exam Review TexPoint fonts used in EMF.

Exam Review TexPoint fonts used in EMF. Exam Review Generics Definitions: hard & soft real-time Task/message classification based on criticality and invocation behavior Why special performance measures for RTES? What s deadline and where is

More information

A Comparison of Structural CSP Decomposition Methods

A Comparison of Structural CSP Decomposition Methods A Comparison of Structural CSP Decomposition Methods Georg Gottlob Institut für Informationssysteme, Technische Universität Wien, A-1040 Vienna, Austria. E-mail: gottlob@dbai.tuwien.ac.at Nicola Leone

More information

Scheduling Sporadic and Aperiodic Events in a Hard Real-Time System

Scheduling Sporadic and Aperiodic Events in a Hard Real-Time System Technical Report CMU/SEI-89-TR-11 ESD-TR-89-19 Scheduling Sporadic and Aperiodic Events in a Hard Real-Time System Brinkley Sprunt Lui Sha John Lehoczky April 1989 Technical Report CMU/SEI-89-TR-11 ESD-TR-89-19

More information

An algorithm for Performance Analysis of Single-Source Acyclic graphs

An algorithm for Performance Analysis of Single-Source Acyclic graphs An algorithm for Performance Analysis of Single-Source Acyclic graphs Gabriele Mencagli September 26, 2011 In this document we face with the problem of exploiting the performance analysis of acyclic graphs

More information

Scheduling Algorithm and Analysis

Scheduling Algorithm and Analysis Scheduling Algorithm and Analysis Model and Cyclic Scheduling (Module 27) Yann-Hang Lee Arizona State University yhlee@asu.edu (480) 727-7507 Summer 2014 Task Scheduling Schedule: to determine which task

More information

Real-time operating systems and scheduling

Real-time operating systems and scheduling Real-time operating systems and scheduling Problem 21 Consider a real-time operating system (OS) that has a built-in preemptive scheduler. Each task has a unique priority and the lower the priority id,

More information

Cofactoring-Based Upper Bound Computation for Covering Problems

Cofactoring-Based Upper Bound Computation for Covering Problems TR-CSE-98-06, UNIVERSITY OF MASSACHUSETTS AMHERST Cofactoring-Based Upper Bound Computation for Covering Problems Congguang Yang Maciej Ciesielski May 998 TR-CSE-98-06 Department of Electrical and Computer

More information

Directed Single Source Shortest Paths in Linear Average Case Time

Directed Single Source Shortest Paths in Linear Average Case Time Directed Single Source Shortest Paths in inear Average Case Time Ulrich Meyer MPI I 2001 1-002 May 2001 Author s Address ÍÐÖ ÅÝÖ ÅܹÈÐÒ¹ÁÒ ØØÙØ ĐÙÖ ÁÒÓÖÑØ ËØÙÐ ØÞÒÙ Û ½¾ ËÖÖĐÙÒ umeyer@mpi-sb.mpg.de www.uli-meyer.de

More information

Scheduling Periodic and Aperiodic. John P. Lehoczky and Sandra R. Thuel. and both hard and soft deadline aperiodic tasks using xed-priority methods.

Scheduling Periodic and Aperiodic. John P. Lehoczky and Sandra R. Thuel. and both hard and soft deadline aperiodic tasks using xed-priority methods. Chapter 8 Scheduling Periodic and Aperiodic Tasks Using the Slack Stealing Algorithm John P. Lehoczky and Sandra R. Thuel This chapter discusses the problem of jointly scheduling hard deadline periodic

More information

DISTRIBUTED REAL-TIME SYSTEMS

DISTRIBUTED REAL-TIME SYSTEMS Distributed Systems Fö 11/12-1 Distributed Systems Fö 11/12-2 DISTRIBUTED REAL-TIME SYSTEMS What is a Real-Time System? 1. What is a Real-Time System? 2. Distributed Real Time Systems 3. Predictability

More information

3 No-Wait Job Shops with Variable Processing Times

3 No-Wait Job Shops with Variable Processing Times 3 No-Wait Job Shops with Variable Processing Times In this chapter we assume that, on top of the classical no-wait job shop setting, we are given a set of processing times for each operation. We may select

More information

Authors Abugchem, F. (Fathi); Short, M. (Michael); Xu, D. (Donglai)

Authors Abugchem, F. (Fathi); Short, M. (Michael); Xu, D. (Donglai) TeesRep - Teesside's Research Repository A Note on the Suboptimality of Nonpreemptive Real-time Scheduling Item type Article Authors Abugchem, F. (Fathi); Short, M. (Michael); Xu, D. (Donglai) Citation

More information

On-line Scheduling on Uniform Multiprocessors

On-line Scheduling on Uniform Multiprocessors On-line Scheduling on Uniform Multiprocessors Shelby Funk Ý Joël Goossens Þ Sanjoy Baruah Ý Ý University of North Carolina Chapel Hill, North Carolina funk,baruah@cs.unc.edu Þ Université Libre de Bruxelles

More information

Tolls for heterogeneous selfish users in multicommodity networks and generalized congestion games

Tolls for heterogeneous selfish users in multicommodity networks and generalized congestion games Tolls for heterogeneous selfish users in multicommodity networks and generalized congestion games Lisa Fleischer Kamal Jain Mohammad Mahdian Abstract We prove the existence of tolls to induce multicommodity,

More information

Event List Management In Distributed Simulation

Event List Management In Distributed Simulation Event List Management In Distributed Simulation Jörgen Dahl ½, Malolan Chetlur ¾, and Philip A Wilsey ½ ½ Experimental Computing Laboratory, Dept of ECECS, PO Box 20030, Cincinnati, OH 522 0030, philipwilsey@ieeeorg

More information

Tasks. Task Implementation and management

Tasks. Task Implementation and management Tasks Task Implementation and management Tasks Vocab Absolute time - real world time Relative time - time referenced to some event Interval - any slice of time characterized by start & end times Duration

More information

Online Facility Location

Online Facility Location Online Facility Location Adam Meyerson Abstract We consider the online variant of facility location, in which demand points arrive one at a time and we must maintain a set of facilities to service these

More information

Approximation by NURBS curves with free knots

Approximation by NURBS curves with free knots Approximation by NURBS curves with free knots M Randrianarivony G Brunnett Technical University of Chemnitz, Faculty of Computer Science Computer Graphics and Visualization Straße der Nationen 6, 97 Chemnitz,

More information

Fault tolerant scheduling in real time systems

Fault tolerant scheduling in real time systems tolerant scheduling in real time systems Afrin Shafiuddin Department of Electrical and Computer Engineering University of Wisconsin-Madison shafiuddin@wisc.edu Swetha Srinivasan Department of Electrical

More information

A Framework for Space and Time Efficient Scheduling of Parallelism

A Framework for Space and Time Efficient Scheduling of Parallelism A Framework for Space and Time Efficient Scheduling of Parallelism Girija J. Narlikar Guy E. Blelloch December 996 CMU-CS-96-97 School of Computer Science Carnegie Mellon University Pittsburgh, PA 523

More information

arxiv: v1 [cs.ma] 8 May 2018

arxiv: v1 [cs.ma] 8 May 2018 Ordinal Approximation for Social Choice, Matching, and Facility Location Problems given Candidate Positions Elliot Anshelevich and Wennan Zhu arxiv:1805.03103v1 [cs.ma] 8 May 2018 May 9, 2018 Abstract

More information

Learning to Align Sequences: A Maximum-Margin Approach

Learning to Align Sequences: A Maximum-Margin Approach Learning to Align Sequences: A Maximum-Margin Approach Thorsten Joachims Department of Computer Science Cornell University Ithaca, NY 14853 tj@cs.cornell.edu August 28, 2003 Abstract We propose a discriminative

More information

From Tableaux to Automata for Description Logics

From Tableaux to Automata for Description Logics Fundamenta Informaticae XX (2003) 1 33 1 IOS Press From Tableaux to Automata for Description Logics Franz Baader, Jan Hladik, and Carsten Lutz Theoretical Computer Science, TU Dresden, D-01062 Dresden,

More information

Dynamic Voltage Scaling of Periodic and Aperiodic Tasks in Priority-Driven Systems Λ

Dynamic Voltage Scaling of Periodic and Aperiodic Tasks in Priority-Driven Systems Λ Dynamic Voltage Scaling of Periodic and Aperiodic Tasks in Priority-Driven Systems Λ Dongkun Shin Jihong Kim School of CSE School of CSE Seoul National University Seoul National University Seoul, Korea

More information

EXTENDING THE PRIORITY CEILING PROTOCOL USING READ/WRITE AFFECTED SETS MICHAEL A. SQUADRITO A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE

EXTENDING THE PRIORITY CEILING PROTOCOL USING READ/WRITE AFFECTED SETS MICHAEL A. SQUADRITO A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE EXTENDING THE PRIORITY CEILING PROTOCOL USING READ/WRITE AFFECTED SETS BY MICHAEL A. SQUADRITO A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE IN COMPUTER

More information

An Abstraction Algorithm for the Verification of Level-Sensitive Latch-Based Netlists

An Abstraction Algorithm for the Verification of Level-Sensitive Latch-Based Netlists An Abstraction Algorithm for the Verification of Level-Sensitive Latch-Based Netlists Jason Baumgartner ½, Tamir Heyman ¾, Vigyan Singhal, and Adnan Aziz ½ IBM Enterprise Systems Group, Austin, Texas 78758,

More information

Mathematical and Algorithmic Foundations Linear Programming and Matchings

Mathematical and Algorithmic Foundations Linear Programming and Matchings Adavnced Algorithms Lectures Mathematical and Algorithmic Foundations Linear Programming and Matchings Paul G. Spirakis Department of Computer Science University of Patras and Liverpool Paul G. Spirakis

More information

Ulysses: A Robust, Low-Diameter, Low-Latency Peer-to-Peer Network

Ulysses: A Robust, Low-Diameter, Low-Latency Peer-to-Peer Network 1 Ulysses: A Robust, Low-Diameter, Low-Latency Peer-to-Peer Network Abhishek Kumar Shashidhar Merugu Jun (Jim) Xu Xingxing Yu College of Computing, School of Mathematics, Georgia Institute of Technology,

More information

FIXED PRIORITY SCHEDULING ANALYSIS OF THE POWERTRAIN MANAGEMENT APPLICATION EXAMPLE USING THE SCHEDULITE TOOL

FIXED PRIORITY SCHEDULING ANALYSIS OF THE POWERTRAIN MANAGEMENT APPLICATION EXAMPLE USING THE SCHEDULITE TOOL FIXED PRIORITY SCHEDULING ANALYSIS OF THE POWERTRAIN MANAGEMENT APPLICATION EXAMPLE USING THE SCHEDULITE TOOL Jens Larsson t91jla@docs.uu.se Technical Report ASTEC 97/03 DoCS 97/82 Department of Computer

More information

Analysis of Binary Adjustment Algorithms in Fair Heterogeneous Networks

Analysis of Binary Adjustment Algorithms in Fair Heterogeneous Networks Analysis of Binary Adjustment Algorithms in Fair Heterogeneous Networks Sergey Gorinsky Harrick Vin Technical Report TR2000-32 Department of Computer Sciences, University of Texas at Austin Taylor Hall

More information

On the Complexity of Worst-Case Blocking Analysis of Nested Critical Sections

On the Complexity of Worst-Case Blocking Analysis of Nested Critical Sections On the Complexity of Worst-Case Blocking Analysis of Nested Critical Sections Alexander Wieder Björn B. Brandenburg Max Planck Institute for Software Systems (MPI-SWS) Abstract Accurately bounding the

More information

Disjoint, Partition and Intersection Constraints for Set and Multiset Variables

Disjoint, Partition and Intersection Constraints for Set and Multiset Variables Disjoint, Partition and Intersection Constraints for Set and Multiset Variables Christian Bessiere ½, Emmanuel Hebrard ¾, Brahim Hnich ¾, and Toby Walsh ¾ ¾ ½ LIRMM, Montpelier, France. Ö Ð ÖÑÑ Ö Cork

More information

Resource-bound process algebras for Schedulability and Performance Analysis of Real-Time and Embedded Systems

Resource-bound process algebras for Schedulability and Performance Analysis of Real-Time and Embedded Systems Resource-bound process algebras for Schedulability and Performance Analysis of Real-Time and Embedded Systems Insup Lee 1, Oleg Sokolsky 1, Anna Philippou 2 1 RTG (Real-Time Systems Group) Department of

More information

Module 11. Directed Graphs. Contents

Module 11. Directed Graphs. Contents Module 11 Directed Graphs Contents 11.1 Basic concepts......................... 256 Underlying graph of a digraph................ 257 Out-degrees and in-degrees.................. 258 Isomorphism..........................

More information

SFU CMPT Lecture: Week 9

SFU CMPT Lecture: Week 9 SFU CMPT-307 2008-2 1 Lecture: Week 9 SFU CMPT-307 2008-2 Lecture: Week 9 Ján Maňuch E-mail: jmanuch@sfu.ca Lecture on July 8, 2008, 5.30pm-8.20pm SFU CMPT-307 2008-2 2 Lecture: Week 9 Binary search trees

More information

1. Chapter 1, # 1: Prove that for all sets A, B, C, the formula

1. Chapter 1, # 1: Prove that for all sets A, B, C, the formula Homework 1 MTH 4590 Spring 2018 1. Chapter 1, # 1: Prove that for all sets,, C, the formula ( C) = ( ) ( C) is true. Proof : It suffices to show that ( C) ( ) ( C) and ( ) ( C) ( C). ssume that x ( C),

More information

Computing Maximally Separated Sets in the Plane and Independent Sets in the Intersection Graph of Unit Disks

Computing Maximally Separated Sets in the Plane and Independent Sets in the Intersection Graph of Unit Disks Computing Maximally Separated Sets in the Plane and Independent Sets in the Intersection Graph of Unit Disks Pankaj K. Agarwal Ý Mark Overmars Þ Micha Sharir Ü Abstract Let Ë be a set of Ò points in Ê.

More information

ECE608 - Chapter 16 answers

ECE608 - Chapter 16 answers ¼ À ÈÌ Ê ½ ÈÊÇ Ä ÅË ½µ ½ º½¹ ¾µ ½ º½¹ µ ½ º¾¹½ µ ½ º¾¹¾ µ ½ º¾¹ µ ½ º ¹ µ ½ º ¹ µ ½ ¹½ ½ ECE68 - Chapter 6 answers () CLR 6.-4 Let S be the set of n activities. The obvious solution of using Greedy-Activity-

More information

Discrete Optimization. Lecture Notes 2

Discrete Optimization. Lecture Notes 2 Discrete Optimization. Lecture Notes 2 Disjunctive Constraints Defining variables and formulating linear constraints can be straightforward or more sophisticated, depending on the problem structure. The

More information

Topology 550A Homework 3, Week 3 (Corrections: February 22, 2012)

Topology 550A Homework 3, Week 3 (Corrections: February 22, 2012) Topology 550A Homework 3, Week 3 (Corrections: February 22, 2012) Michael Tagare De Guzman January 31, 2012 4A. The Sorgenfrey Line The following material concerns the Sorgenfrey line, E, introduced in

More information

Integer Programming Theory

Integer Programming Theory Integer Programming Theory Laura Galli October 24, 2016 In the following we assume all functions are linear, hence we often drop the term linear. In discrete optimization, we seek to find a solution x

More information

2386 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 6, JUNE 2006

2386 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 6, JUNE 2006 2386 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 6, JUNE 2006 The Encoding Complexity of Network Coding Michael Langberg, Member, IEEE, Alexander Sprintson, Member, IEEE, and Jehoshua Bruck,

More information

On the Complexity of List Scheduling Algorithms for Distributed-Memory Systems.

On the Complexity of List Scheduling Algorithms for Distributed-Memory Systems. On the Complexity of List Scheduling Algorithms for Distributed-Memory Systems. Andrei Rădulescu Arjan J.C. van Gemund Faculty of Information Technology and Systems Delft University of Technology P.O.Box

More information

6.1 Motivation. Fixed Priorities. 6.2 Context Switch. Real-time is about predictability, i.e. guarantees. Real-Time Systems

6.1 Motivation. Fixed Priorities. 6.2 Context Switch. Real-time is about predictability, i.e. guarantees. Real-Time Systems Real-Time Systems Summer term 2017 6.1 Motivation 6.1 Motivation Real-Time Systems 6 th Chapter Practical Considerations Jafar Akhundov, M.Sc. Professur Betriebssysteme Real-time is about predictability,

More information

This file contains an excerpt from the character code tables and list of character names for The Unicode Standard, Version 3.0.

This file contains an excerpt from the character code tables and list of character names for The Unicode Standard, Version 3.0. Range: This file contains an excerpt from the character code tables and list of character names for The Unicode Standard, Version.. isclaimer The shapes of the reference glyphs used in these code charts

More information

3.4 Deduction and Evaluation: Tools Conditional-Equational Logic

3.4 Deduction and Evaluation: Tools Conditional-Equational Logic 3.4 Deduction and Evaluation: Tools 3.4.1 Conditional-Equational Logic The general definition of a formal specification from above was based on the existence of a precisely defined semantics for the syntax

More information

Implementing Scheduling Algorithms. Real-Time and Embedded Systems (M) Lecture 9

Implementing Scheduling Algorithms. Real-Time and Embedded Systems (M) Lecture 9 Implementing Scheduling Algorithms Real-Time and Embedded Systems (M) Lecture 9 Lecture Outline Implementing real time systems Key concepts and constraints System architectures: Cyclic executive Microkernel

More information

Properly Colored Paths and Cycles in Complete Graphs

Properly Colored Paths and Cycles in Complete Graphs 011 ¼ 9 È È 15 ± 3 ¾ Sept., 011 Operations Research Transactions Vol.15 No.3 Properly Colored Paths and Cycles in Complete Graphs Wang Guanghui 1 ZHOU Shan Abstract Let K c n denote a complete graph on

More information

Scheduling with Bus Access Optimization for Distributed Embedded Systems

Scheduling with Bus Access Optimization for Distributed Embedded Systems 472 IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, VOL. 8, NO. 5, OCTOBER 2000 Scheduling with Bus Access Optimization for Distributed Embedded Systems Petru Eles, Member, IEEE, Alex

More information