Dynamic Scheduling of Real-Time Aperiodic Tasks on Multiprocessor Architectures


Proceedings of the 29th Annual Hawaii International Conference on System Sciences

Dynamic Scheduling of Real-Time Aperiodic Tasks on Multiprocessor Architectures

Abstract

The application of static optimization techniques such as branch-and-bound to real-time task scheduling has been investigated. Few pieces of work, however, have been reported which propose and investigate on-line optimization techniques for dynamic scheduling of real-time tasks. In such task domains, the difficulty of scheduling is exacerbated by the fact that the cost of scheduling itself contributes directly to the performance of the algorithms and that it cannot be ignored. This paper proposes a class of algorithms that employ novel, on-line optimization techniques to dynamically schedule a set of sporadic real-time tasks. These algorithms explicitly account for the scheduling cost and its effect on the ability to meet deadlines. The paper addresses issues related to real-time task scheduling in the context of a general graph-theoretic framework. Issues related to where and when the task of scheduling is performed are also addressed. We compare two on-line scheduling strategies, namely an interleaving strategy and an overlapping strategy. In the former strategy, scheduling and execution are interleaved in time: each scheduling phase performed by one processor of the system is followed by an execution phase. In the latter strategy, scheduling and execution overlap in time: a specified processor is dedicated to performing scheduling. Results of experiments show that the proposed algorithms perform better than existing approaches, in terms of meeting deadlines and total execution costs, over a large range of workloads.

1. Introduction

Multiprocessor architectures provide a rich computing environment from which a wide range of problem domains, including real-time applications, can benefit.
Babak Hamidzadeh, Yacine Atif
Department of Computer Science, Hong Kong University of Science & Technology, Clear Water Bay, Kowloon, Hong Kong

Efficient and effective scheduling and resource management techniques can play a major role in effectively unleashing the potential power of multiprocessor architectures in solving hard problems. One of the major scheduling problems that has been addressed extensively in the literature [1,2] is that of assigning a set of tasks to different processors in the system, in order to minimize the total response time of the total task set. These techniques try to achieve this objective by focusing on evenly balancing the load among the processors and on reducing communication costs in the system. Many multiprocessor scheduling problems have been recognized as hard optimization problems [3], finding solutions to which may take prohibitively long times. Scheduling algorithms for multiprocessor architectures, including those for real-time applications, can be divided into two main categories of static and dynamic scheduling. In static scheduling, the allocation of resources is determined off-line, prior to the start of task execution. Dynamic scheduling algorithms perform sequencing and resource allocation on-line, in the hope of using more comprehensive and up-to-date knowledge of the tasks and the environment. Tasks can also be divided into two broad categories based on their invocation (arrival) patterns over time, namely periodic and aperiodic [4]. Periodic tasks are those which are invoked exactly once during each period. Aperiodic tasks can be regarded as tasks with arbitrary arrival times whose characteristics are not known a priori. Dynamic scheduling techniques are particularly of interest in scheduling aperiodic tasks on multiprocessor architectures, due to the unknown characteristics of these tasks prior to run time.
In real-time applications, the tasks to be assigned to different processors of a multiprocessor system have time constraints, compliance with which is critical for the correctness of the answers produced by the system. This condition adds to the problem of scheduling ordinary tasks a predictability element which creates a new dimension of complexity in solving such problems. In real-time applications, merely reducing response times is not a sufficient condition for acceptable levels of performance. The predictability requirement in these applications is used to guarantee compliance with the tasks' time constraints or to predict deadline violation prior to task execution. The degree of required predictability and guarantee of deadline compliance varies in different applications. According to these requirements, real-time tasks have been divided into three broad categories of hard, soft and semi-hard real-time [5]. Hard real-time

tasks are those tasks that have to be executed and whose deadlines (and other time constraints) must be met. Failure to execute a hard real-time task while complying with the task's deadline may have catastrophic consequences. Soft real-time tasks are those tasks which can tolerate occasional deadline loss without drastically affecting the overall integrity of the system operations. The semi-hard real-time category [5] represents a realistic class of tasks that are less strict than hard real-time tasks. This class of real-time tasks emphasizes the predictability of complying with a task's deadline. Semi-hard tasks are defined as those tasks which must meet their deadlines if they are accepted and are scheduled to be executed. Note that with this kind of task, the utility of not executing a task at all is much higher than that of executing the task and missing its deadline. Once a task is predicted to miss its deadline, the system can be notified to take contingency actions to prevent the negative consequences of not performing that task. Characterizing this class of tasks has important implications for the design of dynamic scheduling algorithms for aperiodic tasks, which we intend to address in this paper. When scheduling real-time tasks dynamically on the processors of a multiprocessor architecture, it is very important to address several issues about the scheduling algorithm, such as the time slots at which the algorithm is invoked, the duration of each invocation time slot, the distribution of the scheduling task itself among different processors, and the complexity of the algorithm. These major issues in real-time task scheduling have rarely been addressed.
They are important factors because each one by itself and in relation with the other factors, creates several tradeoffs that can directly affect the quality of the answers produced by the scheduler and the degree of predictability and guarantee that the scheduling algorithm can provide in meeting the task time constraints. Below, we discuss these issues and possible approaches to them. We then propose our approach to addressing these issues. Two approaches to the issue regarding the distribution of the scheduling task among different processors can be taken, namely centralized scheduling and distributed scheduling. Centralized scheduling strategies assign a single processor to perform all scheduling operations. With distributed scheduling strategies, on the other hand, more than one processor may be assigned to perform the scheduling operations. Another possible approach in distributed scheduling is to have each processor be responsible for assigning available tasks to itself when it finishes its previously assigned tasks. The issue regarding the times at which the scheduling algorithm is executed addresses problems such as whether or not the scheduling task is continuously in progress in parallel with other task executions. Two approaches to address this issue are interleaved scheduling and execution and overlapped scheduling and execution. In the interleaved scheduling and execution, the scheduling task is performed in repeated cycles of single scheduling periods followed by task execution periods. In overlapped scheduling and execution, the scheduling task is continuous and concurrent with the execution of other tasks. We have chosen the centralized scheduling approach. We expect this approach to lend itself well to overlapped scheduling-and-execution, since it allows the scheduling task to be performed on one processor while the other tasks are performed elsewhere in the system. 
The issue regarding the duration of each scheduling time slot addresses problems such as the frequency at which the scheduled tasks are delivered to working processors and the frequency with which newly-arrived tasks are sought for consideration in the scheduling process. This is an important issue in dynamic scheduling of aperiodic, real-time tasks and has mostly been ignored. The existing approaches concentrate on finding a feasible solution for the entire batch of tasks in the current scheduling period without regard for arriving tasks, for keeping other processors idle, and/or for missing the deadlines of scheduled tasks in the current period, due to long scheduling times. The complexity of the scheduling algorithm addresses the tradeoff between the scheduling cost and the schedule quality in terms of metrics such as deadline compliance. Most existing approaches provide an analysis of their algorithm complexity without investigating the effect of scheduling complexity on the quality of the schedule. Using complex scheduling algorithms provides us with closer-to-optimal solutions and with better predictability in terms of deadline compliance. The more complex the scheduling algorithms are, however, the more overhead cost they will incur and the more system resources have to be allocated to these algorithms which could, otherwise, be used for task execution. Our approach to the problem of dynamically scheduling aperiodic real-time tasks is that of overlapped scheduling and execution using a centralized scheduling technique. Our algorithms continuously perform repeated periods of scheduling, the duration of each of which is directly controlled by the algorithm, in order to account for the tradeoff between scheduling time and scheduling quality and

in order to achieve good response times. A centralized scheduling technique allows us to provide the predictability and the deadline compliance guarantee that scheduling real-time tasks requires in a multiprocessor architecture. We evaluate our techniques via comparison of their performance in complying with deadlines and in minimizing total response times with those of existing dynamic scheduling techniques similar to those proposed for the Spring kernel [6]. In the experiments, we investigate the tradeoff between the loss of resources due to dedicating a processor to scheduling in centralized scheduling and the gain in schedule quality in terms of guaranteed deadline compliance and in terms of reduced response times. We also compare the performance of the overlapped scheduling and execution paradigm with that of the interleaved scheduling and execution paradigm. The results of our experiments show significant performance improvements in comparison with the other existing techniques. The remainder of the paper is organized as follows. Section 2 contains a specification of the task model and the statement of the problem. Section 3 provides a description of our algorithms and their characteristics. Section 4 contains the results of our performance-comparison experiments and a discussion of those results. Section 5 concludes the paper with a summary of the results and plans for our future work.

2. Task Model and Problem Statement

In this paper, we address the problem of scheduling a set T of n aperiodic, non-preemptable, semi-hard real-time tasks with earliest start times and deadlines on a Uniform Memory-Access (UMA) multiprocessor architecture. In a UMA architecture [7], an interconnection network is situated between the processors and the global shared memory, as shown in figure 1.
In this architecture, each processor incurs about the same delay when referencing any location in the shared memory. The interconnection network can be a common bus, a crossbar switch or a multistage network. This type of architecture is suitable for, and is commonly used in, real-time applications, since it facilitates synchronization and communication among processors through the use of shared global variables in the common memory. Each task Ti in T is characterized by a processing time Pi, an arrival time Ai, an earliest start time (ready time) Si and a deadline Di. The tasks' earliest start times represent delays that may be encountered in acquiring access to resources such as the interconnection network in our model. Our objective is to maximize the number of tasks whose deadlines are met, once they are accepted for execution, and to minimize the tasks' total response times.

Figure 1. Uniform Memory Access (UMA) architecture

3. Continuous On-Line Scheduling (COLS)

In a system employing COLS, a dedicated processor performs continuous scheduling periods which overlap with execution on the remaining working processors. The input to each scheduling period j is the set of available tasks (i.e. Batch(j)) in the system at the time when scheduling starts. The result of each scheduling period j is a feasible partial or complete schedule which is delivered for execution by the processing elements of the system. A partial schedule is one that contains only a subset of the tasks in Batch(j). A complete schedule is a sequence containing all the tasks in Batch(j). In the remainder of this section we provide a graph-theoretic framework for our scheduling algorithms.
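The task model can be sketched as a small record; the class and method names below are our own illustrative choices, not from the paper, and slack is measured relative to a reference time t, consistent with the slack definition used later in Section 3.

```python
from dataclasses import dataclass

@dataclass
class Task:
    """Aperiodic, non-preemptable, semi-hard real-time task."""
    tid: int
    P: float  # processing time
    A: float  # arrival time
    S: float  # earliest start (ready) time, S >= A
    D: float  # deadline

    def slack(self, t: float) -> float:
        # Maximum time by which the start of execution can be delayed
        # past reference time t without missing the deadline.
        return self.D - self.P - max(self.S, t)

t1 = Task(tid=1, P=10, A=0, S=2, D=40)
print(t1.slack(0))  # → 28
```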
We then provide a specification of the algorithm and its mechanisms for selecting appropriate parameter values.

3.1. Graph-Theoretic Scheduling

Real-time task scheduling on a multiprocessor architecture can be regarded as the problem of searching for a permutation of a set of given tasks T and their assignments to the processors in the system such that, once executed, they are guaranteed to meet their associated deadlines. Schedules all of whose tasks meet their deadlines when they are executed are referred to as feasible schedules. The search for a feasible schedule is performed on a certain representation of the task space. Graphs are a common structure for representing the task space. An example [8,9] of the space of partial schedules of a task set T is shown in figure 2 for a set of four tasks T = {T1, T2, T3, T4} to be scheduled on a UMA architecture P with two working processors PE1 and PE2. The nodes {vi} ∈ V in the task space G(V,E) of this problem represent partial schedules of T on P, and the edges (vi, vj) ∈ E represent transformation functions that extend the partial schedule at one end of the edge by an assignment (Ti, PEk), where Ti is the next task to be scheduled and PEk is the least loaded processor according to the current partial schedule. Thus, the feasible (complete) schedules, if they exist, will be at the leaf

nodes of such a tree. The search space has two dimensions: the processors dimension in depth and the tasks dimension in breadth. Different levels of the search tree represent the target processors, whereas the branches at each level represent the permutations among the remaining tasks to be scheduled.

Figure 2. Example search space for scheduling T = {T1, T2, T3, T4} on P = {PE1, PE2}

The search starts at the root node, representing an empty schedule. In each iteration, the search continues by trying to extend the current schedule with one more task-to-processor assignment. This is done by choosing one of the successors of the current node (representing the current schedule) to be the new schedule. The choice of the next node to be expanded among the list of candidate nodes is an important decision that can significantly affect the performance of the scheduling algorithm. Expansion of a node in the graph is defined as the process of generating the successors of that node in the graph. The successors of a node vi are the set of all the k nodes {v1, ..., vk} which are connected to that node via direct links (vi, v1), ..., (vi, vk). The graph is generated based on information provided by the node to be expanded. When a dead end is reached (i.e. an infeasible schedule is found), the search process backtracks to the parent of the current node. Backtracking is done by discarding the current partial schedule and extending it by a different task, but on the same processor, since that processor is still the least-loaded one. Based on the size of the task space and the availability of information about the complete task set T, the complete search space may or may not be generated and stored prior to the start of the scheduling process.
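The expansion rule just described (extend a partial schedule by assigning one unscheduled task to the least-loaded processor) can be sketched as follows; the node representation and all names here are our own illustration, not the paper's code.

```python
# A node is a partial schedule: (assignments, loads), where assignments maps
# task id -> processor index and loads[k] is the accumulated work on PE(k+1).
def least_loaded(loads):
    # Index of the processor with the smallest accumulated load.
    return min(range(len(loads)), key=lambda k: loads[k])

def expand(node, tasks):
    """Generate the successors of a partial-schedule node.

    tasks maps task id -> processing time. Each successor extends the
    schedule by one (task, least-loaded processor) assignment.
    """
    assignments, loads = node
    k = least_loaded(loads)
    children = []
    for tid, p_time in tasks.items():
        if tid in assignments:
            continue  # already scheduled in this partial schedule
        new_assignments = {**assignments, tid: k}
        new_loads = list(loads)
        new_loads[k] += p_time
        children.append((new_assignments, new_loads))
    return children

root = ({}, [0.0, 0.0])  # empty schedule on two processors PE1, PE2
children = expand(root, {1: 5.0, 2: 8.0, 3: 3.0, 4: 6.0})
print(len(children))  # → 4 (one successor per unscheduled task)
```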
In the case of large task spaces, and in the case where the complete set of tasks is not known, such as in aperiodic task models, it is more practical to generate portions of the task space on-line, as needed by the scheduling algorithm. In a dynamic scheduling algorithm, issues regarding what portion of the solution space should be generated and how large that portion should be are important, since they are linked to the scheduling cost and schedule quality. In the following sub-sections, we describe COLS's policy regarding allocation of scheduling time and generation (examination) of portions of the task space.

3.2. Scheduling Procedure

To perform the tasks by their deadlines, COLS finds feasible schedules for a set of tasks during a scheduling period. It then delivers those schedules to the assigned processors for execution. COLS uses a novel on-line parameter tuning and prediction technique to determine the duration of a scheduling period. The time allocated to scheduling in this algorithm is self-adjusted based on what is known on-line about the nature of the problem and the problem instance, and also on the predicted load of the processors at the delivery time. The algorithm continually self-adjusts its scheduling time based on parameters such as processor loads, slack, and task arrival rate. The time at which the first processor in the system becomes idle is used to stop the scheduling process, in order to deliver the currently scheduled tasks, so that processor idle times are minimized. Other parameters such as slack and task arrival rate affect the scheduling time such that the larger the slack, the greater is the time allocated to scheduling, and the lower the arrival rate, the greater is the time allocated to scheduling. A pseudo-code of the algorithm is given in the appendix of this paper.
During an iteration of a scheduling period j, a partial feasible schedule of the set of arrived tasks is calculated by removing the most promising node from a list of candidate nodes (i.e. the open list), generating the immediate children of that node (expanding the node), testing the feasibility (see the following sub-sections) of the partial schedule represented by each child node, and adding the feasible child nodes to the open list. If a heuristic exists by which to prioritize the nodes on the open list, the algorithm will use it to sort the new list with the most promising node at the front of the list. A partial schedule implies the possibility of scheduling only a fraction of the arrived tasks, rejecting the remainder of the tasks if their deadlines are predicted to be missed, or postponing their scheduling until the next scheduling period if they are not ready or are not scheduled yet. Each scheduling period is terminated when there are no more partial or complete feasible schedules to examine or when the time allocated to that particular period runs out.
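The open-list iteration above resembles a best-first search. The sketch below uses a smallest-maximum-load heuristic, a simple deadline check, and a node-expansion budget as stand-ins for the paper's heuristic, feasibility test and time allocation, so it is an illustration rather than the COLS implementation.

```python
import heapq

def run_period(tasks, n_procs, time_budget):
    """One best-first scheduling period.

    tasks: task id -> (processing time, deadline).
    Returns the deepest feasible partial (or complete) schedule found
    before the expansion budget runs out.
    """
    # Open-list entries: (heuristic value, tiebreak counter, assignments, loads).
    open_list = [(0.0, 0, {}, [0.0] * n_procs)]
    counter = 0
    best = {}
    expansions = 0
    while open_list and expansions < time_budget:
        _, _, assignments, loads = heapq.heappop(open_list)
        expansions += 1
        if len(assignments) > len(best):
            best = assignments  # deepest feasible partial schedule so far
        k = min(range(n_procs), key=lambda i: loads[i])  # least-loaded processor
        for tid, (p, d) in tasks.items():
            if tid in assignments:
                continue
            finish = loads[k] + p
            if finish > d:
                continue  # infeasible child: deadline would be missed
            counter += 1
            child_loads = list(loads)
            child_loads[k] = finish
            heapq.heappush(open_list,
                           (max(child_loads), counter,
                            {**assignments, tid: k}, child_loads))
    return best

tasks = {1: (5.0, 20.0), 2: (8.0, 20.0), 3: (3.0, 9.0)}
print(sorted(run_period(tasks, n_procs=2, time_budget=50)))  # → [1, 2, 3]
```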

3.3. Allocation of Scheduling Time

The allocated time of scheduling period j is controlled by a stopping criterion as shown in figure 3. In the formula for the stopping criterion of COLS, Batch(j) denotes the set of tasks considered in scheduling period j. Slack(Tl) is defined as the maximum time by which the start of Tl's execution can be delayed from the beginning of scheduling period j without missing its deadline Dl. Load(PEk) in the formula denotes the load on the processing element PEk at the beginning of the scheduling period, and λ denotes the task arrival rate. Ts(j) represents the elapsed scheduling time and may depend on the number of nodes expanded during the jth period and the processing speed of the processor on which the scheduling algorithm is performed.

Ts(j) ≤ Min { MinSlack − MinLoad, MinLoad, k/λ },
where MinSlack = Min { Slack(Tl) | Tl ∈ Batch(j) } and MinLoad = Min { Load(PEk) }

Figure 3. Stopping criterion of COLS

The MinSlack term in figure 3 is included in order to limit the amount of time allocated to scheduling period j so that none of the deadlines of tasks in the current batch are violated due to scheduling cost. This term alone, however, ignores the waiting time on a processing element's ready queue, which may delay the execution of the schedule upon completion of the scheduling time. The MinLoad term was thus added to the criterion to account for possible queuing delays. The k/λ term in the stopping criterion is included in order to stop a scheduling period early under bursty arrivals, so that incoming tasks are accounted for soon after their arrivals. Under low arrival rates, the stopping criterion will allow longer scheduling periods to optimize the tasks in the current batch and to allow a reasonable number of tasks to arrive to form the next batch.
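The self-adjusting time budget can be illustrated as below. The way the three caps are combined follows our reading of figure 3 and is an assumption, as are the function and parameter names.

```python
# Illustrative scheduling-time budget: the period is capped by the tightest
# slack in the batch net of queue wait (deadline protection), by the time at
# which the first processor runs out of work (idle-time minimization), and
# by k/lambda (responsiveness to bursty arrivals). This is a reading of the
# criterion, not a literal transcription of the paper's formula.
def period_budget(slacks, loads, k, arrival_rate):
    min_slack = min(slacks)  # Min[Slack(Tl) | Tl in Batch(j)]
    min_load = min(loads)    # earliest time a processor would go idle
    return min(min_slack - min_load,  # deadline protection net of queue wait
               min_load,              # stop when the first processor would idle
               k / arrival_rate)      # stay responsive under bursty arrivals

print(period_budget(slacks=[30.0, 12.0], loads=[6.0, 9.0], k=5, arrival_rate=0.5))  # → 6.0
```

Note how a low arrival rate inflates the k/λ cap, allowing longer optimization, while a nearly idle processor cuts the period short, matching the behavior described in Section 3.2.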
Note that the primary factor in designing the stopping criterion of COLS is to maximize the number of tasks whose deadlines are met, by honoring their slacks and by maximizing the utilization of processing elements via minimization of their idle times. At the end of each scheduling period, the partial feasible schedule found during that period is delivered to the ready queues of the working processors for execution. If the period is terminated due to failure to extend a feasible partial schedule (say F) further to include more tasks, F is delivered for execution as the outcome of scheduling period j.

3.4. Feasibility Test

COLS predicts deadline violation of tasks based on a feasibility test that takes into account the scheduling time of a period, as well as the current time, deadline, earliest start time and processing time of tasks. Accounting for the scheduling time in the feasibility test ensures that no task will miss its deadline due to the use of resources for scheduling. The test is designed to make sure that a task is feasible at the end of the scheduling period, as well as at the time at which the feasibility test is performed.

IF (tc + Rem-Ts(j) + Start(l, PEk) < Sl) THEN F' is infeasible
ELSE IF (tc + Rem-Ts(j) + End(l, PEk) ≤ Dl) THEN F' is feasible
ELSE F' is infeasible

Figure 4. Feasibility test of COLS

The test for adding a task Tl to the current feasible partial schedule F to obtain partial schedule F' in scheduling period j is performed as shown in figure 4. Note that, according to the specified feasibility test, we mark tasks whose earliest start times are later than their scheduled start time as infeasible. The scheduling of these tasks is postponed until later. This is a choice we selected over scheduling these tasks by introducing delays into the schedule, to execute them later when their earliest start times are honored.
In the test, tc denotes the current time, Rem-Ts(j) denotes the remaining time of scheduling period j, Start(l, PEk) denotes the scheduled start time of task Tl relative to other tasks in F on processor PEk, and End(l, PEk) denotes the scheduled finish time of task Tl relative to other tasks in F on processor PEk.

4. Experimental Evaluation

In this section, we evaluate the performance of the COLS algorithm and discuss the results of our comparison experiments with an algorithm similar to that designed for the Spring kernel [6]. The experiments were designed to evaluate the deadline compliance capability of the candidate algorithms and their ability to reduce total execution cost under different parameter configurations. The COLS algorithm was explained in previous sections. In the following, we provide a brief description of Spring's limited-backtracking algorithms. These algorithms were selected for comparison with the
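The test of figure 4 translates almost directly into code; the parameter names follow the paper's notation, while the function itself is our illustrative rendering.

```python
# Feasibility test in the spirit of figure 4: a task is rejected if its
# scheduled start would precede its earliest start time, and accepted only
# if it still finishes by its deadline after the remaining scheduling time
# of the period is charged.
def feasible(t_c, rem_ts, start_rel, end_rel, S_l, D_l):
    if t_c + rem_ts + start_rel < S_l:
        return False  # scheduled start earlier than earliest start time Sl
    if t_c + rem_ts + end_rel <= D_l:
        return True   # finishes by the deadline even after scheduling cost
    return False      # deadline Dl would be missed

print(feasible(t_c=10, rem_ts=5, start_rel=0, end_rel=8, S_l=12, D_l=30))  # → True
```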

COLS algorithms because they, too, are based on a graph-theoretic framework and because they are one of the few existing techniques which attempt an on-line optimization approach for solving hard scheduling problems involving real-time aperiodic tasks. We were also interested in investigating the cost of dedicating one processor to performing the on-line scheduling in an overlapping mode, compared to the interleaving mode of the limited-backtracking algorithms. In the other sub-sections, we describe the design issues involved in our experiments. We then provide the results of our experiments and discuss their possible interpretations.

4.1. Limited-Backtracking Algorithms of Spring

The limited-backtracking algorithms are graph-based algorithms which attempt to dynamically schedule a set of incoming tasks by exploring the solution space for an appropriate sequence containing the complete set of tasks in a batch. The set of tasks in one batch is determined by the set of tasks that have arrived into the system prior to the start of a scheduling period. Feasibility tests are designed to detect an infeasible sequence of tasks. Once an infeasible node is reached, the subtree below that node is pruned, and the algorithm has the option of backtracking out of that node to explore other options, or of stopping and announcing failure. By announcing failure, the algorithm rejects the entire task set in a batch. In order to keep the scheduling cost low, the algorithms employ a limited backtracking technique, which is as follows. Assuming no backtracking, the algorithm follows a single path in the solution space, attempting to reach a leaf node that schedules all tasks feasibly. If such a leaf node is reached, the algorithm announces success and delivers the tasks for execution.
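A minimal rendering of the limited-backtracking idea follows; the deadline test, the least-loaded placement rule and the global backtrack budget are our simplifying assumptions for illustration, not Spring's exact mechanics.

```python
def limited_backtracking(tasks, n_procs, bt_level):
    """Follow a single path of task-to-processor assignments; on an
    infeasible extension, back up, but only up to bt_level times in total
    before rejecting the whole batch.

    tasks: list of (processing time, deadline).
    Returns a list of (p, d, processor) assignments, or None on failure.
    """
    backtracks = 0

    def extend(remaining, loads, schedule):
        nonlocal backtracks
        if not remaining:
            return schedule  # feasible leaf: all tasks scheduled
        k = min(range(n_procs), key=lambda i: loads[i])  # least loaded
        for i, (p, d) in enumerate(remaining):
            if loads[k] + p > d:
                continue  # pruned: this extension misses its deadline
            new_loads = list(loads)
            new_loads[k] += p
            result = extend(remaining[:i] + remaining[i + 1:],
                            new_loads, schedule + [(p, d, k)])
            if result is not None:
                return result
            if backtracks >= bt_level:
                return None  # backtracking budget exhausted: reject batch
            backtracks += 1
        return None

    return extend(list(tasks), [0.0] * n_procs, [])

print(limited_backtracking([(5, 6), (5, 6)], n_procs=2, bt_level=2) is not None)  # → True
```

With `bt_level=0` this degenerates to the single-path case described above; raising the level gives the algorithm the additional chances discussed next.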
If, on the other hand, the single examined path is deemed infeasible, the algorithm discards all the tasks in the current batch and attempts to schedule the tasks in a new batch. With one level of backtracking, the algorithm will have one more chance to explore other schedules, and so on. These algorithms ignore the effect of scheduling costs on schedule quality and on the delivery of feasible schedules. Some of the tasks scheduled by Spring may have missed their deadlines by the time they are submitted for execution. The level of backtracking in these algorithms is a fixed parameter. The algorithms also ignore the option of accepting partial feasible schedules, on which the scheduling algorithm has already expended resources. For the purpose of our experiments, we implemented an interleaved scheduling and execution version of the limited-backtracking algorithms. In this scheme, a processor PE1 performs scheduling of a batch of tasks as explained in the previous paragraphs. As part of this scheduling period, the scheduling processor (PE1) assigns tasks to all processors, including itself. Once the scheduling of the batch is completed, the task assignments are delivered to the processors. The processors, including PE1, will then execute the tasks that are assigned to them. PE1 will start the next scheduling period once it has executed all the tasks that it assigned to itself during scheduling period j.

4.2. Experiment Design

In the experiments, a Poisson job arrival process was used to simulate a natural sequence of aperiodic task arrivals. The time window t, within which arrivals are observed, was set to 200 time units. The arrival rate λ ranged from 0.5 to 5. We modeled the delay in obtaining access to the interconnection network for executing task Ti by that task's earliest start time Si. The Si's are assigned a value, selected with uniform probability, from the interval (Ai, Smax), where Smax = Ai × M. M is a parameter used to model the degree to which the Si's and Ai's are set apart.
We chose 3 as the value of this parameter for our experiments. The processing times Pi of tasks Ti are uniformly distributed within the interval between 1 and 50 time units. Deadlines Di are uniformly distributed in the interval (Endi, Dmax), where Endi is the finish time of Ti assuming it starts at Si (i.e. Endi = Si + Pi), and Dmax is the maximum value a deadline can have, calculated as Dmax = Endi × SF. SF is a parameter that controls the degree of laxity in task deadlines and ranges from 1 to 10 in our experiments. Larger SF values represent larger slack, whereas small SF values represent tight deadlines. Note that as SF becomes larger, it will be possible to find feasible schedules with less scheduling effort. Our target architecture is a UMA multiprocessor model with 10 processing elements. Note that for COLS, 9 out of the 10 processing elements actually perform tasks, whereas in the limited-backtracking algorithms, all 10 processing elements are involved in task execution. The results of our experiments thus reflect the performance of COLS with 9 working processors versus the performance of the limited-backtracking algorithms with 10 working processors. The metrics of performance in our experiments were chosen to be deadline compliance (hit ratio) and total execution cost. Deadline compliance or hit ratio measures the percentage of tasks which have completed their execution by their deadline. The execution cost measures the total execution time spent on scheduling and executing all the tasks in a run. By this metric, we wanted to study both the load-balancing performance and the effect of scheduling cost. In both algorithms, we measure the scheduling cost as the logical time spent in generating the nodes of the tree representing the solution space. Since the time to generate a node in the tree

depends on the processing speed of the processing element on which the algorithm runs, we defined an expansion-rate parameter ER to have a generic measure of scheduling effort in logical time units. ER defines the number of nodes that a particular processor can generate and examine per unit of time. The value of ER was chosen to be 5 nodes/unit-of-time in the experiments. One of our algorithm parameters is the constant coefficient k of the term k/λ in the stopping criterion of COLS (see figure 3). This parameter implies the expected task batch size for each scheduling period of COLS and was set to 5 for the experiments. Finally, the degree of backtracking in the limited-backtracking algorithms constitutes another parameter of our experiments. In previous experiments [5], we have studied the performance of the limited-backtracking algorithms with different backtracking levels ranging from 1 to 10. The results of this experiment show that there does not exist a fixed value for the level of backtracking which can produce good performance under different parameter configurations. Backtracking levels that are too small or too large are shown to produce poorer results. For the experiments reported here, we chose backtracking level 2 (BT-2), since this level of backtracking (or others close to it) was shown to provide the best performance under a variety of conditions. The remainder of this section discusses the results of the experiments.

4.3. Comparison of Deadline Compliance

In the remainder of this section, we present the results of our experiments in which we compared the performance of COLS and the limited-backtracking algorithm with backtracking level 2 (referred to as BT-2). Figures 5 through 10 show the performance of the two algorithms in terms of the ratio of the task deadlines that were met, using a UMA architecture with 10 processors.
Recall that COLS uses only 9 processors for computation, since one processor is dedicated to performing the on-line scheduling task continuously. As shown in the figures, COLS outperforms BT-2 under all parameter configurations. Figures 5, 6 and 7 show the results as the degree of laxity varies, for arrival-rate values 0.5, 3 and 5, respectively. As is evident from these figures, the gap between COLS and BT-2 widens as the arrival rate increases, so much so that for arrival rates 3 and 5, COLS outperforms BT-2 by a factor of 2 and 4, respectively. Figures 8, 9 and 10 show the results as the arrival rate varies, for slack-factor values 3, 5 and 10. The figures show, again, that COLS outperforms BT-2 by wide margins, particularly for lower degrees of laxity.

[Figure 5: Comparison of deadline compliance (λ=0.5)]
[Figure 6: Comparison of deadline compliance (λ=3)]
[Figure 7: Comparison of deadline compliance (λ=5)]
[Figure 8: Comparison of deadline compliance (SF=3)]

These results demonstrate the potential improvement in predictability and in the ability to guarantee deadline compliance when additional resources are allocated to performing more sophisticated scheduling, and when the time and complexity of the scheduling task are controlled directly.

[Figure 9: Comparison of deadline compliance (SF=5)]
[Figure 10: Comparison of deadline compliance (SF=10)]

4.4. Comparison of Total Execution Costs

Another interesting metric of performance in comparing the two algorithms is the total time each algorithm spends to schedule and execute the whole task set as the number of processors increases. Figures 11 to 15 depict the total execution times of COLS and BT-2 as the number of processors increases. As shown in these figures, COLS has lower response times than BT-2, although it uses 10% less of its processing power for task execution than BT-2. Figures 11, 12 and 13 show the results for arrival-rate values 0.5, 3 and 5 while the degree of laxity is fixed at 5. The figures demonstrate the ability of COLS to reduce response times more effectively than BT-2 as the number of processors increases. COLS seems to saturate its load-balancing performance under higher numbers of processors. This is due to the limited depth to which it is allowed to search, owing to the limited time that is allocated for its scheduling periods.

[Figure 11: Comparison of execution costs (SF=5, λ=0.5)]
[Figure 12: Comparison of execution costs (SF=5, λ=3)]
[Figure 13: Comparison of execution costs (SF=5, λ=5)]
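The cost accounting behind these comparisons can be sketched as follows. The treatment of the overlapping versus interleaving strategies is our illustrative simplification of the paper's two strategies, and the function names are ours; the ER-based scheduling cost follows the experimental setup above:

```python
def scheduling_cost(nodes_generated, expansion_rate=5):
    """Logical scheduling cost: nodes generated divided by the expansion
    rate ER (nodes generated and examined per logical time unit; ER = 5
    in the paper's experiments)."""
    return nodes_generated / expansion_rate

def total_execution_cost(nodes_generated, task_processing_times,
                         expansion_rate=5, overlapped=True):
    """Total cost of a run, as a rough sketch. Under the overlapping
    strategy (COLS) scheduling runs on a dedicated processor in parallel
    with execution, so only the larger of the two dominates; under the
    interleaving strategy they add. This accounting is an illustrative
    simplification, not the paper's exact model."""
    sched = scheduling_cost(nodes_generated, expansion_rate)
    exec_time = sum(task_processing_times)
    return max(sched, exec_time) if overlapped else sched + exec_time
```

The sketch makes the paper's point concrete: when scheduling overlaps execution, its cost is hidden unless it exceeds the execution time of the batch.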

[Figure 14: Comparison of execution costs (SF=3, λ=3)]

Figures 14 and 15 show a progression of performance as the degree of laxity increases. These figures demonstrate the total task execution costs for slack-factor values 3, 5 and 10, respectively, when the arrival rate is fixed at 3. As shown in the figures, COLS consistently achieves lower total execution costs than BT-2.

[Figure 15: Comparison of execution costs (SF=10, λ=3)]

We hope that the results of these experiments demonstrate the potential for improving performance by dedicating resources to sophisticated on-line optimization and by explicitly accounting for the trade-off between scheduling time and the quality of the schedules produced.

5. Conclusion

In this paper, we have proposed a set of dynamic scheduling algorithms, called Continuous On-Line Scheduling (COLS), aimed at scheduling a set of aperiodic tasks with semi-hard deadlines on the processors of a UMA architecture. Semi-hard deadlines characterize a class of real-time tasks in which not executing a task has higher utility than executing the task and missing its deadline. COLS was designed to explicitly address a fundamental trade-off in dynamic scheduling, namely the balance between the time allocated to scheduling and the quality of the resulting schedules. The algorithm performs dynamic scheduling by dedicating a processor to perform scheduling continuously, in parallel with task execution on the other processors. COLS controls scheduling cost to produce high deadline-compliance ratios in the available time. COLS automatically calculates the amount of time to allocate to scheduling in different scheduling periods using a stopping criterion that, once met, interrupts the scheduling process in order to deliver the set of scheduled tasks for execution.
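A minimal sketch of such a stopping rule, using only the k/λ term mentioned in Section 4 (the full COLS criterion combines additional terms such as slack and processor loads, which are not modeled here):

```python
def stopping_criterion(elapsed_sched_time, arrival_rate, k=5):
    """Sketch of a COLS-style stopping rule: end the scheduling phase once
    it has consumed the time in which roughly k new tasks are expected to
    arrive (the k/lambda term; k = 5 in the paper's experiments). With
    Poisson arrivals at rate lambda, k/lambda is the expected time for k
    arrivals, so k also sets the expected batch size of the next phase."""
    return elapsed_sched_time >= k / arrival_rate
```

The rule adapts automatically to load: higher arrival rates shorten scheduling phases, which matches the paper's observation that the criterion tunes phase duration to the workload.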
COLS's stopping criterion is based on a combination of parameters such as slack, task arrival rates, and processor loads. From the results of our experiments, we conclude that very effective stopping criteria can be designed to adapt the duration of scheduling periods automatically in order to obtain high deadline compliance. We also conclude that centralized scheduling in multiprocessor architectures, particularly for real-time applications, has great potential for improving performance and is thus worthy of further investigation. Our experiments show that the deadline compliance of COLS is higher than that of existing algorithms under a wide range of values for a number of parameters. The results also show that COLS outperforms the existing techniques in terms of total execution costs. As part of our future research, we plan to investigate the effect of different heuristics on the performance of our algorithms. We also plan to explore and compare a number of different problem representations and their effect on the deadline compliance of real-time tasks.

Acknowledgments

We would like to give many thanks to Professor Krithi Ramamritham, discussions with whom have guided this research throughout its life cycle. We would also like to thank Professor Shashi Shekhar, Professor Vipin Kumar, and Professor Laveen Kanal for their contributions and comments. We are grateful to the referees for reviewing this paper and providing valuable comments. This research has been supported, in part, by a grant from the Research Grants Council of Hong Kong (Grant No. DAG 94/95EG29).

References

1. T. L. Casavant and J. G. Kuhl, "A Taxonomy of Scheduling in General-Purpose Distributed Computing Systems," IEEE Transactions on Software Engineering, vol. 14, no. 2, pp. 141-154, February 1988.

2. M. G. Norman and P. Thanisch, "Models of Machines and Computation for Mapping in Multicomputers," ACM Computing Surveys, vol. 25, no. 3, September 1993.
3. M. R. Garey and D. S. Johnson, "Complexity Results for Multiprocessor Scheduling Under Resource Constraints," SIAM Journal on Computing, 1975.
4. A. Burns, "Scheduling Hard Real-Time Systems: A Review," Software Engineering Journal, vol. 6, no. 3, May 1991.
5. B. Hamidzadeh and Y. Atif, "Deadline Compliance of Aperiodic Semi-Hard Real-Time Tasks," submitted to ACM SIGMETRICS.
6. J. A. Stankovic and K. Ramamritham, "The Design of the Spring Kernel," Proceedings of the Real-Time Systems Symposium, IEEE, December 1987.
7. K. Hwang, Advanced Computer Architecture: Parallelism, Scalability, Programmability, McGraw-Hill, 1993.
8. W. Zhao and K. Ramamritham, "Simple and Integrated Heuristic Algorithms for Scheduling Tasks with Time and Resource Constraints," Journal of Systems and Software, 1987.
9. W. Zhao, K. Ramamritham, and J. A. Stankovic, "Preemptive Scheduling Under Time and Resource Constraints," IEEE Transactions on Computers, August 1987.

Appendix: Pseudo-code for COLS

PROCEDURE COLS (start-node);
VAR
    queue, succ-list : queue-of-nodes;
    x, current-node, new-start : node;
BEGIN
    (* Search procedure *)
    current-node := head(queue);
    delete(current-node, queue);
    succ-list := successors(current-node);
    FOR each x IN succ-list DO
        IF feasible(x) THEN
            insert(x, queue);
    (* End of the search procedure *)

    (* head(queue) contains the complete/partial schedule.
       If a dead end is reached, i.e. empty(queue) is true, then
       current-node contains the last feasible partial schedule. *)

    (* Deliberation procedure *)
    IF leaf(head(queue)) THEN
        assign-complete-schedule(head(queue))
    ELSE IF stopping-criterion THEN
        assign-partial-schedule(head(queue))
    ELSE
        assign-partial-schedule(current-node);
    (* End of the deliberation procedure *)

    (* Prepare the next scheduling phase by allocating scheduling time,
       building the next batch of tasks to be scheduled and predicting the
       processor loads once the previous schedule is assigned. *)
    SC := adjust-scheduling-time;
    current-task-set := Remaining-Task-Set ∪ Arrived-Task-Set;
    current-load := predict-load(all-processors);
    new-start := create-node(current-task-set, current-load);

    (* Resume scheduling of current-task-set according to the
       predicted processor loads. *)
    WHILE NOT (leaf(head(queue)) OR stopping-criterion OR empty(queue)) DO
        COLS(new-start);
END.
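The search and deliberation loop of the appendix can be rendered executably as follows. This is our translation, not the paper's code: `successors`, `feasible`, `is_leaf` and `stop` stand for the caller-supplied functions of the paper's generic framework, and the batch-preparation steps (load prediction, scheduling-time adjustment) are omitted:

```python
from collections import deque

def cols_search(root, successors, feasible, is_leaf, stop):
    """Sketch of the COLS search/deliberation loop: repeatedly expand the
    node at the head of the queue, keeping only feasible successors.
    Return (schedule, complete): a complete schedule when a leaf reaches
    the head of the queue, or the best partial schedule found when the
    stopping criterion fires or a dead end (empty queue) is reached."""
    queue = deque([root])
    current = root
    while queue:
        if is_leaf(queue[0]):
            return queue[0], True      # complete schedule delivered
        if stop():
            return queue[0], False     # stopping criterion: deliver partial
        current = queue.popleft()      # expand the head node
        for node in successors(current):
            if feasible(node):
                queue.append(node)
    return current, False              # dead end: last feasible partial
```

On a toy two-level binary tree (successors append 0 or 1, every node feasible, leaves at depth 2), the loop returns the first complete schedule reached in breadth-first order; with a stopping criterion that fires immediately, it delivers the root as a partial schedule instead.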


More information

Greedy Algorithms CHAPTER 16

Greedy Algorithms CHAPTER 16 CHAPTER 16 Greedy Algorithms In dynamic programming, the optimal solution is described in a recursive manner, and then is computed ``bottom up''. Dynamic programming is a powerful technique, but it often

More information

3 INTEGER LINEAR PROGRAMMING

3 INTEGER LINEAR PROGRAMMING 3 INTEGER LINEAR PROGRAMMING PROBLEM DEFINITION Integer linear programming problem (ILP) of the decision variables x 1,..,x n : (ILP) subject to minimize c x j j n j= 1 a ij x j x j 0 x j integer n j=

More information

Backtracking. Chapter 5

Backtracking. Chapter 5 1 Backtracking Chapter 5 2 Objectives Describe the backtrack programming technique Determine when the backtracking technique is an appropriate approach to solving a problem Define a state space tree for

More information

: Principles of Automated Reasoning and Decision Making Midterm

: Principles of Automated Reasoning and Decision Making Midterm 16.410-13: Principles of Automated Reasoning and Decision Making Midterm October 20 th, 2003 Name E-mail Note: Budget your time wisely. Some parts of this quiz could take you much longer than others. Move

More information

Call Admission Control in IP networks with QoS support

Call Admission Control in IP networks with QoS support Call Admission Control in IP networks with QoS support Susana Sargento, Rui Valadas and Edward Knightly Instituto de Telecomunicações, Universidade de Aveiro, P-3810 Aveiro, Portugal ECE Department, Rice

More information

CS3733: Operating Systems

CS3733: Operating Systems CS3733: Operating Systems Topics: Process (CPU) Scheduling (SGG 5.1-5.3, 6.7 and web notes) Instructor: Dr. Dakai Zhu 1 Updates and Q&A Homework-02: late submission allowed until Friday!! Submit on Blackboard

More information

In examining performance Interested in several things Exact times if computable Bounded times if exact not computable Can be measured

In examining performance Interested in several things Exact times if computable Bounded times if exact not computable Can be measured System Performance Analysis Introduction Performance Means many things to many people Important in any design Critical in real time systems 1 ns can mean the difference between system Doing job expected

More information

A Note on Scheduling Parallel Unit Jobs on Hypercubes

A Note on Scheduling Parallel Unit Jobs on Hypercubes A Note on Scheduling Parallel Unit Jobs on Hypercubes Ondřej Zajíček Abstract We study the problem of scheduling independent unit-time parallel jobs on hypercubes. A parallel job has to be scheduled between

More information

Course Syllabus. Operating Systems

Course Syllabus. Operating Systems Course Syllabus. Introduction - History; Views; Concepts; Structure 2. Process Management - Processes; State + Resources; Threads; Unix implementation of Processes 3. Scheduling Paradigms; Unix; Modeling

More information

Resource CoAllocation for Scheduling Tasks with Dependencies, in Grid

Resource CoAllocation for Scheduling Tasks with Dependencies, in Grid Resource CoAllocation for Scheduling Tasks with Dependencies, in Grid Diana Moise 1,2, Izabela Moise 1,2, Florin Pop 1, Valentin Cristea 1 1 University Politehnica of Bucharest, Romania 2 INRIA/IRISA,

More information

Aperiodic Task Scheduling

Aperiodic Task Scheduling Aperiodic Task Scheduling Radek Pelánek Preemptive Scheduling: The Problem 1 processor arbitrary arrival times of tasks preemption performance measure: maximum lateness no resources, no precedence constraints

More information

Constraint Satisfaction Problems

Constraint Satisfaction Problems Constraint Satisfaction Problems Search and Lookahead Bernhard Nebel, Julien Hué, and Stefan Wölfl Albert-Ludwigs-Universität Freiburg June 4/6, 2012 Nebel, Hué and Wölfl (Universität Freiburg) Constraint

More information

Comparative Study of blocking mechanisms for Packet Switched Omega Networks

Comparative Study of blocking mechanisms for Packet Switched Omega Networks Proceedings of the 6th WSEAS Int. Conf. on Electronics, Hardware, Wireless and Optical Communications, Corfu Island, Greece, February 16-19, 2007 18 Comparative Study of blocking mechanisms for Packet

More information

A Synchronization Algorithm for Distributed Systems

A Synchronization Algorithm for Distributed Systems A Synchronization Algorithm for Distributed Systems Tai-Kuo Woo Department of Computer Science Jacksonville University Jacksonville, FL 32211 Kenneth Block Department of Computer and Information Science

More information

Joint Entity Resolution

Joint Entity Resolution Joint Entity Resolution Steven Euijong Whang, Hector Garcia-Molina Computer Science Department, Stanford University 353 Serra Mall, Stanford, CA 94305, USA {swhang, hector}@cs.stanford.edu No Institute

More information

Scheduling and Mapping in an Incremental Design Methodology for Distributed Real-Time Embedded Systems

Scheduling and Mapping in an Incremental Design Methodology for Distributed Real-Time Embedded Systems (1) TVLSI-00246-2002.R1 Scheduling and Mapping in an Incremental Design Methodology for Distributed Real-Time Embedded Systems Paul Pop, Petru Eles, Zebo Peng, Traian Pop Dept. of Computer and Information

More information

What s An OS? Cyclic Executive. Interrupts. Advantages Simple implementation Low overhead Very predictable

What s An OS? Cyclic Executive. Interrupts. Advantages Simple implementation Low overhead Very predictable What s An OS? Provides environment for executing programs Process abstraction for multitasking/concurrency scheduling Hardware abstraction layer (device drivers) File systems Communication Do we need an

More information

Algorithms Dr. Haim Levkowitz

Algorithms Dr. Haim Levkowitz 91.503 Algorithms Dr. Haim Levkowitz Fall 2007 Lecture 4 Tuesday, 25 Sep 2007 Design Patterns for Optimization Problems Greedy Algorithms 1 Greedy Algorithms 2 What is Greedy Algorithm? Similar to dynamic

More information

Pull based Migration of Real-Time Tasks in Multi-Core Processors

Pull based Migration of Real-Time Tasks in Multi-Core Processors Pull based Migration of Real-Time Tasks in Multi-Core Processors 1. Problem Description The complexity of uniprocessor design attempting to extract instruction level parallelism has motivated the computer

More information

Bi-Objective Optimization for Scheduling in Heterogeneous Computing Systems

Bi-Objective Optimization for Scheduling in Heterogeneous Computing Systems Bi-Objective Optimization for Scheduling in Heterogeneous Computing Systems Tony Maciejewski, Kyle Tarplee, Ryan Friese, and Howard Jay Siegel Department of Electrical and Computer Engineering Colorado

More information

Scheduling of Parallel Real-time DAG Tasks on Multiprocessor Systems

Scheduling of Parallel Real-time DAG Tasks on Multiprocessor Systems Scheduling of Parallel Real-time DAG Tasks on Multiprocessor Systems Laurent George ESIEE Paris Journée du groupe de travail OVSTR - 23 mai 2016 Université Paris-Est, LRT Team at LIGM 1/53 CONTEXT: REAL-TIME

More information

Free upgrade of computer power with Java, web-base technology and parallel computing

Free upgrade of computer power with Java, web-base technology and parallel computing Free upgrade of computer power with Java, web-base technology and parallel computing Alfred Loo\ Y.K. Choi * and Chris Bloor* *Lingnan University, Hong Kong *City University of Hong Kong, Hong Kong ^University

More information

2002 Journal of Software

2002 Journal of Software 0-9825/2002/13(01)0051-08 2002 Journal of Software Vol13, No1,, (,0) E-mail qiaoyingbj@hotmailcom http//ieliscasaccn,,,,,,, ; ; ; TP301 A,,,,,Mok [1],,, Krithi Ramamritham [2],,,,GManimaran [3] Anita Mittal

More information

A Survey on Grid Scheduling Systems

A Survey on Grid Scheduling Systems Technical Report Report #: SJTU_CS_TR_200309001 A Survey on Grid Scheduling Systems Yanmin Zhu and Lionel M Ni Cite this paper: Yanmin Zhu, Lionel M. Ni, A Survey on Grid Scheduling Systems, Technical

More information

Maintaining Mutual Consistency for Cached Web Objects

Maintaining Mutual Consistency for Cached Web Objects Maintaining Mutual Consistency for Cached Web Objects Bhuvan Urgaonkar, Anoop George Ninan, Mohammad Salimullah Raunak Prashant Shenoy and Krithi Ramamritham Department of Computer Science, University

More information

Local-Deadline Assignment for Distributed Real-Time Systems

Local-Deadline Assignment for Distributed Real-Time Systems Local-Deadline Assignment for Distributed Real-Time Systems Shengyan Hong, Thidapat Chantem, Member, IEEE, and Xiaobo Sharon Hu, Senior Member, IEEE Abstract In a distributed real-time system (DRTS), jobs

More information

V. Solving Integer Linear Programs

V. Solving Integer Linear Programs Optimization Methods Draft of August 26, 2005 V. Solving Integer Linear Programs Robert Fourer Department of Industrial Engineering and Management Sciences Northwestern University Evanston, Illinois 60208-3119,

More information

OVERHEADS ENHANCEMENT IN MUTIPLE PROCESSING SYSTEMS BY ANURAG REDDY GANKAT KARTHIK REDDY AKKATI

OVERHEADS ENHANCEMENT IN MUTIPLE PROCESSING SYSTEMS BY ANURAG REDDY GANKAT KARTHIK REDDY AKKATI CMPE 655- MULTIPLE PROCESSOR SYSTEMS OVERHEADS ENHANCEMENT IN MUTIPLE PROCESSING SYSTEMS BY ANURAG REDDY GANKAT KARTHIK REDDY AKKATI What is MULTI PROCESSING?? Multiprocessing is the coordinated processing

More information

8: Scheduling. Scheduling. Mark Handley

8: Scheduling. Scheduling. Mark Handley 8: Scheduling Mark Handley Scheduling On a multiprocessing system, more than one process may be available to run. The task of deciding which process to run next is called scheduling, and is performed by

More information

On the Relationship of Server Disk Workloads and Client File Requests

On the Relationship of Server Disk Workloads and Client File Requests On the Relationship of Server Workloads and Client File Requests John R. Heath Department of Computer Science University of Southern Maine Portland, Maine 43 Stephen A.R. Houser University Computing Technologies

More information