Static Multiprocessor Scheduling of Periodic Real-Time Tasks with Precedence Constraints and Communication Costs

Stefan Rönngren and Behrooz A. Shirazi
Department of Computer Science and Engineering
The University of Texas at Arlington
Arlington, Texas

Abstract

The problem of allocating and scheduling real-time tasks, with precedence constraints and communication costs, on a multiprocessor architecture in order to meet the timing constraints is known to be NP-complete. Due to the growing complexity of real-time applications, there is a need to find scheduling methods that can handle large task sets in a reasonable time. Also, scheduling methods should consider precedence and exclusion relations in order to support parallelism within tasks and to resolve mutual exclusion situations. In this paper, four heuristic scheduling algorithms are developed and evaluated. In particular, clustering vs. non-clustering techniques are investigated, with some interesting results.

1. Introduction

Scheduling of real-time tasks onto a multiprocessor architecture has been a topic under investigation for the past decades. As real-time applications become more complex and demanding, the ways researchers model applications and attack the problem of scheduling are changing as well. In the past, much of the research focused on scheduling of simple, independent tasks. However, the increase in application complexity has inspired a need to parallelize even the execution of individual subtasks, and to model more complex intertask relations, such as exclusion relations [16]. The general problem of finding an optimal schedule on multiprocessors is known to be NP-complete. Due to the growing complexity of real-time applications, there is a need for scheduling algorithms that can handle large and complex applications.

Real-time tasks, typically critical periodic tasks whose timing constraints have to be guaranteed to be met in advance, may be scheduled before run time. Such a scheduling strategy is called static or pre-run-time scheduling. When using static scheduling, worst-case timing estimates must typically be used for scheduling decisions. If a task's completion time does not have to be guaranteed in advance, dynamic scheduling can be used [3]. With dynamic scheduling, the task is scheduled at run time, and more accurate estimates of the system state can be used for scheduling decisions. Static and dynamic scheduling can be combined when scheduling tasks for a real-time system: tasks that must be guaranteed in advance can be statically scheduled, while the remaining tasks can be dynamically scheduled, thereby taking advantage of both scheduling strategies.

This paper deals with the problem of static scheduling of periodic tasks, where precedence relations and communication costs between subtasks must be taken into consideration. Some optimal static scheduling algorithms have been presented that can handle limited task sets [16][17]. In real-time scheduling, optimality implies that if there is a feasible solution, then the scheduling algorithm will find it. A general optimal scheduling method is yet to be shown to work for large, complex task sets. When communication between subtasks has to be considered, the problem becomes even more complex. Thus, for large and complex problems, heuristic scheduling techniques seem to be a promising approach for obtaining a schedule within a reasonable time.
A variety of heuristic scheduling methods have been developed, using different ways of modeling the real-time applications [6][7][8][9][10][15]. Heuristics are guidelines that are used by the scheduling algorithm to quickly come up with scheduling decisions. Because the guidelines will not necessarily give the best possible scheduling decisions, heuristic scheduling algorithms may produce suboptimal results. Ramamritham [8] and Xu [16] both consider task sets where individual tasks are further divided into subtasks with precedence relations. With this approach, the scheduler can take advantage of parallelism within tasks.

Xu [16] models real-time applications as sets of tasks with subtasks. An optimal approach is used that can handle moderately large task sets, but it is yet to be shown to work for large and complex applications. Also, communication time between subtasks is ignored. The algorithm by Ramamritham [8] uses heuristics for scheduling in order to handle large task sets within a reasonable time. A distributed system is assumed and communication costs are accounted for.

An ongoing project at The University of Texas at Arlington, named PARSA (PARallel program Scheduling and Assessment environment), deals, among other things, with scheduling of tasks onto multiprocessors with point-to-point interconnection networks [11]. A main feature of PARSA is that accurate communication cost estimates are considered during the scheduling process. The work presented in this paper is a step toward enhancing the existing tool set of PARSA with real-time scheduling capabilities [7][10]. Similar to [8], both communication costs and parallelism within tasks are considered.

In section 2, we present our method of modeling the applications. Section 3 shows how multiple complex tasks are combined for input to the scheduling algorithms. Section 4 presents a Base Algorithm and how it is used as the core of the four proposed scheduling approaches. Section 5 displays and discusses our experimental evaluations of the different scheduling algorithms, and section 6 gives a time complexity analysis of these algorithms. Finally, section 7 gives the conclusion and suggests some future directions for this work.

2. Application Model

The assumption is that the application to be scheduled onto the target architecture consists of a set of independent tasks. Each task is further divided into subtasks which have precedence relations to indicate dependencies and order of execution. Wherever a precedence relation exists, there is also a corresponding communication cost that must be accounted for between subtasks that are scheduled to execute on different processors. Also, there can exist exclusion relations between subtasks (possibly from different tasks). Exclusion relations can be in time or in space. Time exclusions can be used to model mutual exclusion constraints [16]. By not allowing some subtasks to execute at the same time, predictability is enhanced and costly run-time overhead for ensuring mutual exclusion is avoided. Space exclusions can be used to model cases where some subtasks cannot execute on the same processor. An example would be subtasks that are replicated for fault tolerance: obviously, the different replicas must execute on different processors.

The tasks are represented by Directed Acyclic Graphs (DAGs) where vertices represent subtasks and edges represent precedence relations and data dependencies with corresponding communication costs. Associated with each subtask are its execution cost, its release time, its deadline, and its exclusion relations with other subtasks. Since we are considering periodic tasks, there is also a period associated with each task. The release time of a task, i.e., the time it can potentially begin its execution, is by default the beginning of its period, and its deadline is the end of its period, unless explicitly specified.
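To make the later description concrete, the following is a minimal Python sketch of this application model; the class and field names (Subtask, Task, comm, and so on) are illustrative choices of ours, not part of the paper.

```python
from dataclasses import dataclass, field
from typing import Dict, Set, Tuple

@dataclass
class Subtask:
    name: str
    exec_time: int                       # worst-case execution cost
    release: int = 0                     # defaults to the start of the task's period
    deadline: int = 0                    # defaults to the end of the task's period
    time_excl: Set[str] = field(default_factory=set)   # may not overlap these subtasks in time
    space_excl: Set[str] = field(default_factory=set)  # may not share a processor with these

@dataclass
class Task:
    name: str
    period: int
    subtasks: Dict[str, Subtask]
    # precedence edge (parent, child) -> communication cost incurred when the
    # two subtasks are placed on different processors
    comm: Dict[Tuple[str, str], int]
```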
3. Graph Expansion

In order to schedule tasks with different periods, a DAG is created that represents the different tasks over the Least Common Multiple (LCM) of the task periods [8][16]. Thus, the new expanded graph will contain multiple instances of each task. For a task t and instance i (counting from zero), the release time becomes release = i × period(t), and the deadline becomes deadline = (i + 1) × period(t).

[Fig. 1: Overview of the scheduling procedure. The tasks are expanded into a single graph, which is passed to the scheduler to produce a task execution schedule.]
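A rough sketch of this expansion step, reusing the hypothetical Subtask/Task types above (the lcm helper and the instance-naming scheme are our own; explicitly specified release times and deadlines are ignored for brevity):

```python
from math import gcd
from functools import reduce

def lcm(values):
    """Least common multiple of a list of positive integers."""
    return reduce(lambda a, b: a * b // gcd(a, b), values)

def expand_tasks(tasks):
    """Unroll every task over the LCM of all periods.

    Returns (subtasks, comm): one Subtask per task instance, with release and
    deadline taken from the instance's period window, plus the duplicated
    precedence/communication edges.
    """
    horizon = lcm([t.period for t in tasks])
    subtasks, comm = {}, {}
    for t in tasks:
        for i in range(horizon // t.period):
            for s in t.subtasks.values():
                inst = f"{t.name}.{i}.{s.name}"        # hypothetical instance naming
                subtasks[inst] = Subtask(
                    name=inst,
                    exec_time=s.exec_time,
                    release=i * t.period,              # release = i * period(t)
                    deadline=(i + 1) * t.period,       # deadline = (i + 1) * period(t)
                )
            for (p, c), cost in t.comm.items():
                comm[(f"{t.name}.{i}.{p}", f"{t.name}.{i}.{c}")] = cost
    return subtasks, comm
```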

When a schedule has been produced using the expanded graph as input, it will be repeated with a period of LCM. The procedure of expansion and scheduling is illustrated in Fig. 1.

4. Scheduling Algorithms

The scheduling algorithms presented here are related to the work of Ramamritham [8], which creates a schedule for a distributed system connected by a TDMA bus network, including a schedule for communication on the bus. In PARSA we consider parallel processors with point-to-point interconnection networks, where the number of available processors is not always small. Thus, some modifications to the original algorithm had to be made, and these will be explained as the algorithms are presented. A base algorithm was developed, and four different algorithms emerged, differentiated by whether they used a clustering pre-scheduling step and by which clustering technique was employed.

4.1 Clustering Techniques

Pairwise Clustering: This is the clustering technique proposed in [8]. Here a clustering decision is made by pairwise examination of communicating subtasks. If the ratio of the sum of the execution costs to the communication cost is lower than some threshold value CF, then the subtasks must execute on the same processor. The process of clustering and scheduling is repeated for different values of CF, starting with the maximum execution/communication ratio for the whole DAG (maxcf) plus one. The CF value is then decremented by (maxcf + 1) / 10 each time clustering is done, until CF < 0. Communicating subtasks will thus become less and less likely to be clustered together.

Critical Path Linear Clustering: This clustering technique is a special case of the clustering algorithm proposed by Kim and Browne [5]. Here the critical path of the DAG is clustered, and the operation is repeated for the remaining subtasks until all subtasks are included in some cluster.

4.2 Assigning Subtask Deadlines

As a pre-scheduling step, the scheduling algorithms assign deadlines to the individual subtasks. Initially, as we recall from section 2, the subtasks of a task were assigned the same deadline as the whole task, unless otherwise specified. In order to help in making scheduling decisions during the course of scheduling, we need more accurate values for the individual subtask deadlines. A subtask's deadline is calculated by subtracting the maximum path length from that subtask to an exit subtask (a subtask without children), excluding the subtask's own execution time, from the deadline of the whole task.

When any of the clustering techniques has been used, we know which communicating subtasks are to be scheduled on the same processor, so the corresponding communication cost is set to zero; that communication is thus ignored when calculating exit path lengths. When no clustering is used, we do not know in advance whether two communicating subtasks will be scheduled on the same processor or not. We can now make two choices, optimistic or pessimistic: we can include communication costs when calculating exit path lengths, which gives pessimistic deadlines to the subtasks, or we can ignore communication costs, giving optimistic deadlines. This is the difference between the Pessimistic Algorithm and the Optimistic Algorithm discussed later. Note that the deadlines for the individual subtasks of a task are only for aiding in scheduling decisions. Only the deadlines given in the input graph (typically the deadline of the whole task) are used when checking whether a schedule is feasible or not. The reason for this is that, for example in the Pessimistic Algorithm, individual subtask deadlines can sometimes be violated without the whole task missing its deadline.
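A minimal sketch of this deadline assignment, again using the hypothetical expanded-graph structures from the earlier sketches; include_comm selects the pessimistic (True) or optimistic (False) variant, and intra-cluster edges are assumed to already carry zero communication cost when clustering has been applied:

```python
from functools import lru_cache

def assign_subtask_deadlines(subtasks, comm, include_comm=True):
    """Tighten each subtask deadline by its longest path to an exit subtask.

    deadline(s) = task_deadline(s) - max_path_to_exit(s), where the path length
    counts the execution times of all descendants along the path (not s itself)
    plus, optionally, the communication costs along the path.
    """
    children = {name: [] for name in subtasks}
    for (p, c), cost in comm.items():
        children[p].append((c, cost if include_comm else 0))

    @lru_cache(maxsize=None)
    def exit_path(name):
        # Longest remaining work after subtask `name` finishes.
        kids = children[name]
        if not kids:
            return 0
        return max(cost + subtasks[c].exec_time + exit_path(c) for c, cost in kids)

    for name, s in subtasks.items():
        s.deadline = s.deadline - exit_path(name)
```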
4.3 The Base Algorithm

After clustering is done (if any), the process of scheduling the subtasks begins. A ready list is initialized with the entry nodes (subtasks with no parents) in the DAG. During the scheduling process, the ready list contains the subtasks which can be considered for scheduling, i.e., those whose parents have already been scheduled. The subtasks in the ready list are examined one by one, and the subtask with the highest priority is selected for scheduling. The priorities are determined heuristically as discussed below, but first some definitions are presented:

DSM(t, p): Desirable Starting Moment for subtask t on processor p. This is the time when all parents of subtask t have finished executing and all communication from them can reach processor p; i.e., the earliest time subtask t could potentially start executing.

Load(p, T): At any given time T, Load(p, T) is the finish time of the last subtask scheduled on processor p before time T; i.e., it represents the current load of processor p at time T.

ASM(t, p): Actual Starting Moment for subtask t on processor p. ASM(t, p) = MAX(DSM(t, p), Load(p, T)); i.e., ASM(t, p) is the earliest time that subtask t can actually start executing on processor p.

For each subtask t in the ready list and for each processor p, we calculate the following heuristic priority values to be used when deciding which subtask to schedule next. The priority values are as follows, in order of decreasing importance:

1. ASM(t, p) for subtask t on processor p. The minimum ASM(t, p) is chosen.
2. Laxity(t, p) of subtask t on processor p, where Laxity(t, p) = deadline(t) - execution time(t) - ASM(t, p). Again, the minimum Laxity(t, p) is chosen.
3. Number of children of subtask t. The subtask with the most children is chosen.
4. A processor where the last scheduled subtask has no children in the same cluster is preferred.
5. The processor assignment which would result in the lowest communication cost is preferred.

Heuristic 4 prevents subtasks from different clusters from interfering with each other's execution (cluster merging) when it is not necessary, and heuristic 5 attempts to minimize communication. Only scheduling decisions that do not violate clustering constraints or exclusion constraints are considered. Let RT(t) be the release time of subtask t and let TS be the set of subtasks being considered for scheduling at time T. At any given time T, if RT(t) > Min(ASM(t', p)) over all subtasks t' in TS other than t, then t will no longer be considered for this particular scheduling step; its release time puts it out of competition.

The above heuristic values provide a priority listing among the subtasks eligible for scheduling during each step. This priority list gives the subtask to be chosen from the ready list (the one with the highest priority), the processor on which it should execute, and the starting time for the subtask. After the chosen subtask has been scheduled, its children are checked to see if all their parents have been scheduled; if so, they are moved into the ready list. The process of selecting, scheduling, and moving ready children into the ready list is repeated until the ready list is empty or no scheduling decision can be found that does not violate deadlines, clustering constraints, or exclusion relations.
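The selection step just described might be sketched as follows; this is a simplified illustration reusing the earlier hypothetical structures, with the DSM computation inlined and with heuristics 4 and 5, the clustering/exclusion checks, and the release-time pruning rule either omitted or folded in as noted in the comments:

```python
def select_next(ready_list, procs, subtasks, comm, placement, finish, load):
    """One selection step of the Base Algorithm (simplified sketch).

    placement: already-scheduled subtask -> processor
    finish:    already-scheduled subtask -> finish time
    load:      processor -> finish time of the last subtask scheduled on it
    Returns (subtask, processor, start_time) for the highest-priority choice,
    or None if the ready list is empty.
    """
    def dsm(t, proc):
        # Earliest time all parents of t have finished and their data can reach proc.
        # The subtask's release time is folded in here for simplicity; the paper
        # handles release times through a separate pruning rule.
        ready_at = subtasks[t].release
        for (p, c), cost in comm.items():
            if c == t:
                ready_at = max(ready_at, finish[p] + (0 if placement[p] == proc else cost))
        return ready_at

    def n_children(t):
        return sum(1 for (p, _c) in comm if p == t)

    best, best_key = None, None
    for t in ready_list:
        for proc in procs:
            asm = max(dsm(t, proc), load.get(proc, 0))                   # ASM(t, p)
            laxity = subtasks[t].deadline - subtasks[t].exec_time - asm  # Laxity(t, p)
            # Priorities 1-3: smallest ASM, then smallest laxity, then most children.
            # Heuristics 4-5 and the clustering/exclusion checks are omitted here.
            key = (asm, laxity, -n_children(t))
            if best_key is None or key < best_key:
                best, best_key = (t, proc, asm), key
    return best
```

The outer loop of the Base Algorithm would then call select_next repeatedly, commit each decision by updating placement, finish, and load, and move newly ready children into the ready list.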
In the algorithm proposed by Ramamritham [8], a clock is maintained, and at each scheduling step it is advanced to the minimum of the earliest start time of the subtasks that can be scheduled and the time when the next processor becomes available. A ready list contains the subtasks that can start executing at the given time, sorted in order of increasing latest start time. A mapping of the subtasks in the ready list to the available processors is generated, and if the mapping violates some constraints, another mapping is generated in a systematic fashion (lexicographic order). Such a scheme relies heavily on being able to schedule communications before scheduling the child subtasks. In a point-to-point network we cannot make such an assumption. In particular, the earliest start time for a subtask and the path for its communication depend on where the subtask will execute in relation to its parent subtasks. We also want to be able to schedule communications over possibly more than one communication link. Furthermore, if the number of processors is not small, the number of mappings that might have to be tried before a valid mapping is found grows unreasonably large. The argument that the scheme requires minimal information to be kept for each search point matters only if backtracking is allowed, but backtracking was shown to have minimal effect [8].

The approach proposed here maintains the rule of letting the subtask in the ready list with the earliest possible start time (ASM) and the smallest latest start time (laxity) have the best opportunity to be scheduled. It also allows for scheduling communication at the time of scheduling the child subtasks, by considering communication when calculating the DSM values.

The pairwise clustering technique, and even the Critical Path clustering technique, will sometimes result in conflicting constraints. As an example, suppose subtasks A and B are scheduled on the same processor. They have a common child subtask, C, which we are now trying to schedule. A is clustered with C, but B is not. Therefore, C should be scheduled on the same processor as A, but on a different processor than B - an obvious conflict. This can be taken care of by removing the second clustering constraint, which requires communicating subtasks assigned to different clusters to be scheduled on different processors. An addition to the scheduling algorithm is that, if a feasible scheduling decision cannot be made, all clustering constraints are removed for this particular scheduling step and the step is tried again. This gives the algorithm a second chance to find a feasible scheduling decision at a given scheduling step.
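Looking back at the pairwise clustering rule of Section 4.1, the CF sweep that drives it could be sketched as follows (a rough sketch; try_schedule is a hypothetical stand-in for a full run of the Base Algorithm under the given co-location constraints):

```python
def pairwise_cluster(subtasks, comm, cf):
    """Pairs of communicating subtasks that must share a processor for threshold CF."""
    must_colocate = set()
    for (p, c), cost in comm.items():
        if cost > 0 and (subtasks[p].exec_time + subtasks[c].exec_time) / cost < cf:
            must_colocate.add((p, c))
    return must_colocate

def cf_sweep(subtasks, comm, try_schedule):
    """Repeat clustering + scheduling for decreasing CF values (Section 4.1).

    try_schedule(colocate) is assumed to run the Base Algorithm under the given
    co-location constraints and return a schedule, or None if it fails.
    """
    ratios = [(subtasks[p].exec_time + subtasks[c].exec_time) / cost
              for (p, c), cost in comm.items() if cost > 0]
    maxcf = max(ratios, default=0.0)
    cf = maxcf + 1
    step = (maxcf + 1) / 10
    while cf >= 0:
        schedule = try_schedule(pairwise_cluster(subtasks, comm, cf))
        if schedule is not None:
            return schedule            # first feasible schedule found during the sweep
        cf -= step                     # communicating pairs become less likely to cluster
    return None
```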

For the pairwise clustering, the subtask deadlines might have to be modified to account for the cases where independent subtasks (subtasks that have no ancestor/descendant relationship) are included in the same cluster and thus must execute on the same processor [8]. For each subtask, this is taken care of by calculating the accumulated execution time, sumexe, of all descendants of the subtask within the same cluster, and also the latest deadline, maxdeadline, of the same descendants. The deadline for the current subtask must then be less than or equal to maxdeadline - sumexe.

5. Experimental Evaluation

The scheduling algorithms presented here were evaluated by applying them to a large number of randomly generated task graphs as well as to some real applications. Again, the four algorithms are: (i) the pairwise clustering algorithm, (ii) the Critical Path clustering algorithm, and the non-clustering Base Algorithm with (iii) pessimistic and (iv) optimistic deadline assignments. As the performance metric, we used the Success Ratio, defined as the number of feasible schedules produced by a given algorithm divided by the number of different tested input graphs. We also compared the average execution times of the algorithms. This time is of course computer dependent, but it nevertheless reflects the relative performance of the proposed algorithms.

For each input graph, three different graphs representing three different tasks were expanded as explained in section 3. Each scheduling algorithm was then applied to the resulting graph. In order to investigate the effectiveness of a scheduling algorithm in relation to the tightness or looseness of the real-time deadlines, scheduling of a graph was attempted several times after scaling the deadlines of the input tasks. The scaling was achieved by multiplying the release times and deadlines by a Deadline Scaling Factor, DSF. For example, if the deadline of a task is time 100 (considered a tight deadline) and DSF = 1.5, then during the next scheduling experiment this task's deadline is considered to be time 150 (a looser deadline by a factor of 50%). Each data point, based on the DSF values, was obtained by using 400 different input graphs.

The initial assignment of periods to the tasks is based on the average path length from top to bottom when ignoring communication costs. This gives a loose lower bound on the execution time of the tasks. Since the average execution time of the subtasks, the average number of subtasks per level, and the maximum number of subtasks in a task were known, the average path length from top to bottom could be calculated. Obviously, using this lower bound as the task period results in graphs that may not be schedulable (i.e., able to meet the deadlines) for DSF = 1.0. However, as DSF becomes larger, we can observe a clear difference in the performance of the algorithms. The problem of determining whether a graph is schedulable at all is computationally intractable, but the Success Ratio (as a performance measure based on the number of scheduled graphs over the total number of graphs) provides a relative measure of performance among the compared algorithms. In all of our experiments, the clustering techniques resulted in worse performance compared to the non-clustering algorithms. Apparently, the clustering constraints were too rigid to allow good scheduling decisions.
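For reference, the evaluation metric and the deadline scaling used in these experiments can be written as follows (our notation):

```latex
\[
\text{Success Ratio} = \frac{\#\ \text{feasible schedules produced}}{\#\ \text{input graphs tested}},
\qquad
\text{deadline}'_t = \mathit{DSF}\cdot \text{deadline}_t,
\quad
\text{release}'_t = \mathit{DSF}\cdot \text{release}_t .
\]
```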
Fig. 2 displays the results from experiment 1. Task 1 has a maximum of 20 subtasks, task 2 has a maximum of 40 subtasks, and task 3 has a maximum of 60 subtasks. The subtask execution times were chosen randomly between 1 and 10, and the communication-to-execution ratio was set to 1. Also, the number of subtasks per level was random between 4 and 10. The period of each task was proportional to its number of subtasks, since the average number of subtasks per level was fixed and the period was set to the average path length from top to bottom. Since the graph was expanded over the LCM of the task periods, this means that 6 instances of task 1, 3 instances of task 2, and 2 instances of task 3 were included. This gave resulting task graphs with 360 subtasks on average. The algorithms attempted to create schedules on a fully connected, 10-processor target architecture (a fully connected network is assumed for simplicity; the algorithms can easily be modified to apply to any point-to-point connected set of processing elements). As can be seen, the clustering techniques produced a lower performance. Also, it is interesting to notice that for the non-clustering algorithms we got slightly better performance by using an optimistic approach when assigning timing constraints, i.e., ignoring communication costs.

Fig. 3 shows the results from a similar experiment but assuming a 50-processor architecture. The purpose was to see how the algorithms would perform when they were given close to unlimited (for the given problem sizes) access to processors. Again, the clustering techniques performed worse than the Base Algorithm by itself.

[Fig. 2: Success Ratio vs. Deadline Scaling Factor; 10 processors, large tasks, communication-to-execution ratio set to 1.]

[Fig. 3: Success Ratio vs. Deadline Scaling Factor; 50 processors, large tasks, communication-to-execution ratio set to 1.]

However, the difference was less pronounced and, in fact, the Critical Path clustering algorithm performed better than or at least as well as the pessimistic version of the Base Algorithm. Allowing unlimited access to processors may be unreasonable, so the first experiment should be of more interest than the second.

Fig. 4 gives the results of the third experiment, where the three tasks had 4, 8, and 12 subtasks respectively. The resulting expanded graph had 12 subtasks and was less complex than in experiments one and two, in order to make the experiment close to the ones in [8]. In this third run, the communication-to-execution ratio was set to 0.1, we assumed 6 processors, and the execution times of the subtasks were randomly selected between 50 and 100 time units. Here, the pessimistic, optimistic, and Critical Path algorithms performed equally well, with the exception of the pairwise clustering algorithm, which performed worse. Fig. 5 shows the results from a similar experiment, but with a communication-to-execution ratio of 0.4. These graphs were harder to schedule due to the increased communication cost, and noticeably the Base Algorithm again performed better without clustering. The performance of the optimistic and pessimistic algorithms was identical.

Fig. 6 shows the average execution times of the algorithms during the first experiment. Again, the times are computer dependent but reflect relative performance. The clustering overhead is only somewhat noticeable for the Critical Path clustering algorithm, but it has a profound impact on the execution time of the pairwise clustering algorithm. Another overhead of the pairwise clustering algorithm is due to the fact that it repeatedly tries to schedule the input graph for different values of the threshold CF. Fig. 7 shows the results from an extra experiment that was conducted to evaluate the pairwise clustering algorithm's performance as the size of the decrements of CF is changed. The scenario used was the same as in the first experiment, but with 100 input graphs, and an improvement in performance was noted as the decrements were made smaller. However, this was achieved at the cost of longer execution times, so the decrement was kept at (maxcf + 1) / 10, which is consistent with the experiments in [8].

As a last experiment, we tried the algorithms on three task graphs derived from matrix multiplication programs. The communication-to-execution ratio was set to 1, the assumed number of processors was 10, and the result was that the expanded graph could be scheduled at a Deadline Scaling Factor of 2.1 by the Critical Path clustering algorithm and by the two non-clustering algorithms, and at 2.2 by the pairwise clustering algorithm.

It should be noted that none of the experiments involved any exclusion relations, the reason being that all these algorithms treat such relations in the same way and thus, in a relative comparison, would not show any different performance results. However, it is important to emphasize the need for scheduling algorithms to support such relations, or similar approaches, to allow for general modeling of real-time applications.

6. Time Complexity Analysis

Before presenting a time complexity analysis of the scheduling algorithms, here are some definitions:

N_i: The number of subtasks in the input graph, including all task instances.
N: The number of subtasks in the input graph, counting only one instance of each task.
E: The number of edges in the input graph.
P: The number of processors in the target architecture.
I: The number of different values of the CF parameter used by the pairwise clustering algorithm.

The Base Algorithm has to make N_i scheduling decisions. For each scheduling decision, all subtasks in the ready list must be considered, and for each of those subtasks (at least those which cannot be ignored directly), all processors are considered. The number of subtasks in the ready list is, in the worst case, proportional to N_i, but the number of subtasks in the ready list that cannot be ignored directly (a subtask can be ignored if its release time puts it out of competition) is proportional to N. Thus, the time complexity of the scheduling step is O(N_i N P). The assignment of subtask deadlines involves traversing each edge in the input graph, making the total time complexity of the Base Algorithm O(E + N_i N P). The pairwise clustering technique involves investigating each edge in the input graph, so it does not add to the complexity of the Base Algorithm; but since many values of the CF parameter will be tried, the total time complexity of the pairwise clustering algorithm is O(I (E + N_i N P)). The Critical Path clustering algorithm adds a time of O(E N) for the clustering step, which gives a total time complexity of O(E N + N_i N P).
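Restating the bounds just derived:

```latex
\[
T_{\text{Base}} = O(E + N_i N P), \qquad
T_{\text{Pairwise}} = O\bigl(I\,(E + N_i N P)\bigr), \qquad
T_{\text{CriticalPath}} = O(E N + N_i N P).
\]
```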

[Fig. 4: Success Ratio vs. Deadline Scaling Factor; 6 processors, small tasks, communication-to-execution ratio set to 0.1.]

[Fig. 5: Success Ratio vs. Deadline Scaling Factor; 6 processors, small tasks, communication-to-execution ratio set to 0.4.]

[Fig. 6: Average execution time (ms) of the algorithms for experiment 1.]

[Fig. 7: Success Ratio vs. Deadline Scaling Factor; effects of changing the decrements of CF for pairwise clustering.]

7. Conclusion and Future Work

Four algorithms were developed here for static scheduling of periodic real-time tasks on a multiprocessor architecture, taking communication costs into account. The core of the algorithms, the Base Algorithm, was developed as a fairly straightforward and efficient way of applying heuristics when making a scheduling decision. Graph expansion, as discussed in [8][16], was implemented as a pre-scheduling step in order to represent multiple tasks with possibly different periods. The algorithms were evaluated through extensive experimentation, and it was found that the Base Algorithm by itself performed better than when the clustering techniques were used before the scheduling process. It was also found that the Base Algorithm without clustering performed better when deadlines were assigned by ignoring communication costs.

The algorithms here were developed as a part of the PARSA project at The University of Texas at Arlington. One main feature of this environment is the ability to accurately estimate communication delays in a point-to-point multiprocessor interconnection network. Therefore, a next step would be to perform link scheduling when considering interprocessor communication. Link scheduling can be introduced to the Base Algorithm in the following way. When calculating the DSM values for a subtask on a processor and communication from a parent subtask is considered, traverse the architecture graph from source to destination and account for all delays, including contention, that occur on every link along the path. When a subtask and processor are chosen for a decision, the same procedure is repeated, but now the communication is added to the schedules of the involved links.

References

[1] Burns, A., "Scheduling Hard Real-Time Systems: A Review," Software Engineering Journal, May 1991.
[2] Chen, G-H., and Yur, J-S., "A Branch-and-Bound-with-Underestimates Algorithm for the Task Assignment Problem with Precedence Constraint," IEEE 10th International Conference on Distributed Computing Systems, 1990.
[3] Cheng, S-C., Stankovic, J. A., and Ramamritham, K., "Scheduling Groups of Tasks in Distributed Hard Real-Time Systems," COINS Technical Report, Department of Computer and Information Science, University of Massachusetts at Amherst, Nov. 9.
[4] ---, "Scheduling Algorithms for Hard Real-Time Systems: A Brief Survey," Tutorial: Hard Real-Time Systems, 1988.
[5] Kim, S. J., and Browne, J. C., "A General Approach to Mapping of Parallel Computation upon Multiprocessor Architectures," International Conference on Parallel Processing, 1988, Vol. 3.
[6] iz, C-M., "Distributed Real-Time Scheduling Based on the RTG Model," Master's Thesis, Dept. of Computer Science and Engineering, The University of Texas at Arlington.
[7] Lin, Shihchung, "A Comparative Study of Real-Time Scheduling Methods," Technical Report, Department of Computer Science and Engineering, University of Texas at Arlington, April.
[8] Ramamritham, K., "Allocation and Scheduling of Complex Periodic Tasks," IEEE 10th International Conference on Distributed Computing Systems, 1990, pp. 108-115.
[9] Ramamritham, K., Stankovic, J. A., and Shiah, P-F., "Efficient Scheduling Algorithms for Real-Time Multiprocessor Systems," IEEE Trans. Parallel and Distributed Systems, Vol. 1, No. 2, April 1990.
[10] Ronngren, Stefan, Lorts, Dan, and Shirazi, Behrooz, "Empirical Evaluation of Compound Static Scheduling Heuristics for Real-Time Multiprocessing," Proceedings of the 2nd Workshop on Parallel and Distributed Real-Time Systems, April.
[11] Shirazi, B., Kavi, K., Hurson, A. R., and Biswas, P., "PARSA: a PARallel program Scheduling and Assessment environment," 1993 Int'l Conf. on Parallel Processing, Aug. 1993.
[12] Stankovic, J. A., "Misconceptions About Real-Time Computing," Computer, Oct.
[13] ---, "Real-Time Computing Systems: The Next Generation," Tutorial: Hard Real-Time Systems, 1988.
[14] Stankovic, J. A., and Ramamritham, K., "What is Predictability for Real-Time Systems?," The Journal of Real-Time Systems, 2, 1990.
[15] Verhoosel, J. P. C., Luit, E. J., and Hammer, D. K., "A Static Scheduling Algorithm for Distributed Hard Real-Time Systems," The Journal of Real-Time Systems, 3, 1991.
[16] Xu, J., "Multiprocessor Scheduling of Processes with Release Times, Deadlines, Precedence, and Exclusion Relations," IEEE Trans. Software Engineering, Vol. 19, No. 2, Feb. 1993.
[17] Xu, J., and Parnas, D. L., "On Satisfying Timing Constraints in Hard Real-Time Systems," IEEE Trans. Software Engineering, Vol. 19, No. 1, Jan. 1993.


More information

Tree-Based Minimization of TCAM Entries for Packet Classification

Tree-Based Minimization of TCAM Entries for Packet Classification Tree-Based Minimization of TCAM Entries for Packet Classification YanSunandMinSikKim School of Electrical Engineering and Computer Science Washington State University Pullman, Washington 99164-2752, U.S.A.

More information

Design and Analysis of Algorithms

Design and Analysis of Algorithms CSE 101, Winter 018 D/Q Greed SP s DP LP, Flow B&B, Backtrack Metaheuristics P, NP Design and Analysis of Algorithms Lecture 8: Greed Class URL: http://vlsicad.ucsd.edu/courses/cse101-w18/ Optimization

More information

CS 771 Artificial Intelligence. Informed Search

CS 771 Artificial Intelligence. Informed Search CS 771 Artificial Intelligence Informed Search Outline Review limitations of uninformed search methods Informed (or heuristic) search Uses problem-specific heuristics to improve efficiency Best-first,

More information

UNIT 4 Branch and Bound

UNIT 4 Branch and Bound UNIT 4 Branch and Bound General method: Branch and Bound is another method to systematically search a solution space. Just like backtracking, we will use bounding functions to avoid generating subtrees

More information

Heuristic Algorithms for Multiconstrained Quality-of-Service Routing

Heuristic Algorithms for Multiconstrained Quality-of-Service Routing 244 IEEE/ACM TRANSACTIONS ON NETWORKING, VOL 10, NO 2, APRIL 2002 Heuristic Algorithms for Multiconstrained Quality-of-Service Routing Xin Yuan, Member, IEEE Abstract Multiconstrained quality-of-service

More information

An Attempt to Identify Weakest and Strongest Queries

An Attempt to Identify Weakest and Strongest Queries An Attempt to Identify Weakest and Strongest Queries K. L. Kwok Queens College, City University of NY 65-30 Kissena Boulevard Flushing, NY 11367, USA kwok@ir.cs.qc.edu ABSTRACT We explore some term statistics

More information

4 INFORMED SEARCH AND EXPLORATION. 4.1 Heuristic Search Strategies

4 INFORMED SEARCH AND EXPLORATION. 4.1 Heuristic Search Strategies 55 4 INFORMED SEARCH AND EXPLORATION We now consider informed search that uses problem-specific knowledge beyond the definition of the problem itself This information helps to find solutions more efficiently

More information

MODEL FOR DELAY FAULTS BASED UPON PATHS

MODEL FOR DELAY FAULTS BASED UPON PATHS MODEL FOR DELAY FAULTS BASED UPON PATHS Gordon L. Smith International Business Machines Corporation Dept. F60, Bldg. 706-2, P. 0. Box 39 Poughkeepsie, NY 12602 (914) 435-7988 Abstract Delay testing of

More information

Computational Optimization ISE 407. Lecture 16. Dr. Ted Ralphs

Computational Optimization ISE 407. Lecture 16. Dr. Ted Ralphs Computational Optimization ISE 407 Lecture 16 Dr. Ted Ralphs ISE 407 Lecture 16 1 References for Today s Lecture Required reading Sections 6.5-6.7 References CLRS Chapter 22 R. Sedgewick, Algorithms in

More information

ENERGY EFFICIENT SCHEDULING SIMULATOR FOR DISTRIBUTED REAL-TIME SYSTEMS

ENERGY EFFICIENT SCHEDULING SIMULATOR FOR DISTRIBUTED REAL-TIME SYSTEMS I J I T E ISSN: 2229-7367 3(1-2), 2012, pp. 409-414 ENERGY EFFICIENT SCHEDULING SIMULATOR FOR DISTRIBUTED REAL-TIME SYSTEMS SANTHI BASKARAN 1, VARUN KUMAR P. 2, VEVAKE B. 2 & KARTHIKEYAN A. 2 1 Assistant

More information

A Synchronization Algorithm for Distributed Systems

A Synchronization Algorithm for Distributed Systems A Synchronization Algorithm for Distributed Systems Tai-Kuo Woo Department of Computer Science Jacksonville University Jacksonville, FL 32211 Kenneth Block Department of Computer and Information Science

More information

Mobile Agent Driven Time Synchronized Energy Efficient WSN

Mobile Agent Driven Time Synchronized Energy Efficient WSN Mobile Agent Driven Time Synchronized Energy Efficient WSN Sharanu 1, Padmapriya Patil 2 1 M.Tech, Department of Electronics and Communication Engineering, Poojya Doddappa Appa College of Engineering,

More information

Some Applications of Graph Bandwidth to Constraint Satisfaction Problems

Some Applications of Graph Bandwidth to Constraint Satisfaction Problems Some Applications of Graph Bandwidth to Constraint Satisfaction Problems Ramin Zabih Computer Science Department Stanford University Stanford, California 94305 Abstract Bandwidth is a fundamental concept

More information

MergeSort, Recurrences, Asymptotic Analysis Scribe: Michael P. Kim Date: April 1, 2015

MergeSort, Recurrences, Asymptotic Analysis Scribe: Michael P. Kim Date: April 1, 2015 CS161, Lecture 2 MergeSort, Recurrences, Asymptotic Analysis Scribe: Michael P. Kim Date: April 1, 2015 1 Introduction Today, we will introduce a fundamental algorithm design paradigm, Divide-And-Conquer,

More information

Lecture 5 Sorting Arrays

Lecture 5 Sorting Arrays Lecture 5 Sorting Arrays 15-122: Principles of Imperative Computation (Spring 2018) Frank Pfenning, Rob Simmons We begin this lecture by discussing how to compare running times of functions in an abstract,

More information

Contention-Aware Scheduling with Task Duplication

Contention-Aware Scheduling with Task Duplication Contention-Aware Scheduling with Task Duplication Oliver Sinnen, Andrea To, Manpreet Kaur Department of Electrical and Computer Engineering, University of Auckland Private Bag 92019, Auckland 1142, New

More information

High-Level Synthesis (HLS)

High-Level Synthesis (HLS) Course contents Unit 11: High-Level Synthesis Hardware modeling Data flow Scheduling/allocation/assignment Reading Chapter 11 Unit 11 1 High-Level Synthesis (HLS) Hardware-description language (HDL) synthesis

More information

Clustering Using Graph Connectivity

Clustering Using Graph Connectivity Clustering Using Graph Connectivity Patrick Williams June 3, 010 1 Introduction It is often desirable to group elements of a set into disjoint subsets, based on the similarity between the elements in the

More information

Algorithm classification

Algorithm classification Types of Algorithms Algorithm classification Algorithms that use a similar problem-solving approach can be grouped together We ll talk about a classification scheme for algorithms This classification scheme

More information

register allocation saves energy register allocation reduces memory accesses.

register allocation saves energy register allocation reduces memory accesses. Lesson 10 Register Allocation Full Compiler Structure Embedded systems need highly optimized code. This part of the course will focus on Back end code generation. Back end: generation of assembly instructions

More information

INFREQUENT WEIGHTED ITEM SET MINING USING NODE SET BASED ALGORITHM

INFREQUENT WEIGHTED ITEM SET MINING USING NODE SET BASED ALGORITHM INFREQUENT WEIGHTED ITEM SET MINING USING NODE SET BASED ALGORITHM G.Amlu #1 S.Chandralekha #2 and PraveenKumar *1 # B.Tech, Information Technology, Anand Institute of Higher Technology, Chennai, India

More information

Computing Submesh Reliability in Two-Dimensional Meshes

Computing Submesh Reliability in Two-Dimensional Meshes Computing Submesh Reliability in Two-Dimensional Meshes Chung-yen Chang and Prasant Mohapatra Department of Electrical and Computer Engineering Iowa State University Ames, Iowa 511 E-mail: prasant@iastate.edu

More information

Schedulability Analysis of the Linux Push and Pull Scheduler with Arbitrary Processor Affinities

Schedulability Analysis of the Linux Push and Pull Scheduler with Arbitrary Processor Affinities Schedulability Analysis of the Linux Push and Pull Scheduler with Arbitrary Processor Affinities Arpan Gujarati, Felipe Cerqueira, and Björn Brandenburg Multiprocessor real-time scheduling theory Global

More information

EXTENDING THE PRIORITY CEILING PROTOCOL USING READ/WRITE AFFECTED SETS MICHAEL A. SQUADRITO A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE

EXTENDING THE PRIORITY CEILING PROTOCOL USING READ/WRITE AFFECTED SETS MICHAEL A. SQUADRITO A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE EXTENDING THE PRIORITY CEILING PROTOCOL USING READ/WRITE AFFECTED SETS BY MICHAEL A. SQUADRITO A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE IN COMPUTER

More information

Homework # 2 Due: October 6. Programming Multiprocessors: Parallelism, Communication, and Synchronization

Homework # 2 Due: October 6. Programming Multiprocessors: Parallelism, Communication, and Synchronization ECE669: Parallel Computer Architecture Fall 2 Handout #2 Homework # 2 Due: October 6 Programming Multiprocessors: Parallelism, Communication, and Synchronization 1 Introduction When developing multiprocessor

More information