A Modified Maximum Urgency First Scheduling Algorithm for Real-Time Tasks

Vahid Salmani, Saman Taghavi Zargar, and Mahmoud Naghibzadeh

(V. Salmani is a graduate student with the Computer Engineering Department, Ferdowsi University, Mashad, Iran; e-mail: salmani@um.ac.ir. S. Taghavi Zargar is a graduate student with the Computer Engineering Department, Ferdowsi University, Mashad, Iran; e-mail: taghavi@stumail.um.ac.ir. M. Naghibzadeh is a professor with the Computer Engineering Department, Ferdowsi University, Mashad, Iran; e-mail: naghib@um.ac.ir.)

Abstract: This paper presents a modified version of the maximum urgency first scheduling algorithm. The maximum urgency first algorithm combines the advantages of fixed and dynamic scheduling to provide dynamically changing systems with flexible scheduling. This algorithm, however, has a major shortcoming: its scheduling mechanism may cause a critical task to fail. The modified maximum urgency first scheduling algorithm resolves this problem. In this paper, we propose two possible implementations of the algorithm, using either the earliest deadline first or the modified least laxity first algorithm to calculate the dynamic priorities. The two approaches are compared by simulation, and the earliest deadline first algorithm is recommended as the preferred implementation. We then compare the proposed algorithm with the maximum urgency first algorithm through simulation and present the results. Modified maximum urgency first is shown to be superior to maximum urgency first, since it usually causes fewer task preemptions and hence less related overhead. It also leads to fewer failed non-critical tasks in overloaded situations.

Keywords: Modified maximum urgency first, maximum urgency first, real-time systems, scheduling.

I. INTRODUCTION

In real-time systems, every real-time task has a deadline before or at which it must be completed. Because of the criticality of the tasks, the scheduling algorithms of these systems must be timely and predictable. Tasks can be either periodic or aperiodic; in this paper we consider only periodic tasks. Real-time scheduling algorithms may assign priorities statically, dynamically, or in a hybrid manner, giving fixed, dynamic and mixed scheduling algorithms, respectively. These algorithms may allow preemption or impose a non-preemptive policy. A dynamically reconfigurable system can change over time without being halted and therefore needs a dynamic scheduler. The Earliest Deadline First (EDF) algorithm proposed by Liu and Layland is an optimal dynamic scheduling algorithm [1]. This algorithm, however, is not predictable under transient overload, in the sense that it does not guarantee that the set of higher-priority tasks will execute before their deadlines. This paper proposes the Modified Maximum Urgency First (MMUF) scheduling algorithm as an optimization of the Maximum Urgency First (MUF) algorithm [2], which is used to schedule dynamically changing systems predictably. MMUF is a preemptive mixed-priority algorithm for predictable scheduling of periodic real-time tasks.

The rest of this paper is organized as follows. Section 2 briefly describes dynamic priority scheduling algorithms, including EDF, Least Laxity First (LLF), Modified Least Laxity First (MLLF), and MUF.
In Section 3 we propose the MMUF scheduling algorithm. Section 4 describes our simulation experiments and results. Finally, Section 5 gives a short conclusion.

II. RELATED WORK

The earliest deadline first [1] and least laxity first [3] algorithms were introduced as optimal dynamic priority scheduling algorithms. The latter, however, is impractical to implement because laxity ties lead to frequent context switches among the tied tasks and therefore to poor system performance. The modified least laxity first algorithm proposed by Oh and Yang solves this problem of the LLF algorithm by reducing the number of context switches [4]. With all of these algorithms, a transient overload in the system may cause a critical task to fail, which is certainly undesirable. Stewart and Khosla presented a mixed-priority algorithm called Maximum Urgency First, which defines a critical set of tasks that is guaranteed to meet all its deadlines during a transient overload [2].

A. Earliest Deadline First Algorithm

The earliest deadline first algorithm (also known as nearest deadline first) is a dynamic priority algorithm that uses the deadline of a task as its priority. The task with the earliest deadline has the highest priority, while the lowest priority belongs to the task with the latest deadline. Since priorities are dynamic, the periods of tasks can change at any time. In addition, this algorithm has a schedulability bound of 100% for all task sets. The CPU utilization of task T_i is the ratio of its worst-case execution time E_i to its request period P_i, and the total utilization U_n for n periodic tasks is calculated as follows [1]:

U_n = \sum_{i=1}^{n} E_i / P_i        (1)
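Equation (1) and the EDF selection rule are straightforward to express in code. The following is a minimal sketch of ours, not taken from the paper: the Task record and its field names (wcet, period, abs_deadline) are illustrative assumptions, and the example values are those of the task set used later in Table I.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    wcet: float          # worst-case execution time E_i
    period: float        # request period P_i
    abs_deadline: float  # current absolute deadline D_i

def total_utilization(tasks):
    """Total utilization U_n = sum of E_i / P_i over the n periodic tasks, as in (1)."""
    return sum(t.wcet / t.period for t in tasks)

def edf_pick(ready):
    """EDF rule: among the ready tasks, the one with the earliest absolute deadline runs."""
    return min(ready, key=lambda t: t.abs_deadline)

if __name__ == "__main__":
    tasks = [Task("T1", wcet=4, period=6, abs_deadline=6),
             Task("T2", wcet=1, period=4, abs_deadline=4)]
    print(total_utilization(tasks))  # 4/6 + 1/4 = 0.9166..., i.e. about 91.7% load
    print(edf_pick(tasks).name)      # "T2", since deadline 4 < 6
```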

A significant disadvantage of this algorithm is that there is no way to control which tasks will fail in a transient overload. A transient overload occurs when the worst-case utilization exceeds 100%, in which case a critical task may fail at the expense of a less important one. This can happen because the deadline is the only scheduling parameter and there is no control over which task fails; under transient overload we may therefore want to consider a task's criticality as a second parameter. We chose the EDF algorithm as the basis for the MMUF algorithm proposed in this paper.

B. Least Laxity First Algorithm

The least laxity first algorithm (also known as minimum laxity first) assigns the highest priority to the task with the least laxity. The laxity of a real-time task T_i at time t, L_i(t), is defined as

L_i(t) = D_i(t) - E_i(t)        (2)

where D_i(t) is the deadline by which the task must be completed, measured from the current time t, and E_i(t) is the amount of computation remaining to be performed. In other words, laxity is a measure of the flexibility available for scheduling a task. A laxity of L_i(t) means that if task T_i is delayed by at most L_i(t) time units, it will still meet its deadline. A task with zero laxity must be scheduled immediately and executed without preemption or it will miss its deadline. A negative laxity indicates that the task will miss its deadline no matter when it is picked for execution. A major problem with the LLF algorithm is that it is impractical to implement, because laxity ties result in frequent context switches among the tied tasks, which noticeably degrades system performance. A laxity tie occurs when two or more tasks have the same laxity. Like EDF, LLF has a 100% schedulability bound; nevertheless, there is no way to control which tasks are guaranteed to execute during a transient overload.

C. Modified Least Laxity First Algorithm

Our purpose in describing the modified least laxity first algorithm here is to introduce it as an alternative to EDF for use as the basis of the MMUF algorithm proposed in this paper. As mentioned earlier, the MLLF scheduling algorithm solves the problem of the LLF algorithm by significantly reducing the number of context switches. By decreasing the system overhead caused by unnecessary context switches, MLLF avoids the degradation of system performance and conserves more system resources for unanticipated aperiodic tasks. As long as there is no laxity tie, MLLF schedules task sets exactly as LLF does. If a laxity tie occurs, the running task continues to run without preemption as long as the deadlines of the other tasks are not missed. MLLF thus defers the context switch until it is necessary, and it is safe even when a laxity tie occurs. That is, it allows laxity inversion, where the task with the least laxity may not be scheduled immediately. Laxity inversion applies for the duration in which the currently running task can continue running with no loss in schedulability, even if there exists a task (or tasks) whose laxity is smaller than that of the running task [4].
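To make the laxity definition in (2) and the tie handling concrete, here is a small sketch under our own assumptions; the Job record and its field names are illustrative, and the MLLF rule shown is simplified (the full rule in [4] defers the switch for as long as no other task would miss its deadline, not only on exact ties).

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    remaining: float     # remaining execution time E_i(t)
    abs_deadline: float  # absolute deadline of the current request

def laxity(job, now):
    """L_i(t) = D_i(t) - E_i(t), with D_i(t) taken as the time left until the deadline."""
    return (job.abs_deadline - now) - job.remaining

def llf_pick(ready, now):
    """Plain LLF: always run the ready task with the least laxity."""
    return min(ready, key=lambda j: laxity(j, now))

def mllf_pick(ready, now, running):
    """Simplified MLLF-style rule: on a laxity tie that involves the running task,
    keep the running task instead of switching."""
    best = llf_pick(ready, now)
    if running is not None and running in ready and \
            laxity(running, now) == laxity(best, now):
        return running
    return best

if __name__ == "__main__":
    a = Job("A", remaining=2, abs_deadline=5)  # laxity 3 at t = 0
    b = Job("B", remaining=3, abs_deadline=6)  # laxity 3 at t = 0: a tie
    print(mllf_pick([a, b], now=0, running=b).name)  # "B" keeps the CPU
```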
D. Maximum Urgency First Algorithm

The maximum urgency first algorithm solves the problem of unpredictability during a transient overload for the EDF, LLF and MLLF algorithms. It is a combination of fixed and dynamic priority scheduling, also called mixed priority scheduling. With this algorithm, each task is given an urgency, defined as a combination of two fixed priorities (criticality and user priority) and a dynamic priority that is inversely proportional to the laxity. Criticality has higher precedence than the dynamic priority, while user priority has lower precedence than the dynamic priority.

The MUF algorithm assigns priorities in two phases. Phase One assigns static priorities to tasks; these are assigned once and do not change after the system starts. Phase Two deals with the run-time behavior of the MUF scheduler, as clarified below. The first phase consists of the following steps [2]:

1) Sort the tasks from the shortest period to the longest period, and define the critical set as the first N tasks such that the total CPU load factor does not exceed 100%. These tasks are guaranteed not to fail even during a transient overload.
2) Assign high criticality to all tasks in the critical set; the remaining tasks are considered to have low criticality.
3) Assign every task in the system an optional unique user priority.

In the second phase, the MUF scheduler selects a task for execution. This selection is performed whenever a new task arrives at the ready queue, as follows:

1) If there is only one highly critical task, pick it and execute it.
2) If there is more than one highly critical task, select the one with the highest dynamic priority; here, the task with the least laxity is considered the one with the highest priority.
3) If there is more than one task with the same laxity, select the one with the highest user priority.

The MUF scheduler has been implemented as the default scheduler of CHIMERA II, a real-time operating system used to control sensor-based control systems [5].
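The two phases can be summarized in a few lines of code. The sketch below is our reading of the description above (and of [2]); the MufTask record and its field names are illustrative assumptions, not the CHIMERA II implementation.

```python
from dataclasses import dataclass

@dataclass
class MufTask:
    name: str
    period: float
    wcet: float
    user_priority: int = 0     # optional fixed user priority
    critical: bool = False     # assigned in Phase One
    remaining: float = 0.0     # E_i(t), updated at run time
    abs_deadline: float = 0.0  # used to compute the laxity at run time

def muf_phase_one(tasks):
    """Phase One: sort by shortest period; the critical set is the longest prefix
    whose total CPU load factor does not exceed 100%."""
    load = 0.0
    for t in sorted(tasks, key=lambda t: t.period):
        load += t.wcet / t.period
        if load > 1.0:
            break
        t.critical = True

def laxity(t, now):
    return (t.abs_deadline - now) - t.remaining

def muf_pick(ready, now):
    """Phase Two at a scheduling event: criticality first, then least laxity,
    then highest user priority as the final tie-breaker."""
    return min(ready, key=lambda t: (not t.critical, laxity(t, now), -t.user_priority))
```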

III. MODIFIED MAXIMUM URGENCY FIRST ALGORITHM

Although MUF is an efficient algorithm, it has a major disadvantage. Since the rescheduling operation is performed only when a task arrives at the ready queue [2], [6], a critical task may fail in certain situations. In such situations, the task with minimum laxity may be selected even though its remaining execution time is greater than the remaining time to another task's laxity. The problem stems from performing rescheduling only when a new task is added to the ready queue. According to [7], scheduling should be performed at any given instant, with the scheduler choosing the highest-priority task among all available tasks; the schedule should therefore be produced in such a way that the highest-priority task is always the one running.

Consider the two tasks T1 and T2 shown in Table I. Fig. 1 shows the schedule produced by the MUF algorithm for this task set.

TABLE I
AN EXAMPLE TASK SET

Task | Remaining Execution Time | Deadline | Remaining Time to Laxity
T1   | 4                        | 6        | 2
T2   | 1                        | 4        | 3

As shown in Fig. 1, MUF selects the task with minimum laxity (T1) at time zero. The remaining execution time of T1 is greater than the remaining time to T2's laxity, so this selection causes T2 to miss its deadline.

Fig. 1 Schedule generated by the MUF scheduling algorithm (CPU load factor = 91.7%; T2 misses its deadline)
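The failure can be reproduced in a few lines. In this small trace (our own sketch, using the Table I values) both tasks are ready at t = 0, the only scheduling event occurs at that instant, MUF picks T1 as the least-laxity task, and T2's deadline at t = 4 passes while T1 is still executing.

```python
# Our sketch of the Fig. 1 scenario: rescheduling happens only when a task arrives,
# so the decision taken at t = 0 holds until T1 completes at t = 4.
t1 = {"name": "T1", "remaining": 4, "deadline": 6}  # laxity 6 - 4 = 2
t2 = {"name": "T2", "remaining": 1, "deadline": 4}  # laxity 4 - 1 = 3

now = 0
ready = [t1, t2]
chosen = min(ready, key=lambda t: t["deadline"] - t["remaining"])  # least laxity: T1
finish = now + chosen["remaining"]                                 # T1 holds the CPU until t = 4
other = t2 if chosen is t1 else t1
print(f"{chosen['name']} runs from {now} to {finish}")
print(f"{other['name']} misses its deadline:",
      finish + other["remaining"] > other["deadline"])             # True: 4 + 1 > 4
```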
The modified maximum urgency first algorithm proposed in this paper is a modified version of MUF that resolves the above deficiency. In addition, it has some further advantages, explained later. The modifications are as follows. First, we use a single importance parameter, rather than the tasks' request intervals, to create the critical set. The importance parameter is a fixed priority, which can be the user priority or any other optional parameter expressing the degree of a task's criticality; clearly, the task with the shortest request period is not necessarily the most important one. Furthermore, with the importance parameter there is no need for the period transformation [8] used in the MUF algorithm [2]. Second, with the MMUF algorithm either EDF or MLLF can be used to define the dynamic priority. A further optimization is the elimination of unnecessary context switches, which in turn reduces the overall system overhead; this is achieved firstly by using MLLF instead of LLF and secondly by letting the currently running task keep running while other tasks have the same priority.

The MMUF algorithm consists of two phases:

Phase 1: Fixed priorities are defined only once, as follows, and do not change during execution.
1) Order the tasks from the most important to the least important.
2) Add the first N tasks to the critical set such that the total CPU load factor does not exceed 100%.

Phase 2: Dynamic priorities are calculated at every scheduling event and the task to be executed next is selected.
1) If there is at least one critical task in the ready queue:
   a. Select the critical task with the earliest deadline (EDF) if there is no tie.
   b. If two or more critical tasks share the same earliest deadline:
      i. If one of these critical tasks is already running, select it to continue running.
      ii. Otherwise, select the critical task with the highest importance.
2) If there is no critical task in the ready queue:
   a. Select the task with the earliest deadline (EDF) if there is no tie.
   b. If two or more tasks share the same earliest deadline:
      i. If one of these tasks is already running, select it to continue running.
      ii. Otherwise, select the task with the highest importance.

As mentioned earlier, the MLLF algorithm can be used instead of EDF in the above algorithm. The MMUF algorithm requires the start time, deadline parameter and worst-case execution time of each task to be specified before the system starts. When EDF is used, a scheduling event occurs whenever a task arrives at the ready queue or a task finishes its execution. With MLLF there is a third scheduling event, which is explained in [4].
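The two phases above map directly onto a tie-broken comparison. The sketch below uses EDF for the dynamic priority, as recommended later in the paper; the MmufTask record and the is_running flag are illustrative assumptions of ours, not part of the paper's specification.

```python
from dataclasses import dataclass

@dataclass
class MmufTask:
    name: str
    importance: int            # fixed importance parameter used in Phase 1
    wcet: float                # worst-case execution time
    period: float              # request period (deadline = period in our sketch)
    critical: bool = False     # set once in Phase 1
    abs_deadline: float = 0.0  # dynamic priority source when EDF is used
    is_running: bool = False   # the currently executing task wins deadline ties

def mmuf_phase_one(tasks):
    """Phase 1: order the tasks by decreasing importance; the critical set is the
    longest prefix whose total CPU load factor does not exceed 100%."""
    load = 0.0
    for t in sorted(tasks, key=lambda t: -t.importance):
        load += t.wcet / t.period
        if load > 1.0:
            break
        t.critical = True

def mmuf_pick(ready):
    """Phase 2 (EDF variant): prefer critical tasks, then the earliest deadline;
    on a deadline tie keep the already-running task, otherwise take the most
    important one."""
    return min(ready, key=lambda t:
               (not t.critical, t.abs_deadline, not t.is_running, -t.importance))
```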

A. RM, EDF, LLF, MLLF and MUF as Special Cases of MMUF

It was shown in [5] that the RM (Rate Monotonic [9]), EDF and LLF algorithms are special cases of the MUF algorithm. If the period is used as the importance parameter and the context-switch-reduction optimization is omitted, the MMUF scheduler with either EDF or MLLF behaves like the MUF scheduler. If all tasks specify zero as the worst-case execution time, the EDF (used in MMUF) behaves like the LLF (used in MUF); in this case the laxity is a function of the deadline alone [5]. As long as there is no laxity tie, the MLLF (used in MMUF) behaves like the LLF (used in MUF) [4]. Therefore, the RM, EDF, LLF and MUF algorithms are special cases of the MMUF algorithm. Moreover, when the processor utilization is less than or equal to one (i.e. there is no non-critical task in the system), if the already-running task is not favored when two or more tasks have the same dynamic priority, the MMUF scheduler behaves like the MLLF scheduler. Therefore, MLLF is also a special case of MMUF.

B. MMUF as an Efficient Hybrid Algorithm

The efficiency of MMUF can be inferred from the optimality of the EDF [10] and MLLF [4] algorithms. As explained earlier, EDF and MLLF are special cases of the MMUF algorithm. If there is no non-critical task in the system, that is, the CPU utilization is less than or equal to 100%, the MMUF algorithm behaves exactly like EDF or MLLF (depending on which one is chosen within MMUF) and hence is optimal as well. If the CPU load factor exceeds 100%, i.e. there are some non-critical tasks, the same holds as for EDF and MLLF: there is no guarantee that these tasks meet their deadlines.

IV. PERFORMANCE EVALUATION

In this section we first compare the proposed MMUF algorithm using EDF against MMUF using MLLF, and show that MMUF is more efficient when EDF is used; we therefore propose using EDF rather than MLLF within MMUF. The MMUF algorithm is then compared with the MUF algorithm. We analyze the number of context switches, the global performance ratio, and the number of failed non-critical tasks from the simulation results. In our simulation experiments we assume that:

1) All tasks are periodic and the deadline parameter of each task is equal to its request period.
2) The period of each task is chosen randomly between 10 and 200 time units.
3) The worst-case execution time is chosen randomly; it is at least one time unit and at most 30% of the corresponding task's request period.
4) All tasks start simultaneously at time zero.

It is worth mentioning that when a task is about to start executing, we first check whether it can be completed in time, assuming that it will not be preempted. If the task cannot meet its deadline, it is not started and is counted as a failed task [11].
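These assumptions translate into a short task-set generator. The sketch below is ours (the function name, dictionary fields and use of Python's random module are illustrative, not from the paper), with the ranges taken from assumptions 1-4 above.

```python
import random

def generate_task_set(n, seed=None):
    """Random periodic task set following assumptions 1-4: period drawn from
    [10, 200] time units, WCET at least 1 and at most 30% of the period,
    deadline equal to the period, all tasks released at time zero."""
    rng = random.Random(seed)
    tasks = []
    for i in range(n):
        period = rng.randint(10, 200)
        wcet = rng.randint(1, max(1, int(0.3 * period)))
        tasks.append({"name": f"T{i + 1}", "period": period, "wcet": wcet,
                      "deadline": period, "release": 0})
    return tasks

if __name__ == "__main__":
    task_set = generate_task_set(10, seed=1)
    load = sum(t["wcet"] / t["period"] for t in task_set)
    print(f"total CPU load factor: {load:.2f}")
```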
A. Comparison of MMUF using EDF and MLLF

The notation used to evaluate performance is as follows:

1) MMUF-EDF, MMUF-MLLF: the MMUF algorithm using EDF or MLLF, respectively, to calculate the dynamic priorities.
2) N_MMUF-EDF, N_MMUF-MLLF: the number of context switches produced by the MMUF-EDF and MMUF-MLLF scheduling algorithms, respectively.
3) T_MMUF-EDF, T_MMUF-MLLF: the corresponding total scheduling overheads.
4) F_MMUF-EDF, F_MMUF-MLLF: the corresponding numbers of failed non-critical tasks.

In Fig. 2, N_MMUF-EDF / N_MMUF-MLLF (the context switch ratio) is shown as a function of processor utilization for a fixed number of tasks n = 10 and n = 20. As the processor utilization increases, the context switch ratio decreases.

Fig. 2 Comparison of the number of context switches (N_MMUF-EDF / N_MMUF-MLLF versus processor utilization, for n = 10 and 20 tasks)

Fig. 3 shows T_MMUF-EDF / T_MMUF-MLLF (the global performance ratio) for a fixed number of tasks n = 10 and n = 20. The global performance ratio is less than one in most cases, and it is less than one in all cases in which the processor utilization is greater than one.

Fig. 3 Global performance ratio (T_MMUF-EDF / T_MMUF-MLLF versus processor utilization, for n = 10 and 20 tasks)

In Fig. 4, F_MMUF-EDF / F_MMUF-MLLF (the ratio of failed non-critical tasks) is shown as a function of processor utilization for a fixed number of tasks n = 10 and n = 20. The ratio of failed non-critical tasks is less than one, and it decreases as the number of tasks increases. Note that the number of failed non-critical tasks is zero when the processor utilization is less than or equal to one, since in that case there is no non-critical task in the system.

Fig. 4 Comparison of the number of failed non-critical tasks (F_MMUF-EDF / F_MMUF-MLLF versus processor utilization, for n = 10 and 20 tasks)

B. Comparison of MMUF and MUF

The notation used to evaluate performance is as follows:

1) MMUF: the MMUF algorithm using EDF to calculate the dynamic priorities.
2) N_MMUF, N_MUF: the number of context switches produced by the MMUF and MUF scheduling algorithms, respectively.
3) F_MMUF, F_MUF: the corresponding numbers of failed non-critical tasks.

In Fig. 5, N_MMUF / N_MUF (the context switch ratio) is shown as a function of processor utilization for a fixed number of tasks n = 10 and n = 20. In all cases the context switch ratio is less than one, showing an improvement of the MMUF algorithm over MUF.

Fig. 5 Comparison of the number of context switches (N_MMUF / N_MUF versus CPU utilization, for n = 10 and 20 tasks)

Fig. 6 shows F_MMUF / F_MUF (the ratio of failed non-critical tasks) for a fixed number of tasks n = 10 and n = 20. The ratio of failed non-critical tasks is less than one, and it decreases as the number of tasks increases.

Fig. 6 Comparison of the number of failed non-critical tasks (F_MMUF / F_MUF versus processor utilization, for n = 10 and 20 tasks)

V. CONCLUSION

In this paper we presented a modified version of the MUF scheduling algorithm, called MMUF, which resolves the deficiency of MUF whereby a critical task may miss its deadline in certain situations. In addition, further optimizations are applied in the MMUF algorithm. The performance of MMUF was compared with that of MUF and shown to be superior: MMUF usually causes fewer task preemptions and hence less related overhead, and it leads to fewer failed non-critical tasks in overloaded situations in which the CPU load factor exceeds 100%.

REFERENCES

[1] C. L. Liu and J. W. Layland, "Scheduling Algorithms for Multiprogramming in a Hard Real-Time Environment," Journal of the Association for Computing Machinery, vol. 20, no. 1, pp. 46-61, January 1973.
[2] D. B. Stewart and P. K. Khosla, "Real-Time Scheduling of Dynamically Reconfigurable Systems," in Proc. IEEE International Conference on Systems Engineering, Dayton, OH, August 1991, pp. 139-142.
[3] A. Mok, "Fundamental Design Problems of Distributed Systems for the Hard Real-Time Environment," PhD thesis, Massachusetts Institute of Technology, Cambridge, MA, 1983.
[4] S. H. Oh and S. M. Yang, "A Modified Least-Laxity-First Scheduling Algorithm for Real-Time Tasks," in Proc. Fifth International Conference on Real-Time Computing Systems and Applications, October 1998, pp. 31-36.
[5] D. B. Stewart, D. E. Schmitz, and P. K. Khosla, "Implementing Real-Time Robotic Systems using CHIMERA II," in Proc. IEEE International Conference on Robotics and Automation, Cincinnati, OH, May 1990, pp. 598-603.
[6] D. B. Stewart and P. K. Khosla, "Real-Time Scheduling of Sensor-Based Control Systems," in Proc. Eighth IEEE Workshop on Real-Time Operating Systems and Software, in conjunction with the 17th IFAC/IFIP Workshop on Real-Time Programming, Atlanta, GA, May 1991, pp. 44-50.
[7] J. Goossens and P. Richard, "Overview of real-time scheduling problems," in Proc. Ninth International Workshop on Project Management and Scheduling, Nancy, France, April 2004.
[8] L. Sha, J. P. Lehoczky, and R. Rajkumar, "Solutions for Some Practical Problems in Prioritized Preemptive Scheduling," in Proc. 10th IEEE Real-Time Systems Symposium, Santa Monica, CA, December 1989.
[9] M. Naghibzadeh and M. Fathi, "Intelligent Rate-Monotonic Scheduling Algorithm for Real-Time Systems," Kuwait Journal of Science and Engineering, vol. 30, no. 2, pp. 97-120, 2003.
[10] M. L. Dertouzos, "Control Robotics: The Procedural Control of Physical Processes," in Proc. IFIP Congress, 1974, pp. 807-813.
[11] M. Naghibzadeh, "A Modified Version of the Rate-Monotonic Scheduling Algorithm and its Efficiency Assessment," in Proc. Seventh IEEE International Workshop on Object-Oriented Real-Time Dependable Systems, San Diego, CA, January 2002.