A Scheduling Technique Providing a Strict Isolation of Real-time Threads


U. Brinkschulte, J. Kreuzinger, M. Pfeffer, Th. Ungerer
Institute for Computer Design and Fault Tolerance, University of Karlsruhe, D-76128 Karlsruhe, Germany
Institute for Process Control, Automation and Robotics, University of Karlsruhe, D-76128 Karlsruhe, Germany
Institute for Computer Science, University of Augsburg, D-86159 Augsburg, Germany

Abstract

Highly dynamic programming environments for embedded real-time systems require a strict isolation of real-time threads from each other to achieve dependable systems. We propose a new real-time scheduling technique, called the guaranteed percentage (GP) scheme, which assigns each thread a specific percentage of the processor power. A hardware scheduler in conjunction with a multithreaded processor guarantees the execution of instructions of each thread according to its assigned percentage within a time interval of 100 processor cycles. We compare performance and implementation overhead of GP scheduling against fixed priority preemptive (FPP), earliest deadline first (EDF), and least laxity first (LLF) scheduling using several benchmarks on our Komodo microcontroller, which features a multithreaded Java processor kernel. Our evaluations show that GP scheduling reaches a speed-up similar to EDF and FPP but worse than LLF. However, its hardware implementation costs are still reasonable, whereas the LLF overhead is prohibitive. Among the examined scheduling schemes, only GP reaches the isolation goal.

Keywords: real-time scheduling, performance analysis, timing isolation, multithreaded processor, Java processor, guaranteed percentage scheduling.

1 Introduction and Motivation

Employment of Java is an upcoming trend for the development of embedded real-time systems; it can significantly reduce system development times and yield more dependable systems. Java offers a highly dynamic and portable programming environment, which allows the construction of systems where threads enter and leave the system during runtime. In particular, new threads with potentially unknown behavior are allocated dynamically on the system processor. Furthermore, real-time threads and non real-time threads coexist and cooperate. In such an environment it is advisable to isolate threads from each other. In terms of real-time systems this means that the behavior or misbehavior of any thread may not harm the real-time properties of any other thread running on the system processor. Standard real-time scheduling policies like fixed priority preemptive (FPP) or earliest deadline first (EDF) scheduling cannot offer this isolation: a misbehaving thread with a high priority or a short deadline may in fact harm the rest of the system. In this paper we therefore propose a real-time scheduling scheme, called guaranteed percentage (GP) scheduling, that completely isolates threads from each other. The GP scheduling scheme assigns each thread a specific percentage of the processor power over a short interval. GP scheduling is tailored to the features of a multithreaded processor and allows very fast event response times. A multithreaded processor is able to pursue multiple threads of control in parallel within the processor pipeline. The functional units are multiplexed between the thread contexts. Most approaches store the thread contexts in different register sets on the processor chip.
Latencies that arise from cache misses, long-running operations, or other pipeline hazards are masked by switching to another thread. The Komodo project [1] explores the suitability of hardware multithreading techniques for object-oriented embedded real-time systems on the basis of a Java microcontroller, called the Komodo microcontroller. One of the key features of a multithreaded microcontroller is its ability to switch contexts very rapidly. We therefore propose hardware multithreading as an event-handling mechanism that allows efficient handling of simultaneous, overlapping events with hard real-time requirements. The Komodo microcontroller realizes a zero-cycle context switch overhead. Scheduling on an instruction-per-instruction basis is enabled by embedding the hardware scheduler deeply in the processor pipeline. Instruction scheduling is determined first by the real-time scheme and second by latency bridging: if the real-time scheme schedules an instruction that cannot be fed into the pipeline because of a control, data, or structural hazard arising from previously issued instructions, an instruction of another thread can be issued instead. Section 2 discusses related work. Section 3 introduces the GP scheduling scheme and its principal implementation. Section 4 presents the Komodo microcontroller, which is the testbed for the evaluation. The evaluation in section 5 compares GP scheduling to standard scheduling schemes with respect to the multithreaded processor environment. The final section 6 summarizes our results.
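The two-level issue decision just described (the real-time scheme selects a thread, and latency bridging falls back to another ready thread when that thread's next instruction would stall) can be sketched in software. The following is a hedged toy model, not the Komodo hardware; the class and field names (HwThread, latencyCyclesRemaining, selectForIssue) are illustrative assumptions.

    // Toy model of the per-cycle issue decision: real-time scheme first,
    // latency bridging second. Not the Komodo hardware implementation.
    import java.util.List;

    public class IssuePolicySketch {

        /** Per-thread state as seen by the issue logic. */
        static class HwThread {
            final String name;
            int latencyCyclesRemaining;   // > 0 while a load/branch latency blocks this thread
            boolean active = true;

            HwThread(String name) { this.name = name; }

            boolean readyToIssue() { return active && latencyCyclesRemaining == 0; }
        }

        /**
         * One decision per cycle: the real-time scheme orders the threads (highest
         * priority first); latency bridging then skips any thread whose next
         * instruction cannot be fed into the pipeline.
         */
        static HwThread selectForIssue(List<HwThread> byRealTimePriority) {
            for (HwThread t : byRealTimePriority) {   // first: real-time scheme
                if (t.readyToIssue()) {
                    return t;                         // second: latency bridging
                }
            }
            return null;                              // no ready thread: pipeline bubble
        }

        public static void main(String[] args) {
            HwThread a = new HwThread("A");
            HwThread b = new HwThread("B");
            a.latencyCyclesRemaining = 2;             // A is stalled by a branch latency
            System.out.println(selectForIssue(List.of(a, b)).name);  // prints "B"
        }
    }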

2 Related Work

We introduce a new scheduling scheme based on a fixed percentage of the processor share; the following subsections present related approaches, which are typically implemented in software. None of these schemes has been evaluated in the context of a multithreaded microcontroller before.

Proportional share stands for a class of algorithms in which every thread gets a share of the processor power proportional to its importance. This technique originates from multimedia systems and networks, where data packets must be delivered using a fixed part of the available bandwidth. Mostly those systems have soft real-time requirements, which means that missing a deadline is undesirable but not fatal. There are several versions of proportional share scheduling, e.g. lottery scheduling [10] or stride scheduling [11]. A common feature of all proportional share algorithms is that the processor power available to each thread is not fixed as in GP, but depends on the number of threads in the system. The overall available processor power is shared among the active threads: if a new thread enters the system, the processor power for the old threads is decreased. Furthermore, a thread gets its desired portion only in the long term; no short-term guarantees can be made.

The fair share scheme [3] assigns a percentage of the available processor power to each thread. The scheduler gives a priority to each thread and monitors which part of the processor power it uses. If a thread exceeds its dedicated percentage, its priority is decreased. This technique assigns each thread an average part of the processor power corresponding to the requested percentage, but this is a long-term statistical value. In a short-term view the real percentages may differ strongly from the requested values, which makes it difficult to guarantee hard deadlines or quick event response times.

Aperiodic servers were designed to handle aperiodic tasks. The best-known servers of this type are the deferrable server [7] and the sporadic server [9]. The basic idea is to assign a server to each task. The server gets an amount of tickets in a given period to run the task; after that period, the tickets are replenished. The deferrable server replenishes the tickets independent of the ticket usage. This allows a simple implementation, but only one such server can run in a system node if hard deadlines must be guaranteed. The sporadic server replenishes tickets dependent on their usage: a used ticket is replenished a given time unit after the start of its usage. This leads to a more complex implementation, since the server must keep track of every ticket, but it allows more than one server to run in a system node. However, neither server type has a global instance that prevents tasks from requesting more tickets than specified and thus harming the real-time behavior of other tasks in the system.

The bandwidth share server (BSS) [2, 8] was introduced to isolate threads or tasks in a highly dynamic system. It uses a two-level scheduling mechanism. On the local level, several tasks or threads of an application are scheduled by a local scheduler. On the global level, each application gets a defined bandwidth of the processor power. This is achieved by a hybrid global scheduler, which combines EDF scheduling with a budget-based scheduling. Each application has a deadline (the shortest deadline of all tasks belonging to the application) and a budget.
Among all active applications, the one with the shortest deadline gets the processor until its goal is reached or its budget is exhausted. This technique allows a strict isolation of applications and can guarantee hard deadlines. A drawback of the BSS scheme is the high complexity of calculating a budget, which must be recalculated every time a new task enters the system. Therefore BSS is still too complex to be realized in hardware. Another problem arises if a new thread enters an application whose budget an old thread has already exhausted: the new thread will not get any budget before the deadline of the old thread, which delays its execution.

3 Guaranteed Percentage Scheduling

This section presents the GP scheduling scheme, which is designed for use on a multithreaded processor. Its main goals are, first, a strict isolation of the real-time threads and, second, quick and predictable event response times needed to handle hard real-time events. As a third goal, the scheduling scheme should exploit the ability of a multithreaded processor to hide latencies by context switching, yielding an additional performance boost through better utilization of the processor resources.

3.1 Basic Principles

The GP scheduling scheme assigns each thread a specific percentage of the processor power over a short time period of 100 processor cycles. In the case of three threads A, B, and C, we may assign 20% of the processor power to thread A, 30% to thread B, and 50% to thread C. The scheduling scheme guarantees that each thread receives the requested percentage. The fixed percentages of GP allow a strict isolation of each thread: a new incoming thread cannot harm the already running threads, because it does not affect their guaranteed percentages. Furthermore, admission control is rather simple: a new thread may be rejected if the sum of the percentages of the real-time threads would exceed 100% of the processor power. GP offers three classes that define how a requested percentage is assigned to a thread:

Exact: a thread gets exactly the requested percentage of the processor power in the interval, not more and not less. This mode is very useful if a thread has to keep a specific data rate, e.g. while reading or writing data to an interface with a low amount of jitter.

Minimum: a thread gets at least the requested percentage of the processor power in the interval. If there is processor power left in the interval, the thread may receive more. This mode is useful for keeping deadlines.

Maximum: a thread gets at most the requested percentage of the processor power. This is useful for non real-time threads, which run concurrently with real-time threads and may not load the system beyond a given limit.

Threads of different classes may be mixed within a processor workload. Finally, GP should support latency utilization on multithreaded processors. A workload with hard real-time threads of the classes exact and minimum may not exceed the 100% processor power that can be statically assigned. Nevertheless, the existence of latency slots dynamically allows a utilization of more than 100%, which can be used by additional soft real-time or non real-time threads of the class maximum. Besides the ability of very fast context switching, the quality of latency utilization strongly depends on the number of threads waiting for execution. The processor needs a pool of these threads to switch to in case of a latency; if not enough threads are active, the processor may not find an appropriate thread to switch to. Standard real-time scheduling schemes like EDF or FPP tend to shrink this pool, because they first execute the most important thread, then the second most important thread, and so on [5]. In contrast, GP and LLF tend to keep threads alive as long as possible. So, as the evaluation shows, GP and LLF perform more efficiently on a multithreaded processor than standard schemes like EDF or FPP.

3.2 GP Implementation on a Multithreaded Processor

The GP scheme is implemented on our multithreaded processor by defining 100-cycle intervals in which the percentages are guaranteed. At the beginning of an interval, one counter per thread is initialized with a value corresponding to the thread's percentage. In each cycle a thread is determined for execution and its counter is decreased. To fulfill all hard real-time requests, first the threads of the classes exact and minimum are executed, second the threads of the class minimum that have already reached their assigned percentages, and third the threads of the class maximum. If the counter of an exact thread reaches the value 0, the thread gets no more time until the next interval starts. A minimum thread will be executed after a counter value of 0 when no other exact or minimum thread has a counter value greater than 0. If a thread is blocked due to latencies, other threads use these cycles and the counters of both threads are decreased; the counter of the blocked thread must also be decreased to account for the correct runtime of the thread. Threads that are suspended are excluded from the schedule. The implementation guarantees a strict isolation of the real-time threads (of the classes exact and minimum) by enforcing the statically assigned percentage for each thread without influencing the other threads (in contrast to a proportional share scheme, see section 2).
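The counter-based selection just described, together with the admission test from section 3.1, can be sketched in software as follows. This is a hedged model under stated assumptions (class, field, and method names such as GpThread, budget, and admit are illustrative); it is not the hardware priority manager of the Komodo microcontroller.

    // Hedged software model of GP scheduling over 100-cycle intervals.
    // Illustrative names; not the Komodo hardware implementation.
    import java.util.List;

    public class GpSchedulerSketch {

        enum GpClass { EXACT, MINIMUM, MAXIMUM }

        static class GpThread {
            final String name;
            final GpClass gpClass;
            final int percentage;   // requested share of the 100-cycle interval
            int budget;             // per-interval counter
            boolean suspended;      // suspended threads are excluded from the schedule

            GpThread(String name, GpClass gpClass, int percentage) {
                this.name = name; this.gpClass = gpClass; this.percentage = percentage;
            }
        }

        static final int INTERVAL = 100;

        /** Admission control: hard real-time shares (exact and minimum) may not exceed 100%. */
        static boolean admit(List<GpThread> running, GpThread candidate) {
            int sum = (candidate.gpClass == GpClass.MAXIMUM) ? 0 : candidate.percentage;
            for (GpThread t : running)
                if (t.gpClass != GpClass.MAXIMUM) sum += t.percentage;
            return sum <= 100;
        }

        /** At the start of each 100-cycle interval every counter is reloaded. */
        static void startInterval(List<GpThread> threads) {
            for (GpThread t : threads) t.budget = t.percentage * INTERVAL / 100;
        }

        /**
         * One decision per cycle: first exact and minimum threads with remaining
         * budget, then minimum threads that already used their share, then
         * maximum (non real-time) threads with remaining budget.
         */
        static GpThread select(List<GpThread> threads) {
            GpThread exhaustedMinimum = null;
            for (GpThread t : threads) {
                if (t.suspended) continue;
                if ((t.gpClass == GpClass.EXACT || t.gpClass == GpClass.MINIMUM) && t.budget > 0)
                    return t;                                    // guaranteed share first
                if (exhaustedMinimum == null && t.gpClass == GpClass.MINIMUM && t.budget <= 0)
                    exhaustedMinimum = t;                        // may get spare cycles
            }
            if (exhaustedMinimum != null) return exhaustedMinimum;
            for (GpThread t : threads)
                if (!t.suspended && t.gpClass == GpClass.MAXIMUM && t.budget > 0)
                    return t;                                    // bounded non real-time work
            return null;
        }

        public static void main(String[] args) {
            List<GpThread> ts = List.of(
                    new GpThread("A", GpClass.EXACT, 20),
                    new GpThread("B", GpClass.MINIMUM, 30),
                    new GpThread("C", GpClass.MAXIMUM, 50));
            startInterval(ts);
            GpThread t = select(ts);
            t.budget--;                                          // the counter of the executed thread is decreased
            System.out.println(t.name + " issues this cycle");   // prints "A issues this cycle"
            // false: 20% + 30% already assigned, a further 60% exact thread would exceed 100%
            System.out.println("admit 60% exact thread: " + admit(ts, new GpThread("D", GpClass.EXACT, 60)));
        }
    }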
4 Evaluation Testbed: The Komodo Microcontroller

The Komodo microcontroller [1] features a multithreaded Java processor core, which supports multiple threads with zero-cycle context switching overhead. Because of its application for embedded systems, the processor core of the Komodo microcontroller is kept at the hardware level of a simple scalar processor. As shown in Fig. 1, the four-stage pipelined processor core consists of an instruction fetch, a decode, an operand fetch, and a memory access and execution unit. Four stack sets are provided on the processor chip. A signal unit triggers the execution of the real-time threads on the occurrence of external signals.

Figure 1. Block diagram of the Komodo microcontroller (memory interface; instruction fetch with PC1 to PC4 and IW1 to IW4; instruction decode with priority manager and signal unit; operand fetch; memory access and execute; four stack register sets)

The instruction fetch unit holds four program counters (PCs) with dedicated status bits (e.g. thread active/suspended); each PC is assigned to a different thread.

Four-byte portions are fetched over the memory interface and put into the corresponding instruction window (IW). Several instructions may be contained in a fetch portion, because the average Java bytecode length is 1.8 bytes. The instruction decode unit contains the above-mentioned IWs, dedicated status bits (e.g. priority), and counters. A priority manager decides each cycle, based on these bits and counters, from which IW the next instruction will be decoded. We implemented the FPP, EDF, LLF, and GP scheduling schemes within the priority manager for IW selection. Latencies may result from branches or memory accesses. To avoid pipeline stalls, instructions from threads other than the highest-priority thread can be fed into the pipeline: the decode unit predicts the latency after such an instruction and proceeds with instructions from other IWs. There is no overhead for such a context switch. No save/restore of registers or removal of instructions from the pipeline is needed, because each thread has its own stack set. Additionally, due to the fetch bandwidth of 4 bytes and the average bytecode length of 1.8 bytes, there are almost always fetched instructions in the IWs.

5 Evaluation

The GP scheduling scheme provides a strict isolation of threads in a highly dynamic environment. This is an obvious advantage, yielding more dependable systems. In the following we evaluate the performance of GP, measured in shortened deadlines, against the standard real-time scheduling schemes FPP, EDF, and LLF on the Komodo processor core. In particular, we investigate the performance impact of latency usage for the different scheduling schemes. The Komodo processor core is simulated in software and implemented in hardware on a Xilinx FPGA; the chip-space requirements of the four-threaded processor kernel with 64 stack entries for each thread, and of the stacks themselves, are reported in [6]. Our workload consists of three application programs that are typical for real-time systems. The first program is an impulse counter (IC), which reads data from an interface, scales it, and stores it in memory. The other two programs are a PID element (PID) and a rather costly Fast Fourier Transform (FFT). Latencies from memory accesses (1 cycle) and branches (2 cycles) are utilized by instructions of threads other than the high-priority thread. Table 1 specifies the size of the three workload programs and the rate of memory accesses and branches causing latencies.

Program           Bytecodes   Load/Store   Branches
Impulse counter   9           22.2%        11%
PID-element                   .3%          7.8%
FFT                           .2%          6%

Table 1. The benchmark programs

5.1 First Evaluation: Threads with Similar Deadlines

In the first part of the evaluation we executed four equal programs on the processor. In this first experiment, all four threads were given the same real-time parameters (deadline = period, starting processor utilization¹ = 0.25 for each thread). Under FPP all threads obtain the same priority; under GP each thread is a member of the class exact and obtains 25% of the available computing time. The common deadline is then shortened until the scheduler cannot keep it any more, so the result is a performance indication of the scheduling technique. The results of the different schedulers are compared in figure 2. The presentation is scaled to an ideal non-multithreaded processor, i.e. a value of 1 corresponds to the performance of a processor that uses no latencies but also needs no additional clock cycles for a context switch. Real processor kernels perform worse due to their context switching overhead.
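The measurement procedure described above (shortening the common deadline until the scheduler first misses it, then relating the result to an ideal non-multithreaded processor) can be sketched as follows. This is a hedged reading of the evaluation method; the meetsDeadline predicate and all numbers are stand-ins for the cycle-accurate Komodo simulation, not values from the paper.

    // Sketch of the deadline-shortening search used in the evaluations.
    // The feasibility test stands in for the Komodo simulator (assumption).
    import java.util.function.IntPredicate;

    public class DeadlineSearchSketch {

        /**
         * Returns the shortest deadline (in cycles) that the scheduler still meets,
         * by shortening the deadline step by step from the ideal starting point.
         */
        static int shortestFeasibleDeadline(int startDeadline, int step, IntPredicate meetsDeadline) {
            int deadline = startDeadline;
            while (deadline - step > 0 && meetsDeadline.test(deadline - step)) {
                deadline -= step;            // still feasible: keep shortening
            }
            return deadline;                 // last deadline before the first miss
        }

        public static void main(String[] args) {
            int idealDeadline = 4000;        // shortest deadline on an ideal processor without latency use
            // Toy stand-in for the simulator: assume the workload fits as long as
            // the deadline stays at or above 3360 cycles.
            IntPredicate toySimulator = d -> d >= 3360;

            int best = shortestFeasibleDeadline(idealDeadline, 10, toySimulator);
            // With these toy numbers the ratio happens to be about 1.19.
            System.out.printf("speed-up = %.2f%n", (double) idealDeadline / best);
        }
    }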
Figure 2. Speed-up of the computation times of different schedulers with the same threads (benchmarks IC, PID, and FFT; schedulers FPP, EDF, LLF, and GP)

The multithreaded processor raises the speed-up for all benchmark programs and scheduling schemes and thus enhances the possible sample rates. Concerning the single scheduling strategies, all of them provide the same speed-up for the impulse counter. This is explained by the extreme shortness of the program, which does not leave the scheduling strategies any room for variation.

¹ Processor utilization = execution time without latency utilization / deadline.

For the PID element and the FFT, differences in the improvement of the deadline can be noticed for the different scheduling strategies. These differences are caused as follows: the performance gain of the multithreaded processor arises from the use of latencies by context switches, and it is important to keep the pool of threads waiting for execution well filled, which GP and LLF do best.

5.2 Second Evaluation: Threads with Different Deadlines

The second experiment uses all three programs and an additional non real-time dummy thread. The chosen real-time parameters were deadline = period and a starting processor utilization of 0.3 for each of the real-time threads. To evaluate the performance, we fix the deadlines for the IC and the FFT and shorten the deadline for the PID element until the first missed deadline occurs. The priorities for FPP are assigned according to rate-monotonic analysis. In the case of GP, the impulse counter and the FFT belong to the class exact and the PID element is in the class minimum. The start conditions are 30% of the execution time for every real-time thread and 10% for the non real-time thread. Figure 3 shows the results of our experiment.

Figure 3. Speed-ups of the workload with mixed application programs (FPP 1.70, EDF 1.70, LLF 1.70, GP 1.50)

The figure shows that again all scheduling algorithms profit from the multithreaded processor. It is remarkable that in this experiment LLF does not perform better than FPP or EDF. This, too, can be explained by the mixture of the threads: due to the big differences in the execution times of the threads and the corresponding deadlines, the thread with the least laxity remains the same over a long period. This leads to a similar behavior of LLF and EDF, which can be seen in the nearly identical number of context switches for LLF and EDF in figure 4.

Figure 4. Number of context switches (in % of cycles) for the different schedulers

The behavior of the GP scheduler is unexpected. Actually GP should be an ideal scheduler, because the threads in the class exact are held active until the deadline arrives: a thread that needs 10 msec for execution and has a deadline of 40 msec terminates with a share of 25% exactly at the given deadline. The drawback of GP in this experiment is caused by the GP implementation. Figure 5 shows the principal execution sequence of the four given threads within a single interval. As can be seen, after the termination of the two exact threads (after about 90 cycles, depending on the usage of the latencies) only the non real-time thread can utilize the latencies of the PID element. Therefore, the non real-time thread gets many more cycles than under LLF or EDF, and the performance for handling real-time events decreases. In this case, the number of executable threads always decreases at the end of each interval.

Figure 5. Thread execution within an interval of GP (100 cycles; threads Exact (FFT), Exact (IC), Minimum (PID), and non real-time)

So in GP scheduling a global and a local point of view must be distinguished. From the global point of view, from the start of a thread up to its deadline, GP is optimal, because it keeps threads alive as long as possible. But this optimal latency usage does not necessarily hold for the local point of view, a single interval, because threads of the class exact that have reached their allowance are no longer available for execution towards the end of each interval.
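The share arithmetic used in this discussion (a thread with 10 ms runtime and a 40 ms deadline finishes exactly at its deadline with a 25% share) amounts to completion time = runtime / share, or equivalently required share = runtime / deadline. A minimal check of these two numbers follows; the class name ShareCheck is, of course, illustrative.

    // Minimal check of the share arithmetic quoted in the text
    // (10 ms runtime, 40 ms deadline, 25% share).
    public class ShareCheck {
        public static void main(String[] args) {
            double runtimeMs = 10.0;
            double deadlineMs = 40.0;
            double share = 0.25;

            double completionMs = runtimeMs / share;        // 40.0 ms: exactly the deadline
            double requiredShare = runtimeMs / deadlineMs;  // 0.25: the share needed to meet it

            System.out.println("completes at " + completionMs + " ms (deadline " + deadlineMs + " ms)");
            System.out.println("required share = " + requiredShare);
        }
    }

The same relation, required share = runtime / deadline, underlies the percentages of the application example in the next section.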
5.3 Third Evaluation: An Application Example

The third evaluation experiment is derived from a real industrial application example, an autonomous guided vehicle (AGV). AGVs are guided by a reflective tape glued on the floor. A vehicle pursues its track by use of a CCD

line camera producing periodic events at a rate of 10 milliseconds. This period gives the deadline for converting and reading the camera information and for executing the control loop, which keeps the vehicle on the track. A second time-critical event is produced asynchronously by transponder-based position marks, which notify the vehicle that some predefined position is reached (e.g. a docking station). If the vehicle notices a position mark, the corresponding transponder, which is installed in the floor beside the track, must be read. The precision needed for position detection using these marks is 1 cm. This gives a vehicle-speed-dependent deadline for reading the transponder information. To solve this job, the AGV software is structured into three tasks. The first task (control task) performs the control loop based on the current camera information; this task is triggered by a timer event with a period of 10 milliseconds. The second task (camera task) converts and reads the next camera information; this task is triggered by the same timer event. The third task (transponder task) is triggered by the position mark events; it reads the transponder information and calculates the current position and the action to be taken.

Task              Deadline   Runtime   Percentage
control task      10 ms      5 ms      50%
camera task       10 ms      1 ms      10%
transponder task  15.4 ms    5.5 ms    35%

Table 2. Threads of the AGV

Table 2 shows realistic values for the three tasks: deadlines, runtimes, and assigned percentages when assuming GP scheduling. Additionally, a non real-time thread may use the remaining 5% of the computing power. The deadline of the transponder thread depends on the velocity of the vehicle when passing the transponder. To determine the maximum vehicle speed, we reduce the deadline of the transponder thread step by step until the deadline is missed for the first time; thus the maximum velocity of the vehicle when passing a transponder is computed. Because of the requested accuracy for position detection, we obtain the following formula:

maximum velocity = 0.01 m / minimum runtime

Figure 6 shows the maximum velocities obtained without latency utilization but assuming zero-cycle context switching for the different real-time scheduling strategies. The figure demonstrates the well-known disadvantage of FPP, which cannot guarantee a processor utilization of 100%. On the other hand, a zero-cycle context switch is not realistic for EDF, LLF, and GP on a conventional non-multithreaded microcontroller or microprocessor.

Figure 6. Reachable velocities without latency utilization (FPP 0.57 m/s, EDF 0.72 m/s, GP 0.72 m/s, LLF 0.72 m/s)

Figure 7. Reachable velocities with latency utilization (FPP 0.94 m/s, EDF 1.02 m/s, GP 1.01 m/s, LLF 1.37 m/s)

Figure 7 shows the increased velocity when using a multithreaded processor, which schedules with zero-cycle context switching and is additionally able to utilize latencies. GP performs similarly to EDF and better than FPP. Again, the non-optimal local behavior of the GP implementation prevents a result as good as that of LLF, which provides the best thread mix in this evaluation.
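The numbers in Table 2 and the velocity bound can be reproduced with a few lines of arithmetic. The following hedged sketch uses only the values given above; the class name AgvShareSketch and the helper methods are illustrative, and the 0.65 m/s figure printed at the end is simply 0.01 m divided by the 15.4 ms deadline from Table 2, not a measured result from the experiments.

    // Arithmetic illustration for section 5.3, using only the Table 2 values.
    // Not the paper's simulation; the 15.4 ms bound is the table deadline.
    public class AgvShareSketch {

        /** GP share needed so that a task with the given runtime meets its deadline. */
        static double shareOf(double runtimeMs, double deadlineMs) {
            return runtimeMs / deadlineMs;
        }

        /** 1 cm position accuracy divided by the shortest deadline that is still met. */
        static double maxVelocity(double deadlineSeconds) {
            return 0.01 / deadlineSeconds;
        }

        public static void main(String[] args) {
            double control = shareOf(5.0, 10.0);      // 0.50  -> 50% in Table 2
            double camera = shareOf(1.0, 10.0);       // 0.10  -> 10% in Table 2
            double transponder = shareOf(5.5, 15.4);  // ~0.357 -> assigned 35% in Table 2
            double nonRealTime = 1.0 - 0.50 - 0.10 - 0.35;  // remaining 5% for the non real-time thread

            System.out.printf("shares: %.2f %.2f %.2f, non real-time: %.2f%n",
                    control, camera, transponder, nonRealTime);
            // At the 15.4 ms deadline of Table 2 the bound is about 0.65 m/s; shortening
            // the deadline, as done in the experiments, raises the reachable speed.
            System.out.printf("velocity bound at 15.4 ms: %.2f m/s%n", maxVelocity(0.0154));
        }
    }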
6 Conclusions

In this paper we propose a new real-time scheduling technique for systems that require a strict isolation of real-time threads from each other: the GP scheduling technique. Each thread is assigned a specific guaranteed percentage of the processor power. The threads are executed in isolation, i.e. no thread has any influence on other threads. A hardware scheduler in conjunction with a multithreaded processor core guarantees the execution of instructions of each thread according to its percentage within a time interval of 100 processor cycles. We achieve fast response times to events by means of the fast context switch of the multithreaded microcontroller. The microcontroller can take advantage of several tasks with

similar deadlines by bridging latencies, thus offering additional performance gains. When the deadlines are very different, our GP implementation loses some performance due to a non-optimal local thread mix. We compared performance and implementation overhead of GP scheduling against FPP, EDF, and LLF scheduling using several benchmarks on our Komodo microcontroller, which features a multithreaded Java processor kernel. The evaluation results show that all real-time scheduling schemes strongly benefit from the multithreaded processor architecture of the Komodo microcontroller. Only GP offers an isolation of real-time threads: a misbehaving thread cannot harm the real-time properties of other threads. FPP, EDF, and LLF cannot offer this advantage. Our evaluations show that GP scheduling reaches a speed-up similar to EDF and FPP but worse than LLF. GP scheduling works best if the threads have similar deadlines. However, additional evaluations ([4]) show that its hardware costs are still reasonable, whereas the LLF overhead is prohibitive. The hardware implementation overhead of GP is similar to that of EDF: both schemes can be realized in hardware and deliver a scheduling decision in one clock cycle, whereas LLF is too complex to react in that short time. Finally, GP relies on the quick context switching ability of the upcoming multithreaded processor generation; if the costs for context switching increase, GP, like LLF, becomes inefficient. From the viewpoint of dependable systems, GP scheduling is the only scheme that provides the advantage of isolation, but it may suffer a slight performance degradation compared to the conventional schemes.

References

[1] U. Brinkschulte, C. Krakowski, J. Kreuzinger, and T. Ungerer. A multithreaded Java microcontroller for thread-oriented real-time event-handling. PACT, Newport Beach, pages 34-39, October.

[2] G. Lipari and G. Buttazzo. Scheduling real-time multi-task applications in an open system. 11th Euromicro Workshop on Real-Time Systems.

[3] J. Kay and P. Lauder. A fair share scheduler. Communications of the ACM, 31(1):44-55, January.

[4] J. Kreuzinger. Echtzeitfähige Ereignisbehandlung mit Hilfe eines mehrfädigen Java-Microcontrollers. Logos Verlag, Berlin.

[5] J. Kreuzinger, A. Schulz, M. Pfeffer, T. Ungerer, U. Brinkschulte, and C. Krakowski. Real-time scheduling on multithreaded processors. 7th International Conference on Real-Time Computing Systems and Applications (RTCSA 2000), Cheju Island, South Korea, December.

[6] J. Kreuzinger, R. Zulauf, A. Schulz, T. Ungerer, M. Pfeffer, U. Brinkschulte, and C. Krakowski. Performance evaluations and chip-space requirements of a multithreaded Java microcontroller. Second Annual Workshop on Hardware Support for Objects and Microarchitectures for Java, in conjunction with ICCD 00, Austin, Texas, USA, pages 32-36, September.

[7] J. Lehoczky, L. Sha, and J. Strosnider. Enhanced aperiodic responsiveness in hard real-time environments. 8th Real-Time Systems Symposium (RTSS).

[8] G. Lipari and S. Baruah. Efficient scheduling of real-time multi-task applications in dynamic systems. 6th Real-Time Technology and Applications Symposium (RTAS).

[9] B. Sprunt. Aperiodic task scheduling for real-time systems. Ph.D. thesis, Department of Electrical and Computer Engineering, Carnegie Mellon University.

[10] C. A. Waldspurger and W. E. Weihl. Lottery scheduling: Flexible proportional-share resource management. Proceedings of the First Symposium on Operating System Design and Implementation, November.

[11] C. A. Waldspurger and W. E. Weihl. Stride scheduling: Deterministic proportional-share resource management. Technical report, MIT Laboratory for Computer Science, June.


Middleware Support for Aperiodic Tasks in Distributed Real-Time Systems Outline Middleware Support for Aperiodic Tasks in Distributed Real-Time Systems Yuanfang Zhang, Chenyang Lu and Chris Gill Department of Computer Science and Engineering Washington University in St. Louis

More information

Precedence Graphs Revisited (Again)

Precedence Graphs Revisited (Again) Precedence Graphs Revisited (Again) [i,i+6) [i+6,i+12) T 2 [i,i+6) [i+6,i+12) T 3 [i,i+2) [i+2,i+4) [i+4,i+6) [i+6,i+8) T 4 [i,i+1) [i+1,i+2) [i+2,i+3) [i+3,i+4) [i+4,i+5) [i+5,i+6) [i+6,i+7) T 5 [i,i+1)

More information

2. REAL-TIME CONTROL SYSTEM AND REAL-TIME NETWORKS

2. REAL-TIME CONTROL SYSTEM AND REAL-TIME NETWORKS 2. REAL-TIME CONTROL SYSTEM AND REAL-TIME NETWORKS 2.1 Real-Time and Control Computer based digital controllers typically have the ability to monitor a number of discrete and analog inputs, perform complex

More information

A Comparison of Capacity Management Schemes for Shared CMP Caches

A Comparison of Capacity Management Schemes for Shared CMP Caches A Comparison of Capacity Management Schemes for Shared CMP Caches Carole-Jean Wu and Margaret Martonosi Princeton University 7 th Annual WDDD 6/22/28 Motivation P P1 P1 Pn L1 L1 L1 L1 Last Level On-Chip

More information

Migration Based Page Caching Algorithm for a Hybrid Main Memory of DRAM and PRAM

Migration Based Page Caching Algorithm for a Hybrid Main Memory of DRAM and PRAM Migration Based Page Caching Algorithm for a Hybrid Main Memory of DRAM and PRAM Hyunchul Seok Daejeon, Korea hcseok@core.kaist.ac.kr Youngwoo Park Daejeon, Korea ywpark@core.kaist.ac.kr Kyu Ho Park Deajeon,

More information

Improving Real-Time Performance on Multicore Platforms Using MemGuard

Improving Real-Time Performance on Multicore Platforms Using MemGuard Improving Real-Time Performance on Multicore Platforms Using MemGuard Heechul Yun University of Kansas 2335 Irving hill Rd, Lawrence, KS heechul@ittc.ku.edu Abstract In this paper, we present a case-study

More information

CS450/650 Notes Winter 2013 A Morton. Superscalar Pipelines

CS450/650 Notes Winter 2013 A Morton. Superscalar Pipelines CS450/650 Notes Winter 2013 A Morton Superscalar Pipelines 1 Scalar Pipeline Limitations (Shen + Lipasti 4.1) 1. Bounded Performance P = 1 T = IC CPI 1 cycletime = IPC frequency IC IPC = instructions per

More information

Mark Sandstrom ThroughPuter, Inc.

Mark Sandstrom ThroughPuter, Inc. Hardware Implemented Scheduler, Placer, Inter-Task Communications and IO System Functions for Many Processors Dynamically Shared among Multiple Applications Mark Sandstrom ThroughPuter, Inc mark@throughputercom

More information

EC EMBEDDED AND REAL TIME SYSTEMS

EC EMBEDDED AND REAL TIME SYSTEMS EC6703 - EMBEDDED AND REAL TIME SYSTEMS Unit I -I INTRODUCTION TO EMBEDDED COMPUTING Part-A (2 Marks) 1. What is an embedded system? An embedded system employs a combination of hardware & software (a computational

More information

Real Time Operating Systems and Middleware

Real Time Operating Systems and Middleware Real Time Operating Systems and Middleware Introduction to Real-Time Systems Luca Abeni abeni@disi.unitn.it Credits: Luigi Palopoli, Giuseppe Lipari, Marco Di Natale, and Giorgio Buttazzo Scuola Superiore

More information

Achieving Distributed Buffering in Multi-path Routing using Fair Allocation

Achieving Distributed Buffering in Multi-path Routing using Fair Allocation Achieving Distributed Buffering in Multi-path Routing using Fair Allocation Ali Al-Dhaher, Tricha Anjali Department of Electrical and Computer Engineering Illinois Institute of Technology Chicago, Illinois

More information

Scheduling Multi-Periodic Mixed-Criticality DAGs on Multi-Core Architectures

Scheduling Multi-Periodic Mixed-Criticality DAGs on Multi-Core Architectures Scheduling Multi-Periodic Mixed-Criticality DAGs on Multi-Core Architectures Roberto MEDINA Etienne BORDE Laurent PAUTET December 13, 2018 1/28 Outline Research Context Problem Statement Scheduling MC-DAGs

More information

SPECULATIVE MULTITHREADED ARCHITECTURES

SPECULATIVE MULTITHREADED ARCHITECTURES 2 SPECULATIVE MULTITHREADED ARCHITECTURES In this Chapter, the execution model of the speculative multithreading paradigm is presented. This execution model is based on the identification of pairs of instructions

More information

INF1060: Introduction to Operating Systems and Data Communication. Pål Halvorsen. Wednesday, September 29, 2010

INF1060: Introduction to Operating Systems and Data Communication. Pål Halvorsen. Wednesday, September 29, 2010 INF1060: Introduction to Operating Systems and Data Communication Pål Halvorsen Wednesday, September 29, 2010 Overview Processes primitives for creation and termination states context switches processes

More information

6. Results. This section describes the performance that was achieved using the RAMA file system.

6. Results. This section describes the performance that was achieved using the RAMA file system. 6. Results This section describes the performance that was achieved using the RAMA file system. The resulting numbers represent actual file data bytes transferred to/from server disks per second, excluding

More information

Comparison of scheduling in RTLinux and QNX. Andreas Lindqvist, Tommy Persson,

Comparison of scheduling in RTLinux and QNX. Andreas Lindqvist, Tommy Persson, Comparison of scheduling in RTLinux and QNX Andreas Lindqvist, andli299@student.liu.se Tommy Persson, tompe015@student.liu.se 19 November 2006 Abstract The purpose of this report was to learn more about

More information

Scheduling. CSC400 - Operating Systems. 7: Scheduling. J. Sumey. one of the main tasks of an OS. the scheduler / dispatcher

Scheduling. CSC400 - Operating Systems. 7: Scheduling. J. Sumey. one of the main tasks of an OS. the scheduler / dispatcher CSC400 - Operating Systems 7: Scheduling J. Sumey Scheduling one of the main tasks of an OS the scheduler / dispatcher concerned with deciding which runnable process/thread should get the CPU next occurs

More information

Resource Reservation & Resource Servers

Resource Reservation & Resource Servers Resource Reservation & Resource Servers Resource Reservation Application Hard real-time, Soft real-time, Others? Platform Hardware Resources: CPU cycles, memory blocks 1 Applications Hard-deadline tasks

More information

Design and Implementation of a FPGA-based Pipelined Microcontroller

Design and Implementation of a FPGA-based Pipelined Microcontroller Design and Implementation of a FPGA-based Pipelined Microcontroller Rainer Bermbach, Martin Kupfer University of Applied Sciences Braunschweig / Wolfenbüttel Germany Embedded World 2009, Nürnberg, 03.03.09

More information

CPU Scheduling. Daniel Mosse. (Most slides are from Sherif Khattab and Silberschatz, Galvin and Gagne 2013)

CPU Scheduling. Daniel Mosse. (Most slides are from Sherif Khattab and Silberschatz, Galvin and Gagne 2013) CPU Scheduling Daniel Mosse (Most slides are from Sherif Khattab and Silberschatz, Galvin and Gagne 2013) Basic Concepts Maximum CPU utilization obtained with multiprogramming CPU I/O Burst Cycle Process

More information