
Latencies in Linux and FreeBSD kernels with different schedulers O(1), CFS, 4BSD, ULE

Jaroslav ABAFFY
Faculty of Informatics and Information Technologies, Slovak University of Technology
Bratislava, 842 16, Slovakia

and

Tibor KRAJČOVIČ
Faculty of Informatics and Information Technologies, Slovak University of Technology
Bratislava, 842 16, Slovakia

ABSTRACT

This paper is a study of scheduler latencies in different versions of the Linux 2.6 kernel, with emphasis on its usage in real-time systems. It tries to find the optimal kernel configuration leading to minimal latencies, using some soft real-time tuning options. Benchmark tests under heavy load show differences between kernels and also between different scheduling policies. We compare Linux kernel 2.6.22, with the long-serving O(1) scheduler, against the to-date latest Linux kernel 2.6.28, which already uses the completely new CFS scheduler. Not only the scheduler but also other kernel options can reduce latencies, so we also compare a kernel compiled for server employment with the same kernel tuned to act as a soft real-time operating system. For better comparison we perform selected benchmarks on the FreeBSD 7.1 kernel as well, compiled once with the older 4BSD scheduler and once with the newer ULE scheduler. The ULE scheduler was improved in version 7 of FreeBSD, so we also compare it with ULE in FreeBSD 6.3. The emphasis of this paper is on finding the scheduler with minimal latencies on the tested hardware.

Keywords: Latency, Linux, FreeBSD, Scheduling, Benchmark, Kernel.

1. INTRODUCTION

Linux and FreeBSD were developed as general-purpose operating systems without any consideration for real-time applications, and they are widely used as server operating systems. In recent years they have become attractive as desktop operating systems too, and nowadays they are finding their way into the real-time community due to their low cost and open standards. There is a well-known dilemma, not only in operating system programming, called throughput vs. latency: high throughput is expected from servers, but in embedded systems the main goal is low latency.

The methodology used for finding the optimal kernel configuration is based on setting relevant kernel options, compiling the kernel, and running benchmarks on it. With this method we are able to obtain the optimal kernel with the CFS scheduler and also with the O(1) scheduler. These kernels are compared with one another and also with versions compiled in the default configuration. For FreeBSD we use the default configuration, once with the 4BSD scheduler and once with ULE.

The benchmark primarily used for the comparison of these kernels was Interbench. During the tests we found out that this benchmark can, under several conditions, produce inaccurate results. That provoked us into developing another benchmark called PI-ping. With this tool we are able to compare latencies under heavy load, and we are also able to explain the previously misleading results. Our results were then confirmed by another benchmark, Hackbench.

PI-ping is based on the two different types of process that occur in operating systems. In the first group there are processes that demand a lot of CPU. The other processes are interactive: they request good latencies. In this benchmark we use a computation of the number π for the first group, and as the interactive process we use a network ping to localhost every 100 ms. This time was empirically determined long ago as the maximum latency at which the user still considers the system interactive.
Therefore it is widely used in different operating systems as the default amount of time assigned by the scheduler to a process between two task switches. The computation of the number π is used to prevent any optimization by the compiler, because its digits are not predictable. With this tool we are able to compare how well the kernel and the scheduler serve CPU-consuming processes, and we can also see how many interactive processes can meet their deadlines under heavy load.
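
The paper's own π routine is not reproduced here; the following C fragment is only an illustrative stand-in for such a CPU-bound load process. It accumulates the Leibniz series for π, and the volatile qualifier keeps the compiler from optimizing the whole loop away:

    #include <stdio.h>

    /* CPU-bound load process in the spirit of PI-ping: approximate pi
     * with the Leibniz series 4*(1 - 1/3 + 1/5 - 1/7 + ...). */
    int main(void)
    {
        volatile double pi = 0.0;   /* volatile: the result must really be computed */
        double sign = 1.0;
        unsigned long k;

        for (k = 0; k < 400000000UL; k++) {
            pi += sign * 4.0 / (2.0 * k + 1.0);
            sign = -sign;
        }
        printf("pi ~ %.8f\n", pi);
        return 0;
    }

Running several instances of such a process in parallel saturates the CPU and forces the scheduler to arbitrate between them and the interactive ping.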

2. TESTED KERNELS

Linux 2.6.22 with O(1) scheduler
This kernel is the last Linux kernel that uses the O(1) scheduler. The name of the scheduler is based on the popular big-O notation used to describe the complexity of algorithms. It does not mean that this scheduler is the fastest, but that it can schedule processes within a constant amount of time, independent of the number of tasks running in the system. Therefore this scheduler is suitable for real-time applications, because it guarantees an upper bound on scheduling time.

Linux 2.6.28 with CFS scheduler
Since version 2.6.23 the Linux kernel comes with the Completely Fair Scheduler (CFS), the first implementation of a fair-queuing process scheduler in a widely used general-purpose operating system. Schedulers in other operating systems (and also the O(1) scheduler) are based on run queues, but this scheduler arranges processes in a red-black tree. The complexity of this scheduler is O(log n). The main advantage of a red-black tree is that its longest path is at most twice as long as its shortest path. This scheduler was written to accommodate the different requirements of desktops and servers.

FreeBSD 7.1 with 4BSD scheduler
4BSD is the default scheduler in all FreeBSD versions prior to 7.1, although a newer scheduler called ULE has existed since FreeBSD 5.0. It is the traditional Unix scheduler inherited from 4.3BSD, to which FreeBSD added scheduling classes. Like the Linux O(1) scheduler, it is based on run queues.

FreeBSD 7.1 with ULE scheduler
The name of this latest scheduler in FreeBSD comes from the filename where it is located in the kernel source code, /sys/kern/sched_ule.c. In comparison to O(1) and 4BSD there are not two run queues but three: idle, current and next. Processes are scheduled from the queue current, and after the expiration of their time slice they are moved to the queue next. Rescheduling is done by switching these two queues. The queue idle holds idle processes. The main advantage of this scheduler is that it can have run queues per processor, which enables better performance on multiprocessors. In this paper we also perform selected benchmarks on the older ULE scheduler from FreeBSD 6.3 to see whether significant improvements were made, as presented in [7].
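
The constant-time selection shared by these run-queue schedulers can be pictured with a small C sketch. This is not kernel code and all names are illustrative: picking the next task is a find-first-bit over a priority bitmap, and the wholesale queue switch at the end mirrors both the active/expired arrays of O(1) and the current/next queues of ULE:

    #include <strings.h>   /* ffs() */

    #define NPRIO 32       /* toy priority count; real kernels use more */

    struct task {
        int prio;
        struct task *next;
    };

    struct rq {
        unsigned int bitmap;         /* bit p set => queue[p] is non-empty */
        struct task *queue[NPRIO];   /* one linked list per priority */
    };

    /* Constant-time selection: find the first non-empty priority list.
     * No operation here depends on the number of runnable tasks. */
    static struct task *pick_next(struct rq *active)
    {
        int p = ffs(active->bitmap);   /* lowest set bit, 1-based; 0 if empty */
        return p ? active->queue[p - 1] : 0;
    }

    /* When all tasks in 'active' have used up their time slices,
     * reschedule by swapping the two queue pointers. */
    static void switch_queues(struct rq **active, struct rq **expired)
    {
        struct rq *tmp = *active;
        *active = *expired;
        *expired = tmp;
    }

Whatever the number of runnable tasks, pick_next touches a fixed number of words, which is exactly the O(1) guarantee discussed above.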
3. INTERBENCH BENCHMARK

For testing the different kernels we used a program named Interbench, which generates system load and measures latencies under different conditions. The tested interactive tasks are:
Audio - simulated as a thread that tries to run at 50 ms intervals (20 times in a second) and requires 5% CPU.
Video - simulated as a thread that uses 40% CPU and tries to receive the CPU 60 times per second.
X - simulated as a thread that uses a variable amount of CPU ranging from 0 to 100%. This simulates an idle GUI where a window is grabbed and then dragged across the screen.
Server - simulated as a thread that uses 90% CPU and tries to run at 20 ms intervals (50 times in a second). This simulates an overloaded server.

These tasks were tested under different system loads:
None - an otherwise idle system.
Video - the video simulation thread is also used as a background load.
X - the X simulation thread is used as a load.
Burn - 4 threads fully CPU bound.
Write - a streaming write to disk, repeatedly writing a file the size of physical RAM.
Read - repeatedly reading a file the size of physical RAM from disk.
Compile - simulating a heavy 'make -j4' compilation by running Burn, Write and Read concurrently.
Memload - simulating heavy memory and swap pressure by repeatedly accessing 110% of available RAM, moving it around and freeing it.
Server - the server simulation thread is used as a load.

Each test was performed for 30 seconds and consumed roughly 1.5 billion CPU cycles per second, so it can be considered a sufficient time to obtain relevant data. The whole test took ca. 20 minutes.

Soft real-time kernel options
To find a kernel configuration that leads to minimal interrupt latency, we used the kernel in its default configuration as a reference and then added relevant options to the kernel configuration. Each tuned kernel was compared to the default configuration to see whether its real-time performance improved. The best results were achieved using these kernel options, which the configuration sketch below maps to their build symbols:
Dynamic ticks
High Resolution Timer Support
Timer frequency 1000 HZ
HPET Timer Support
Preemptible Kernel
Preempt The Big Kernel Lock
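
In menuconfig terms, these options correspond approximately to the following .config fragment for a 2.6.2x kernel. The symbol names below are our reconstruction and shift slightly between kernel versions (CONFIG_PREEMPT_BKL, for instance, exists in 2.6.22 but was later removed), so treat this as a sketch rather than the exact configuration used in the paper:

    # Soft real-time (SRT) tuning, approximate symbols for Linux 2.6.2x
    CONFIG_NO_HZ=y               # Dynamic ticks
    CONFIG_HIGH_RES_TIMERS=y     # High Resolution Timer Support
    CONFIG_HZ_1000=y             # Timer frequency 1000 HZ
    CONFIG_HZ=1000
    CONFIG_HPET_TIMER=y          # HPET Timer Support
    CONFIG_PREEMPT=y             # Preemptible Kernel
    CONFIG_PREEMPT_BKL=y         # Preempt The Big Kernel Lock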

A kernel compiled with these options is called soft real-time (SRT) in this paper. A kernel without these options is called Server, because it aims at better throughput instead of low latencies.

Interbench results
The following graph (Fig. 1) shows the average latencies, their standard deviations, and the maximal latencies of the simulated video thread under different system loads when using the default configuration of Linux kernel 2.6.22. We tested all the mentioned tasks, but the differences are most visible in the video thread benchmark.

Fig. 1. Latencies in kernel 2.6.22 Server (lower is better); bars: average latency (ms), SD (ms) and max latency (ms) for the loads None, X, Burn, Write, Read, Compile, Memload and Server.

After compiling the kernel with the soft real-time kernel options we were able to achieve improvements in these latencies (Fig. 2). Only under the load X, which represents user activity in a graphical interface, is there a degradation, especially in the maximal latency. This can be explained by the X thread demanding a lot of CPU at the same moments as the benchmarked video thread: the video thread requires 40% of the CPU every second, while X demands a variable amount at random times.

Fig. 2. Latencies in kernel 2.6.22 SRT (lower is better); same quantities and loads as in Fig. 1.

The next graph (Fig. 3) shows the results for Linux 2.6.28. Using the different kernel options did not lead in this case to such an observable impact on scheduling latencies. The reason is explained by the author of this scheduler, Ingo Molnar: "CFS uses nanosecond granularity accounting and does not rely on any jiffies or other HZ detail. Thus the CFS scheduler has no notion of 'timeslices' and has no heuristics whatsoever." [5]

Fig. 3. Latencies in kernel 2.6.28 (lower is better); same quantities and loads as in Fig. 1.

Benchmarking these kernels with Interbench has shown big differences between Linux 2.6.22 and 2.6.28. The main question that appeared is why the CFS scheduler was included in the mainline kernel when it shows such a performance overhead compared to O(1) under almost all conditions. CFS wins only when no other load is in the system, or when the load simulating a graphical user interface is present. CFS should deliver low latencies especially under loads where some processes are interactive and the others are CPU demanding. In our case these CPU-demanding loads are Burn, Compile and Server, where CFS theoretically has to have better results, but in practice these were the loads where it had its biggest problems.
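
Molnar's remark can be made concrete with a sketch of the bookkeeping CFS performs. The fragment below is illustrative C, not kernel code: tasks are kept ordered by the virtual runtime they have received, and the scheduler always picks the leftmost (smallest-vruntime) node. The real kernel uses a red-black tree so that insertion is O(log n) even in the worst case; a plain unbalanced search tree is used here only to keep the sketch short:

    #include <stddef.h>

    /* A runnable entity ordered by virtual runtime (in nanoseconds). */
    struct sched_ent {
        unsigned long long vruntime;      /* weighted CPU time received so far */
        struct sched_ent *left, *right;
    };

    /* Insert a task into the tree ordered by vruntime: O(log n) when
     * the tree is kept balanced, as the kernel's red-black tree is. */
    static struct sched_ent *enqueue(struct sched_ent *root, struct sched_ent *se)
    {
        if (!root)
            return se;
        if (se->vruntime < root->vruntime)
            root->left = enqueue(root->left, se);
        else
            root->right = enqueue(root->right, se);
        return root;
    }

    /* Pick the next task: the leftmost node, i.e. the task that has
     * received the least CPU time so far - the "fairest" choice. */
    static struct sched_ent *pick_leftmost(struct sched_ent *root)
    {
        if (!root)
            return NULL;
        while (root->left)
            root = root->left;
        return root;
    }

Because selection depends only on accumulated nanosecond-granularity runtimes, there is indeed no timeslice heuristic anywhere in this scheme.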

GFS scheduler
A problem with Interbench is that it runs the benchmark and the load as threads within one process. The CFS scheduler tries to distribute resources fairly between processes, and parent processes share their assigned time quantum with their children. When Interbench uses a CPU-demanding thread as a load and benchmarks an interactive thread like video, the CFS scheduler divides the quantum between these threads. For the next table (Tab. 1) we ran the Interbench thread Video with no other load from one shell, and the load Burn, representing 4 processes demanding 100% CPU, from another shell. This load was selected because it depends mostly on the CPU; the other loads also use a lot of memory and IO operations, and different kernel versions can have different IO schedulers and memory management, while we are now targeting the scheduler.

Tab. 1. Latencies with GFS

              Latency (ms)  SD (ms)  Max latency (ms)  % desired CPU  % deadlines met
O(1)          0.21          0.436    16.7              100            99.9
CFS           2.7           25.3     59.5              69.7           19.3
CFS 2 shells  16.2          2.5      36.2              83.8           32.9
CFS + GFS     0.07          0.08     0.18              100            100

An extension called Group Fair Scheduler (GFS) was added to the CFS scheduler in Linux kernel 2.6.24. It enables fair CPU sharing between groups of processes, in this case grouped by user ID. If user A runs 10 processes and user B runs 5 processes, both users get 50% of the CPU, independent of their number of processes. So we ran the benchmark under two different users and thereby achieved even better results than with the O(1) scheduler.

Interbench problems
While using Interbench we found several problems. It benchmarks only one thread and says nothing about the threads that are used as the load. When we ran Interbench with 1000 load processes on Linux 2.6.22, the system was very slow and the test took 60.261 seconds, compared to 32.537 seconds on 2.6.28; the responsiveness to user input was also very poor, but the reported results said otherwise. Furthermore, Interbench is available only for Linux, and we wanted to compare Linux and FreeBSD kernels. This motivated us to create our own benchmark.

4. PI-PING BENCHMARK

PI-ping uses for benchmarking the two types of processes that appear in operating systems: interactive tasks demanding low latencies, and processes with high CPU utilization. Modern operating systems have to perform well under both conditions. Most benchmarks are designed either for latency measurements or for performance comparison, but we wanted to compare both. The following graph (Fig. 4) shows how many deadlines were met by the interactive process, ping. The calculation is simple: we run a ping every 100 ms and measure the duration of the whole benchmark, rounded up to 100 ms. Dividing this time by 100 ms gives the number of expected successful pings, and the ratio of measured pings to expected pings says how many deadlines were met (a sketch of this accounting follows below).

Fig. 4. Percentage of met deadlines depending on the number of load processes (higher is better); y-axis: deadlines met [%]; x-axis: number of load processes, logarithmic from 1 to 8103; curves include CFS and 4BSD.

Using this benchmark we obtained completely different results than with Interbench. CFS succeeded even under a load of more than 8000 processes, and 100% of the interactive processes always met their deadlines. O(1), 4BSD and ULE have similar characteristics, but be aware of the logarithmic axis used in this graph. The latest ULE scheduler in FreeBSD 7.1 can also be considered a low-latency scheduler, since it too covers 100% of the deadlines when the number of load processes stays within acceptable limits; servers are often limited by administrators to at most 1024 processes anyway, and we now focus on latencies in small systems. Surprising was the decline of the O(1) scheduler: with only 20 load processes running in the background, the ping responsiveness was only around 80%. In the Interbench benchmark we achieved with the same kernel a 99.9% coverage of deadlines under 4 load processes, using the Video thread as the test, which itself demanded 40% of the CPU. How is it possible that a CPU-consuming test scores better than a small ping process?

PI-ping also shows problems with the ULE scheduler in FreeBSD 6.3. The results in the graph (Fig. 4) are the average values measured over 3 benchmark runs. The other schedulers have a balanced characteristic, and their results were almost the same in each experiment.
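
The deadline accounting just described fits in a few lines of C. This sketch is ours, not PI-ping source code, and assumes the caller supplies the measured wall-clock duration of the run and the number of answered pings:

    #include <stdio.h>

    /* PI-ping style accounting: one ping is expected every 100 ms, so the
     * expected count is the total run time, rounded up to a multiple of
     * 100 ms, divided by 100 ms. */
    static double deadlines_met(double total_time_s, long pings_answered)
    {
        long expected = (long)((total_time_s * 1000.0 + 99.0) / 100.0); /* ceil */
        return 100.0 * (double)pings_answered / (double)expected;
    }

    int main(void)
    {
        /* Hypothetical example: a run of 32.5 s with 310 answered pings. */
        printf("%.1f%% of deadlines met\n", deadlines_met(32.5, 310));
        return 0;
    }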
The following graph shows the results of three separate measurements of the ULE-6.3 scheduler.

Fig. 5. Results of ULE-6.3 in 3 measurements; y-axis: deadlines met [%]; x-axis: number of load processes, logarithmic; one curve per measurement (1, 2, 3).

As you can see, with a rising number of processes the results of the ULE scheduler in FreeBSD 6.3 become unstable.

We also wanted to benchmark not only interactive tasks but also CPU-demanding processes, such as the computation of the number π. In the following graph CFS is used as the reference scheduler, and the speed of every other scheduler is calculated as the time of CFS divided by the time of that scheduler.

Fig. 6. Relative speed of π computation compared to CFS (higher is better); y-axis: relative speed to CFS; x-axis: number of load processes, logarithmic; curves include CFS, O(1), 4BSD-6.3 and 4BSD-7.1.

In this test FreeBSD is approximately 10% slower than Linux. This can be caused by compiler optimization (Linux was compiled for i686 and FreeBSD for i386), by a different implementation of the task-switching routine, by the concept of forking processes, or by kernel options other than the scheduler. What is important is that the scheduler does not have a high impact on the computation of π. These processes do not use shared memory, pipes, mutexes or many IO operations, and they always use up the whole time slice assigned by the scheduler, which is 100 ms by default in both operating systems. If we run, for example, 10 processes, each running for 10 seconds, the computation should take 100 seconds in the ideal case; in reality it takes longer because of the overhead caused by the operating system, rescheduling, and the other processes running in the system.

When comparing FreeBSD with the ULE scheduler and with 4BSD, the results are almost the same. There are small differences between the versions (7.1 versus 6.3), but the curves look similar. ULE and 4BSD are based on run queues with complexity O(1), so for these non-interactive tasks they perform equally. In the case of Linux, however, we can see the confrontation of the CFS scheduler, with its O(log n) complexity, with the older O(1) scheduler. For smaller numbers of processes CFS performs better, but when there are many processes, O(1) takes advantage of its better complexity. In this case the intersection of O(1) and O(log n) was experimentally found at the point where about 500 processes are in the system.

5. HACKBENCH BENCHMARK

In the previous benchmark we observed the unstable results of the ULE scheduler in FreeBSD 6.3 and demonstrated that CFS performs better for interactive processes, and also has better results for non-interactive processes as long as there are not too many of them. To confirm the results of the PI-ping benchmark we used another test called Hackbench. This benchmark launches a selected number of processes that listen either on a socket or on a pipe, together with the same number of processes that send 100 messages to the listening processes.
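
Hackbench itself is more elaborate (it arranges the processes in groups and also supports pipes), but the core pattern it times can be sketched in C roughly as follows; the pair count is illustrative, with 100 messages per sender as described above:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <sys/wait.h>

    #define PAIRS    20    /* listener/sender pairs; Hackbench varies this */
    #define MESSAGES 100   /* messages per pair, as in Hackbench */

    int main(void)
    {
        struct timeval t0, t1;
        int i;

        gettimeofday(&t0, NULL);
        for (i = 0; i < PAIRS; i++) {
            int sv[2];
            char buf[64] = "x";

            if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) {
                perror("socketpair");
                exit(1);
            }
            if (fork() == 0) {               /* listener: receive messages */
                int n;
                close(sv[1]);
                for (n = 0; n < MESSAGES; n++)
                    read(sv[0], buf, sizeof(buf));
                _exit(0);
            }
            if (fork() == 0) {               /* sender: send messages */
                int n;
                close(sv[0]);
                for (n = 0; n < MESSAGES; n++)
                    write(sv[1], buf, sizeof(buf));
                _exit(0);
            }
            close(sv[0]);
            close(sv[1]);
        }
        while (wait(NULL) > 0)               /* reap all children */
            ;
        gettimeofday(&t1, NULL);
        printf("%.3f s\n", (t1.tv_sec - t0.tv_sec) +
                           (t1.tv_usec - t0.tv_usec) / 1e6);
        return 0;
    }

With many such pairs runnable at once, the scheduler is stressed by frequent wakeups of short-running interactive processes, which is exactly the workload where Fig. 7 separates the schedulers.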
The results of the benchmark are shown in the graph (Fig. 7), but this time we use the values from a single run rather than averages, in order to depict the differences in stability between ULE-6.3 and ULE-7.1. In this benchmark ULE-6.3 performs better for smaller numbers of processes, but then the instability predicted by PI-ping becomes visible. CFS, O(1) and ULE-7.1 have a linear characteristic.

Fig. 7. Results of the Hackbench benchmark (lower is better); y-axis: time [s]; x-axis: number of processes, from 4 to 196; curves include 4BSD, O(1)-2.6.22 and CFS-2.6.28.

The curves of CFS and O(1) lie very near to each other. To make the difference more visible, we show the results as the relative latency per process compared to CFS (Fig. 8). We can see that O(1) approaches CFS only very slowly. In PI-ping the π processes were CPU-demanding, whereas the Hackbench processes are purely interactive. That is the reason why in PI-ping O(1) performed better than CFS from about 500 processes onward: CFS is designed to favor small and fast processes.

Fig. 8. Relative latency per process compared with CFS (lower is better); y-axis: time [s]; x-axis: number of processes; curves include 4BSD and O(1)-2.6.22.

6. TESTING HARDWARE

A common laptop with 1 GB of RAM and a Celeron M processor at 1.6 GHz was used for testing and benchmarking. It is important that this processor is single-core: using a multi-core processor with symmetric multiprocessing enabled would affect the results in a significant way.

7. CONCLUSION

The results and graphs show that the new Linux scheduler CFS really competes in both employments: in the server field demanding high throughput, and in embedded systems demanding low latencies. The other schedulers compared in this paper are based on run queues; CFS organizes processes in a red-black tree. The computational complexity of the other schedulers is O(1), while CFS has complexity O(log n). We have shown that CFS performs better than O(1) for computational tasks when the number of processes is smaller than approximately 500, which we experimentally determined as the intersection of the O(1) and O(log n) functions in this case. For interactive tasks, however, CFS performs better in all tested situations.

The goal of this work was to show that the O(log n) complexity of the CFS scheduler is not a handicap for real applications, and that it can be recommended also for embedded systems demanding real-time performance. We have also shown the improvement of the ULE scheduler in the latest version of FreeBSD: ULE in FreeBSD 7.1 performs better than the long-serving 4BSD and does not suffer from the problems observed with this scheduler in FreeBSD 6.3.

8. ACKNOWLEDGEMENT

This work was supported by Grant No. 1/649/9 of the Slovak VEGA Grant Agency.

9. REFERENCES

[1] Benchmark Programs, http://elinux.org/benchmark_programs [27.1.2009]
[2] Lukas Jelinek, Jadro systemu Linux (The Linux Kernel), Computer Press, 2008
[3] Sanjoy Baruah and Joel Goossens, "Scheduling Real-Time Tasks: Algorithms and Complexity", in Handbook of Scheduling: Algorithms, Models and Performance Analysis, Chapman & Hall/CRC, 2004
[4] Clark Williams, Linux Scheduler Latency, http://www.linuxdevices.com/articles/at896594941.html [27.1.2009]
[5] Ingo Molnar, This is the CFS Scheduler, http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt [27.1.2009]
[6] Gilad Ben-Yossef, Soft, Hard and Ruby Hard Real Time with Linux, http://www.scribd.com/doc/3469938/soft-hard-and-ruby-hard-real-time-with-linux [27.1.2009]
[7] Kris Kennaway, Introducing FreeBSD 7.0, http://people.freebsd.org/~kris/scaling/7.0%20preview.pdf [27.1.2009]
[8] Jaroslav Abaffy, "Interrupt Latency in Linux 2.6", Informatics and Information Technologies Student Research Conference, Vydavatelstvo STU, 2008, pp. 387-393