Predicting the response time of a new task on a Beowulf cluster


Marta Beltrán and Jose L. Bosque
ESCET, Universidad Rey Juan Carlos, Madrid, Spain
mbeltran@escet.urjc.es, jbosque@escet.urjc.es

Abstract. This paper focuses on the problem of predicting the response time of incoming tasks on a cluster node. Such predictions have a significant effect in areas such as dynamic load balancing, scalability analysis and parallel system modelling. A response time prediction requires an estimate of the CPU time that will be available to the task during its execution. All the tasks in the run queue share the processor time in a balanced way, but the CPU time consumed by each task depends on whether it is CPU-bound or not. This paper presents two new response time prediction models. The first is a mixed model based on two widely used models, the CPU availability and Round Robin models. The second, called the RTP model, is a completely new model based on a detailed study of different kinds of tasks and their CPU time consumption. The predictive power of these models is evaluated by running a large set of tests, and the predictions obtained with the second proposed model exhibit an error of less than 2% in all these experiments.

1 Introduction

Beowulf clusters are becoming very popular due to their good price-performance ratio, scalability and flexibility, so the study of this kind of system is a research area of increasing interest ([1], [11], [8]). In Beowulf clusters the CPU is one of the most important resources ([2]). Because of the dynamic nature of these systems, the CPU load on each cluster node can vary drastically in a very short time. Predicting the amount of work on the different cluster nodes is a basic problem that arises in many contexts, such as cluster modelling, performance analysis and dynamic load balancing. If the response time of a new task on each of the cluster nodes is known, load estimation is straightforward.
But response times can only be measured for completed processes, and very often these times must be known before jobs begin their execution. Therefore, the applications mentioned above require a prediction, implicit or explicit, of the response time of a new task on the different cluster nodes. In this paper the CPU assignment (A) is proposed as a way to measure and compare the load of different cluster nodes, in other words, the response time of a new task on each of these nodes. The assignment is defined as the percentage of CPU time that would be available to a newly created task. CPU availability has been successfully used before, for example, to schedule programs in distributed systems ([6], [12]). This paper examines the problem of predicting the available CPU on a cluster node. Its contributions are an analytical and experimental study of two well-known response time prediction models, two new static models for CPU assignment computation, a verification of these models through their application in a complete set of experiments, and a comparison of all the obtained results. In contrast to the other approaches, the response time predictions with the second proposed model exhibit an error of less than 2%, so the experimental results indicate that this new model is accurate enough for all the mentioned contexts.

The rest of the paper is organized as follows. Section 2 discusses related work on predicting processor workload. Section 3 presents two existing CPU assignment models and proposes two new, improved models. Experimental results comparing the four discussed models are reported in Section 4. Finally, Section 5 presents conclusions and suggestions for future work.

2 Background

Research closely related to this paper falls into two categories: models that predict the future from past behaviour, and models based on the task queue.
As an example of the first kind of model, [13] focused on making short- and medium-term predictions of CPU availability on time-shared Unix systems. On the other hand, [10] presented a method based on neural networks for automatically learning to predict CPU load. Finally, [4] and [5] evaluated linear models for load prediction and implemented a system that could predict the running time of a compute-bound task on a host.

Queueing models have been widely used for processors due to their simplicity, so the second kind of model is more widespread. In a highly influential paper, Kunz ([9]) showed the influence of workload descriptors on load balancing performance and concluded that the best workload descriptor is the number of tasks in the run queue. In [17] the CPU queue length is also used as an indication of processor load, and this load index appears again, for example, in [3], [15] and [16]. Finally, in [7], the number of tasks in the run queue is presented as the basis of good load indices, with the improvement of averaging this length over a period of one to four seconds.

3 Response time prediction models

In this paper the CPU assignment (A) is defined as the percentage of CPU time that would be available to a new incoming task on a cluster node. This parameter is used to analyse prediction models because the response time of a task is directly related to the average CPU assignment it receives during its execution. If a process is able to obtain 50% of the CPU time slices, it is expected to take twice as long to execute as it would if the CPU were completely unloaded. So a response time prediction for a new task requires a prediction of the CPU assignment for this task during its execution.

There are two popular response time prediction models widely used, for example, in dynamic load balancing applications. These models decide how to map new tasks to cluster nodes by determining the least loaded node, thus predicting the node on which the response time of the new task will be shortest. They define a load index, such as the percentage of available CPU or the number of tasks in the run queue, and base their predictions on this index value.
But they do not take into account the influence of new tasks on system performance. The assignment concept tries to consider the effects of executing new tasks on CPU availability.

3.1 Analysis of previous models

The simplest approach is to consider the least loaded node to be the node with the most free or idle CPU. Analysing this model from the CPU assignment point of view, it takes the assignment to be the available CPU at a given instant:

    A = Available CPU    (1)

Thus, the predicted assignment for a new task is the percentage of CPU idle time. This model, called the CPU availability model in the rest of this paper, has one important drawback: it does not take processor time-sharing between tasks into account. For example, if a cluster node is executing one CPU-bound task (consuming almost all of the CPU time), this model predicts around a 5% CPU assignment for a new task, but in a time-shared system the new task would obtain around a 50% assignment, because the two tasks would share the CPU time.

In most computer systems, tasks share the processor time under a Round Robin scheduling policy ([14]). In this policy a time slice or quantum (q) is defined. The CPU scheduler picks a process from the run queue and dispatches it to the processor. If the process is still running at the end of its quantum, it is preempted and added to the tail of the queue; if it finishes or sleeps before the end of the quantum, it leaves the processor voluntarily. The other well-known response time prediction model is based on this scheduling and takes the node with the fewest tasks in the run queue as the least loaded node. The assignment is predicted as the percentage of CPU time that corresponds to a new task under this policy. If the number of tasks in the run queue is N, the assignment prediction with the Round Robin model is:

    A = 1 / (N + 1)    (2)

because the processor time will be shared in a balanced way between the N + 1 tasks.

This model is widely used, but it only considers CPU-bound tasks: tasks performing intensive CPU operations all the time, with no memory swapping or I/O operations (with disks or the network). Indeed, a node executing one CPU-bound task could give a new task less assignment than a node executing several non-CPU-bound tasks, but this model always predicts more assignment for the new task in the first case.
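As an illustration, the two baseline predictors can be sketched in a few lines of Python (a hedged sketch, not the authors' code; the function and parameter names are assumptions made for this example):

```python
def availability_model(idle_fraction):
    """CPU availability model (eq. 1): the predicted assignment for a
    new task is simply the currently idle fraction of the CPU."""
    return idle_fraction

def round_robin_model(n_tasks):
    """Round Robin model (eq. 2): the CPU is shared equally among the
    N tasks already in the run queue plus the new one."""
    return 1.0 / (n_tasks + 1)

# One CPU-bound task already running: the availability model predicts
# only the idle ~5%, while time-sharing would really give ~50%.
print(availability_model(0.05))  # 0.05
print(round_robin_model(1))      # 0.5
```

The example reproduces the drawback discussed above: with one CPU-bound task in the queue, the two models disagree by a factor of ten.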

3.2 Proposed models

To overcome these limitations and take all kinds of tasks into account without monitoring other resources such as memory or the network, a mixed model is proposed, combining the two previous prediction models. Let U denote the CPU utilization (the percentage of CPU time used for the execution of all the tasks in the run queue). The CPU assignment prediction for a new task with this model is:

    A = 1 / (N + 1)    if U >= N / (N + 1)
    A = 1 - U          otherwise                (3)

Therefore, if there are only CPU-bound tasks executing on a processor, the assignment is obtained by applying the Round Robin model. But when there are non-CPU-bound tasks, they do not take advantage of all their corresponding CPU time, and the CPU assignment for an incoming task will be all the available CPU, which is then of course greater than 1/(N+1). This model takes the best of the two models discussed in the previous subsection, so it should perform well with a run queue of all CPU-bound tasks (Round Robin model) and, at the other extreme, of all non-CPU-bound tasks (CPU availability model). But it is unclear how this model will perform when there are different types of tasks in the run queue.

Finally, an improvement on this model is proposed, based on a more detailed account of how CPU time is shared between different tasks. This model is called the RTP model (Response Time Prediction model). Under Round Robin scheduling, CPU-bound tasks always run until the end of their time slices, while non-CPU-bound tasks sometimes leave the processor without finishing their quanta. The remaining time of these slices is consumed by the CPU-bound tasks, which are always ready to execute CPU-intensive operations. The aim is to take this situation into account, so let t_CPU denote the CPU time consumed by a non-CPU-bound task when the CPU is completely unloaded, and let t denote the response time of this task in the same environment.
The fraction of time this task spends in CPU-intensive operations is:

    X = t_CPU / t

Suppose that there are n CPU-bound tasks in the run queue and m non-CPU-bound tasks, so that N = n + m. The proposed model predicts the following assignment for the i-th non-CPU-bound task when there is a new incoming task:

    A(non-CPU-b)_i = (X_i q) / ((n + 1) q + Σ_k X_k q)    (4)

where the sum runs over the m non-CPU-bound tasks. The new task is supposed to be CPU-bound because that is the worst case, in which the new task consumes all its CPU slices; so, with the new incoming task, there are n + 1 CPU-bound tasks in the run queue. Using the predicted assignments of all the non-CPU-bound tasks, the assignment for a new task can be computed as all the CPU time that is not consumed by non-CPU-bound tasks, shared under the Round Robin policy between the CPU-bound tasks:

    A = (1 - Σ_i A(non-CPU-b)_i) / (n + 1)    (5)

4 Experimental results

To determine the accuracy of these four models, a set of experiments has been developed to compare measured and predicted response times. The criterion used to evaluate the assignment models is the relative error of their predictions. All the experiments take place on a 550 MHz Pentium III PC with 128 MB of RAM. The operating system installed on this PC is Debian Linux, which uses a Round Robin scheduling policy with a 100-millisecond time slice (q = 100 ms).
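The two proposed models can be rendered as a short sketch (an illustrative Python reading of equations 3 to 5, not the authors' implementation; the function names are invented, and the default quantum follows the 100 ms value above):

```python
def mixed_model(n_tasks, utilization):
    """Mixed model (eq. 3): Round Robin prediction when the run queue
    behaves as fully CPU-bound, available CPU otherwise."""
    if utilization >= n_tasks / (n_tasks + 1):
        return 1.0 / (n_tasks + 1)
    return 1.0 - utilization

def rtp_model(n_cpu_bound, x_values, quantum=0.1):
    """RTP model (eqs. 4-5). x_values holds the fraction X for each
    non-CPU-bound task; the incoming task is assumed CPU-bound (worst
    case), so each scheduling round has n+1 full quanta plus the
    partial quanta X_k * q of the non-CPU-bound tasks."""
    round_length = (n_cpu_bound + 1) * quantum + sum(x * quantum for x in x_values)
    # Eq. 4: assignment of each non-CPU-bound task within one round.
    partial = [x * quantum / round_length for x in x_values]
    # Eq. 5: the remaining CPU, shared among the n+1 CPU-bound tasks.
    return (1.0 - sum(partial)) / (n_cpu_bound + 1)
```

Note that with no non-CPU-bound tasks in the queue, rtp_model(n, []) reduces to the Round Robin prediction 1/(n+2), as the discussion above requires.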

Table 1. Test loads (columns: Load, X, t_CPU (s), t (s); one row for each load test0 to test7).

4.1 Test loads

Different synthetic workloads have been generated, due to the lack of appropriate trace data and for their simplicity. The CPU-bound test load (test0) is a very simple program:

    loop
        consume CPU
    end loop

The CPU-intensive operation used to consume processor time is a vector product. On the other hand, the non-CPU-bound test loads (testi, with i = 1, 2, ..., 7) are:

    loop
        consume u milliseconds of CPU
        sleep s milliseconds
    end loop

Different loads have been generated (Table 1) by controlling the percentage of consumed CPU with the u and s parameters (because X = u / (u + s)). Besides, to avoid any influence of the memory hierarchy on the experiments, all test loads use data stored in the L1 cache.

4.2 Experiments

The first set of experiments to evaluate the validity and accuracy of the models is performed statically: different sets of test loads are executed simultaneously on the system, beginning and ending at the same time. As can be seen in the first column of Tables 2 and 3, these sets of loads combine different kinds of tasks, CPU-bound and non-CPU-bound, with different CPU utilization percentages. In each experiment, CPU and response times are measured for all test loads. To determine the most accurate model, assignment predictions are made for the task test0 in each experiment. This task is selected because it is CPU-bound, as a new incoming task is supposed to be (the worst case). If A_P is the predicted assignment for this task, the predicted response time is:

    t_P = t / A_P

where t is the response time of test0 when it is executed on the unloaded CPU. Therefore the accuracy of a model can be determined by the relative error of this prediction:

    e = 100 |t_m - t_P| / t_m    (6)

where t_m is the response time measured when test0 is executed simultaneously with the other tasks in a given experiment.
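A non-CPU-bound load of the testi form can be sketched as follows (a hypothetical generator written for this discussion; the busy loop stands in for the vector product used in the paper, and the helper names are assumptions):

```python
import time

def synthetic_load(u_ms, s_ms, iterations):
    """Alternate u_ms of busy CPU work with s_ms of sleep, so the
    fraction of time in CPU-intensive operations is X = u/(u+s)."""
    for _ in range(iterations):
        end = time.perf_counter() + u_ms / 1000.0
        while time.perf_counter() < end:
            pass  # busy loop standing in for the vector product
        time.sleep(s_ms / 1000.0)

def x_fraction(u_ms, s_ms):
    """Nominal X value of such a load."""
    return u_ms / (u_ms + s_ms)

def relative_error(t_measured, t_predicted):
    """Eq. 6: percentage relative error of a response time prediction."""
    return 100.0 * abs(t_measured - t_predicted) / t_measured
```

For example, u = 50 and s = 50 gives a load with nominal X = 50%, and the same relative_error helper can score any of the four models' predictions against a measured run.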
The results obtained in all these experiments are detailed in Tables 2 and 3 and in Figures 1, 2 and 3. The tables report the response time measured for test0 (t_m) and the CPU times measured for the tasks in each experiment (denoted t_CPU1, t_CPU2, t_CPU3 and t_CPU4), together with the predicted response time for test0 (t_P) and the percentage of relative prediction error (e). All time values are measured in seconds.

Table 2. Results with the four discussed models; sets of experiments reported in the figures (columns: Exp., t_m, t_CPU1 to t_CPU4, and t_P and e(%) for each of models A to D; rows for combinations of test0 with the loads test1 to test7).

Table 3. Results with the four discussed models; remaining experiments (same columns as Table 2).

There are four predicted times and prediction errors because the four presented models are evaluated: the CPU availability model (model A), the Round Robin model (model B), the mixed model (model C) and the RTP model (model D). The assignment predictions have been computed using equations 1, 2, 3 and 5 respectively.

The results in Table 2 are plotted in Figures 1, 2 and 3. Figure 1 corresponds to a set of experiments with one CPU-bound task (test0) and one variable non-CPU-bound task. This task increases its X value from test7 (X = 5%) to test1 (X = 66%); some of these results are omitted from the tables for space reasons. The prediction error for the CPU-bound task's response time is plotted against the fraction of CPU time of the non-CPU-bound task (defined as X in the previous section), with one curve for each of the four discussed models. Figures 2 and 3 present the results for the same kind of experiments but with two and three CPU-bound tasks respectively.
The remaining experiments, with other combinations of tasks, are shown in Table 3. From both the tables and the figures it is clear that large reductions in prediction error are obtained in all the experiments using the RTP model; indeed, the prediction error with this model is always less than 2%. There are no instances in which one of the other models performs better than the RTP model. Figures 1, 2 and 3 show how the prediction error of the CPU availability model varies with the X value of the non-CPU-bound task. For low X values the prediction error is low too, but as X increases, the prediction error rises sharply. As noted before, this can be attributed to the model ignoring the possibility of time-sharing between tasks: a task with X around 50% (nearly CPU-bound) would share the processor time in a balanced way with the tasks in the run queue, in contrast with this model's prediction of a very low CPU assignment for it.

Fig. 1. Prediction error e(%) against the X of the non-CPU-bound task, for the CPU availability, Round Robin, mixed and RTP models, with one CPU-bound task and one other varying task.

Fig. 2. Prediction error for the discussed models with two CPU-bound tasks and one other varying task.

Fig. 3. Prediction error for the discussed models with three CPU-bound tasks and one other varying task.

In contrast to this approach, the Round Robin model performs very well for large X values (nearly CPU-bound tasks), but the prediction error increases dramatically as X decreases. This was expected, because this model does not take into consideration the remaining time of the CPU time slices left by non-CPU-bound tasks. So, whenever there are such tasks in the run queue, the assignment prediction is always less than the real value.

Finally, for the mixed model, the value of e falls for both low and large values of X. The previous results give some insight into why the error varies in this way: this model is designed to take the best of the CPU availability and Round Robin models, so the mixed model curve converges with the CPU availability curve at low values of X and with the Round Robin curve at large values. Notice that the error increases for medium values of X, which is the disadvantage of this model, although it represents a considerable improvement over the two previous models because the prediction error does not increase indefinitely. Still, even this last model is not superior to the RTP model. Besides the low prediction error values obtained with the RTP model in all the experiments, the figures and tables show that this error is almost independent of the kind of tasks considered.

5 Conclusions

The selection of a response time prediction model is nontrivial when minimal prediction errors are required. The main contribution of this paper is a detailed analysis of two existing and two new prediction models for all kinds of tasks in a real computer system. The CPU assignment concept has been introduced to predict response times. With previous prediction models, the greatest source of error comes from considering only the current system load. But due to Round Robin scheduling policies, the execution of a new incoming task has a significant influence on the response times of all the tasks in the system run queue. Thus, the CPU assignment is introduced to consider both the current system load and the effects of executing a new task on the CPU availability.
A wide variety of experiments has been performed to analyse, in terms of CPU assignment prediction, the accuracy of the CPU availability and Round Robin models. The results presented in the previous section reveal that these models perform very well in certain contexts but fail in others. The CPU availability model obtains errors between 0 and 10% in experiments with CPU-bound tasks and one non-CPU-bound task with low X, but the errors increase dramatically, going beyond 1000% as X increases. The Round Robin model results are completely different: the prediction error is near 0% when all the tasks in the run queue are CPU-bound, but it increases when one or more tasks are non-CPU-bound, in which case the error can exceed 100%. These results suggest that it is reasonable to combine these two models to improve their predictions. So the first proposed model (the mixed model) is a simple combination of the two discussed models, and the experimental results indicate an important improvement over them: for high and low X values the error is as low as with the CPU availability and Round Robin models, and in the remaining experiments the prediction error does not exceed 54%.

Finally, an optimized and relatively simple model is proposed. The RTP model is based on a study of CPU time-sharing and the scheduling policies used by the operating system. This model takes into consideration the influence of a new task's execution on the set of tasks in the run queue. Experimental results demonstrate the validity and accuracy of this model: the prediction error is always less than 2%. Thus the RTP model has been shown to be effective, simple and very accurate under static conditions. In the context of Beowulf clusters these results are encouraging. A very interesting line of future research is to extend the RTP model to dynamic environments.
This may require some changes in the model to avoid using a priori information about tasks, such as the percentage of CPU utilization (X). But it would provide a dynamic model, very useful for predicting the response time of new incoming tasks on cluster nodes.

References

1. Gordon Bell and Jim Gray. What's next in high-performance computing? Communications of the ACM, 45(2):91-95.
2. Rajkumar Buyya. High Performance Cluster Computing, Volume 1: Architecture and Systems. Prentice-Hall PTR.
3. K. Benmohammed-Mahieddine, P. M. Dew and M. Kara. A periodic symmetrically-initiated load balancing algorithm for distributed systems. In Proceedings of the 14th International Conference on Distributed Computing Systems.
4. Peter A. Dinda. Online prediction of the running time of tasks. In Proceedings of the 10th IEEE International Symposium on High Performance Distributed Computing, 2001.
5. Peter A. Dinda. A prediction-based real-time scheduling advisor. In 16th International Parallel and Distributed Processing Symposium. IEEE.
6. Francine D. Berman et al. Application-level scheduling on distributed heterogeneous networks. In Proceedings of Supercomputing 1996.
7. D. Ferrari and S. Zhou. An empirical investigation of load indices for load balancing applications. In 12th IFIP International Symposium on Computer Performance Modelling, Measurement and Evaluation. Elsevier Science Publishers.
8. John L. Hennessy and David A. Patterson. Computer Architecture: A Quantitative Approach. Morgan Kaufmann Publishers.
9. Thomas Kunz. The influence of different workload descriptions on a heuristic load balancing scheme. IEEE Transactions on Software Engineering, 17(7).
10. Pankaj Mehra and Benjamin W. Wah. Automated learning of workload measures for load balancing on a distributed system. In Proceedings of the 1993 International Conference on Parallel Processing, Volume 3: Algorithms and Applications.
11. Gregory F. Pfister. In Search of Clusters: The Ongoing Battle in Lowly Parallel Computing, 2nd ed. Prentice Hall.
12. Neil T. Spring and Richard Wolski. Application level scheduling of gene sequence comparison on metacomputers. In International Conference on Supercomputing.
13. R. Wolski, N. Spring and J. Hayes. Predicting the CPU availability of time-shared Unix systems on the computational grid. In Proceedings of the Eighth International Symposium on High Performance Distributed Computing. IEEE.
14. A. S. Tanenbaum. Distributed Operating Systems. Prentice-Hall, Inc.
15. Gil-Haeng Lee, Wang-Don Woo and Byeong-Nam Yoon. An adaptive load balancing algorithm using simple prediction mechanism. In Proceedings of the Ninth International Workshop on Database and Expert Systems Applications.
16. Kai Shen, Tao Yang and Lingkun Chu. Cluster load balancing for fine-grain network services. In International Parallel and Distributed Processing Symposium, pages 51-58.
17. S. Zhou. A trace-driven simulation study of dynamic load balancing. IEEE Transactions on Software Engineering, 1988.


More information

Process Scheduling. Copyright : University of Illinois CS 241 Staff

Process Scheduling. Copyright : University of Illinois CS 241 Staff Process Scheduling Copyright : University of Illinois CS 241 Staff 1 Process Scheduling Deciding which process/thread should occupy the resource (CPU, disk, etc) CPU I want to play Whose turn is it? Process

More information

CPU Scheduling. CSE 2431: Introduction to Operating Systems Reading: Chapter 6, [OSC] (except Sections )

CPU Scheduling. CSE 2431: Introduction to Operating Systems Reading: Chapter 6, [OSC] (except Sections ) CPU Scheduling CSE 2431: Introduction to Operating Systems Reading: Chapter 6, [OSC] (except Sections 6.7.2 6.8) 1 Contents Why Scheduling? Basic Concepts of Scheduling Scheduling Criteria A Basic Scheduling

More information

Profile-Based Load Balancing for Heterogeneous Clusters *

Profile-Based Load Balancing for Heterogeneous Clusters * Profile-Based Load Balancing for Heterogeneous Clusters * M. Banikazemi, S. Prabhu, J. Sampathkumar, D. K. Panda, T. W. Page and P. Sadayappan Dept. of Computer and Information Science The Ohio State University

More information

Chapter 9. Uniprocessor Scheduling

Chapter 9. Uniprocessor Scheduling Operating System Chapter 9. Uniprocessor Scheduling Lynn Choi School of Electrical Engineering Scheduling Processor Scheduling Assign system resource (CPU time, IO device, etc.) to processes/threads to

More information

Operating Systems Unit 3

Operating Systems Unit 3 Unit 3 CPU Scheduling Algorithms Structure 3.1 Introduction Objectives 3.2 Basic Concepts of Scheduling. CPU-I/O Burst Cycle. CPU Scheduler. Preemptive/non preemptive scheduling. Dispatcher Scheduling

More information

Introduction to Operating Systems Prof. Chester Rebeiro Department of Computer Science and Engineering Indian Institute of Technology, Madras

Introduction to Operating Systems Prof. Chester Rebeiro Department of Computer Science and Engineering Indian Institute of Technology, Madras Introduction to Operating Systems Prof. Chester Rebeiro Department of Computer Science and Engineering Indian Institute of Technology, Madras Week 05 Lecture 18 CPU Scheduling Hello. In this lecture, we

More information

A COMPARATIVE STUDY OF CPU SCHEDULING POLICIES IN OPERATING SYSTEMS

A COMPARATIVE STUDY OF CPU SCHEDULING POLICIES IN OPERATING SYSTEMS VSRD International Journal of Computer Science &Information Technology, Vol. IV Issue VII July 2014 / 119 e-issn : 2231-2471, p-issn : 2319-2224 VSRD International Journals : www.vsrdjournals.com REVIEW

More information

Announcements. Program #1. Program #0. Reading. Is due at 9:00 AM on Thursday. Re-grade requests are due by Monday at 11:59:59 PM.

Announcements. Program #1. Program #0. Reading. Is due at 9:00 AM on Thursday. Re-grade requests are due by Monday at 11:59:59 PM. Program #1 Announcements Is due at 9:00 AM on Thursday Program #0 Re-grade requests are due by Monday at 11:59:59 PM Reading Chapter 6 1 CPU Scheduling Manage CPU to achieve several objectives: maximize

More information

International Journal of Advanced Research in Computer Science and Software Engineering

International Journal of Advanced Research in Computer Science and Software Engineering Volume 2, Issue 11, November 2012 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Process Scheduling

More information

Uniprocessor Scheduling. Chapter 9

Uniprocessor Scheduling. Chapter 9 Uniprocessor Scheduling Chapter 9 1 Aim of Scheduling Assign processes to be executed by the processor(s) Response time Throughput Processor efficiency 2 3 4 Long-Term Scheduling Determines which programs

More information

Multitasking and scheduling

Multitasking and scheduling Multitasking and scheduling Guillaume Salagnac Insa-Lyon IST Semester Fall 2017 2/39 Previously on IST-OPS: kernel vs userland pplication 1 pplication 2 VM1 VM2 OS Kernel rchitecture Hardware Each program

More information

CPU Scheduling. The scheduling problem: When do we make decision? - Have K jobs ready to run - Have N 1 CPUs - Which jobs to assign to which CPU(s)

CPU Scheduling. The scheduling problem: When do we make decision? - Have K jobs ready to run - Have N 1 CPUs - Which jobs to assign to which CPU(s) 1/32 CPU Scheduling The scheduling problem: - Have K jobs ready to run - Have N 1 CPUs - Which jobs to assign to which CPU(s) When do we make decision? 2/32 CPU Scheduling Scheduling decisions may take

More information

CSE120 Principles of Operating Systems. Prof Yuanyuan (YY) Zhou Scheduling

CSE120 Principles of Operating Systems. Prof Yuanyuan (YY) Zhou Scheduling CSE120 Principles of Operating Systems Prof Yuanyuan (YY) Zhou Scheduling Announcement l Homework 2 due on October 26th l Project 1 due on October 27th 2 Scheduling Overview l In discussing process management

More information

Start of Lecture: February 10, Chapter 6: Scheduling

Start of Lecture: February 10, Chapter 6: Scheduling Start of Lecture: February 10, 2014 1 Reminders Exercise 2 due this Wednesday before class Any questions or comments? 2 Scheduling so far First-Come-First Serve FIFO scheduling in queue without preempting

More information

CPU Scheduling. Rab Nawaz Jadoon. Assistant Professor DCS. Pakistan. COMSATS, Lahore. Department of Computer Science

CPU Scheduling. Rab Nawaz Jadoon. Assistant Professor DCS. Pakistan. COMSATS, Lahore. Department of Computer Science CPU Scheduling Rab Nawaz Jadoon DCS COMSATS Institute of Information Technology Assistant Professor COMSATS, Lahore Pakistan Operating System Concepts Objectives To introduce CPU scheduling, which is the

More information

Performance Extrapolation for Load Testing Results of Mixture of Applications

Performance Extrapolation for Load Testing Results of Mixture of Applications Performance Extrapolation for Load Testing Results of Mixture of Applications Subhasri Duttagupta, Manoj Nambiar Tata Innovation Labs, Performance Engineering Research Center Tata Consulting Services Mumbai,

More information

An Evaluation of Alternative Designs for a Grid Information Service

An Evaluation of Alternative Designs for a Grid Information Service An Evaluation of Alternative Designs for a Grid Information Service Warren Smith, Abdul Waheed *, David Meyers, Jerry Yan Computer Sciences Corporation * MRJ Technology Solutions Directory Research L.L.C.

More information

Chap 7, 8: Scheduling. Dongkun Shin, SKKU

Chap 7, 8: Scheduling. Dongkun Shin, SKKU Chap 7, 8: Scheduling 1 Introduction Multiprogramming Multiple processes in the system with one or more processors Increases processor utilization by organizing processes so that the processor always has

More information

SF-LRU Cache Replacement Algorithm

SF-LRU Cache Replacement Algorithm SF-LRU Cache Replacement Algorithm Jaafar Alghazo, Adil Akaaboune, Nazeih Botros Southern Illinois University at Carbondale Department of Electrical and Computer Engineering Carbondale, IL 6291 alghazo@siu.edu,

More information

LECTURE 3:CPU SCHEDULING

LECTURE 3:CPU SCHEDULING LECTURE 3:CPU SCHEDULING 1 Outline Basic Concepts Scheduling Criteria Scheduling Algorithms Multiple-Processor Scheduling Real-Time CPU Scheduling Operating Systems Examples Algorithm Evaluation 2 Objectives

More information

Scheduling. Multiple levels of scheduling decisions. Classes of Schedulers. Scheduling Goals II: Fairness. Scheduling Goals I: Performance

Scheduling. Multiple levels of scheduling decisions. Classes of Schedulers. Scheduling Goals II: Fairness. Scheduling Goals I: Performance Scheduling CSE 451: Operating Systems Spring 2012 Module 10 Scheduling Ed Lazowska lazowska@cs.washington.edu Allen Center 570 In discussing processes and threads, we talked about context switching an

More information

Operating Systems. Figure: Process States. 1 P a g e

Operating Systems. Figure: Process States. 1 P a g e 1. THE PROCESS CONCEPT A. The Process: A process is a program in execution. A process is more than the program code, which is sometimes known as the text section. It also includes the current activity,

More information

Analytical Modeling of Parallel Programs

Analytical Modeling of Parallel Programs 2014 IJEDR Volume 2, Issue 1 ISSN: 2321-9939 Analytical Modeling of Parallel Programs Hardik K. Molia Master of Computer Engineering, Department of Computer Engineering Atmiya Institute of Technology &

More information

W4118: advanced scheduling

W4118: advanced scheduling W4118: advanced scheduling Instructor: Junfeng Yang References: Modern Operating Systems (3 rd edition), Operating Systems Concepts (8 th edition), previous W4118, and OS at MIT, Stanford, and UWisc Outline

More information

The Impact of Write Back on Cache Performance

The Impact of Write Back on Cache Performance The Impact of Write Back on Cache Performance Daniel Kroening and Silvia M. Mueller Computer Science Department Universitaet des Saarlandes, 66123 Saarbruecken, Germany email: kroening@handshake.de, smueller@cs.uni-sb.de,

More information

Practice Exercises 305

Practice Exercises 305 Practice Exercises 305 The FCFS algorithm is nonpreemptive; the RR algorithm is preemptive. The SJF and priority algorithms may be either preemptive or nonpreemptive. Multilevel queue algorithms allow

More information

CS 326: Operating Systems. CPU Scheduling. Lecture 6

CS 326: Operating Systems. CPU Scheduling. Lecture 6 CS 326: Operating Systems CPU Scheduling Lecture 6 Today s Schedule Agenda? Context Switches and Interrupts Basic Scheduling Algorithms Scheduling with I/O Symmetric multiprocessing 2/7/18 CS 326: Operating

More information

Review. Preview. Three Level Scheduler. Scheduler. Process behavior. Effective CPU Scheduler is essential. Process Scheduling

Review. Preview. Three Level Scheduler. Scheduler. Process behavior. Effective CPU Scheduler is essential. Process Scheduling Review Preview Mutual Exclusion Solutions with Busy Waiting Test and Set Lock Priority Inversion problem with busy waiting Mutual Exclusion with Sleep and Wakeup The Producer-Consumer Problem Race Condition

More information

Scheduling Mar. 19, 2018

Scheduling Mar. 19, 2018 15-410...Everything old is new again... Scheduling Mar. 19, 2018 Dave Eckhardt Brian Railing Roger Dannenberg 1 Outline Chapter 5 (or Chapter 7): Scheduling Scheduling-people/textbook terminology note

More information

Properties of Processes

Properties of Processes CPU Scheduling Properties of Processes CPU I/O Burst Cycle Process execution consists of a cycle of CPU execution and I/O wait. CPU burst distribution: CPU Scheduler Selects from among the processes that

More information

High level scheduling: Medium level scheduling: Low level scheduling. Scheduling 0 : Levels

High level scheduling: Medium level scheduling: Low level scheduling. Scheduling 0 : Levels Scheduling 0 : Levels High level scheduling: Deciding whether another process can run is process table full? user process limit reached? load to swap space or memory? Medium level scheduling: Balancing

More information

Performance Modeling and Evaluation of Web Systems with Proxy Caching

Performance Modeling and Evaluation of Web Systems with Proxy Caching Performance Modeling and Evaluation of Web Systems with Proxy Caching Yasuyuki FUJITA, Masayuki MURATA and Hideo MIYAHARA a a Department of Infomatics and Mathematical Science Graduate School of Engineering

More information

An Enhanced Binning Algorithm for Distributed Web Clusters

An Enhanced Binning Algorithm for Distributed Web Clusters 1 An Enhanced Binning Algorithm for Distributed Web Clusters Hann-Jang Ho Granddon D. Yen Jack Lee Department of Information Management, WuFeng Institute of Technology SingLing Lee Feng-Wei Lien Department

More information

2. The shared resource(s) in the dining philosophers problem is(are) a. forks. b. food. c. seats at a circular table.

2. The shared resource(s) in the dining philosophers problem is(are) a. forks. b. food. c. seats at a circular table. CSCI 4500 / 8506 Sample Questions for Quiz 3 Covers Modules 5 and 6 1. In the dining philosophers problem, the philosophers spend their lives alternating between thinking and a. working. b. eating. c.

More information

Real-Time Programming with GNAT: Specialised Kernels versus POSIX Threads

Real-Time Programming with GNAT: Specialised Kernels versus POSIX Threads Real-Time Programming with GNAT: Specialised Kernels versus POSIX Threads Juan A. de la Puente 1, José F. Ruiz 1, and Jesús M. González-Barahona 2, 1 Universidad Politécnica de Madrid 2 Universidad Carlos

More information

Efficient CPU Scheduling Algorithm Using Fuzzy Logic

Efficient CPU Scheduling Algorithm Using Fuzzy Logic 2012 International Conference on Computer Technology and Science (ICCTS 2012) IPCSIT vol. 47 (2012) (2012) IACSIT Press, Singapore DOI: 10.7763/IPCSIT.2012.V47.3 Efficient CPU Scheduling Algorithm Using

More information

Metaheuristic Development Methodology. Fall 2009 Instructor: Dr. Masoud Yaghini

Metaheuristic Development Methodology. Fall 2009 Instructor: Dr. Masoud Yaghini Metaheuristic Development Methodology Fall 2009 Instructor: Dr. Masoud Yaghini Phases and Steps Phases and Steps Phase 1: Understanding Problem Step 1: State the Problem Step 2: Review of Existing Solution

More information

Operating System Review Part

Operating System Review Part Operating System Review Part CMSC 602 Operating Systems Ju Wang, 2003 Fall Virginia Commonwealth University Review Outline Definition Memory Management Objective Paging Scheme Virtual Memory System and

More information

An Efficient Web Cache Replacement Policy

An Efficient Web Cache Replacement Policy In the Proc. of the 9th Intl. Symp. on High Performance Computing (HiPC-3), Hyderabad, India, Dec. 23. An Efficient Web Cache Replacement Policy A. Radhika Sarma and R. Govindarajan Supercomputer Education

More information

OVERHEADS ENHANCEMENT IN MUTIPLE PROCESSING SYSTEMS BY ANURAG REDDY GANKAT KARTHIK REDDY AKKATI

OVERHEADS ENHANCEMENT IN MUTIPLE PROCESSING SYSTEMS BY ANURAG REDDY GANKAT KARTHIK REDDY AKKATI CMPE 655- MULTIPLE PROCESSOR SYSTEMS OVERHEADS ENHANCEMENT IN MUTIPLE PROCESSING SYSTEMS BY ANURAG REDDY GANKAT KARTHIK REDDY AKKATI What is MULTI PROCESSING?? Multiprocessing is the coordinated processing

More information

Uniprocessor Scheduling. Basic Concepts Scheduling Criteria Scheduling Algorithms. Three level scheduling

Uniprocessor Scheduling. Basic Concepts Scheduling Criteria Scheduling Algorithms. Three level scheduling Uniprocessor Scheduling Basic Concepts Scheduling Criteria Scheduling Algorithms Three level scheduling 2 1 Types of Scheduling 3 Long- and Medium-Term Schedulers Long-term scheduler Determines which programs

More information

Advanced Topics UNIT 2 PERFORMANCE EVALUATIONS

Advanced Topics UNIT 2 PERFORMANCE EVALUATIONS Advanced Topics UNIT 2 PERFORMANCE EVALUATIONS Structure Page Nos. 2.0 Introduction 4 2. Objectives 5 2.2 Metrics for Performance Evaluation 5 2.2. Running Time 2.2.2 Speed Up 2.2.3 Efficiency 2.3 Factors

More information

Today s class. Scheduling. Informationsteknologi. Tuesday, October 9, 2007 Computer Systems/Operating Systems - Class 14 1

Today s class. Scheduling. Informationsteknologi. Tuesday, October 9, 2007 Computer Systems/Operating Systems - Class 14 1 Today s class Scheduling Tuesday, October 9, 2007 Computer Systems/Operating Systems - Class 14 1 Aim of Scheduling Assign processes to be executed by the processor(s) Need to meet system objectives regarding:

More information

CPU Scheduling: Objectives

CPU Scheduling: Objectives CPU Scheduling: Objectives CPU scheduling, the basis for multiprogrammed operating systems CPU-scheduling algorithms Evaluation criteria for selecting a CPU-scheduling algorithm for a particular system

More information

Subject Name:Operating system. Subject Code:10EC35. Prepared By:Remya Ramesan and Kala H.S. Department:ECE. Date:

Subject Name:Operating system. Subject Code:10EC35. Prepared By:Remya Ramesan and Kala H.S. Department:ECE. Date: Subject Name:Operating system Subject Code:10EC35 Prepared By:Remya Ramesan and Kala H.S. Department:ECE Date:24-02-2015 UNIT 1 INTRODUCTION AND OVERVIEW OF OPERATING SYSTEM Operating system, Goals of

More information

Last Class: Processes

Last Class: Processes Last Class: Processes A process is the unit of execution. Processes are represented as Process Control Blocks in the OS PCBs contain process state, scheduling and memory management information, etc A process

More information

Lecture Topics. Announcements. Today: Advanced Scheduling (Stallings, chapter ) Next: Deadlock (Stallings, chapter

Lecture Topics. Announcements. Today: Advanced Scheduling (Stallings, chapter ) Next: Deadlock (Stallings, chapter Lecture Topics Today: Advanced Scheduling (Stallings, chapter 10.1-10.4) Next: Deadlock (Stallings, chapter 6.1-6.6) 1 Announcements Exam #2 returned today Self-Study Exercise #10 Project #8 (due 11/16)

More information

Object Placement in Shared Nothing Architecture Zhen He, Jeffrey Xu Yu and Stephen Blackburn Λ

Object Placement in Shared Nothing Architecture Zhen He, Jeffrey Xu Yu and Stephen Blackburn Λ 45 Object Placement in Shared Nothing Architecture Zhen He, Jeffrey Xu Yu and Stephen Blackburn Λ Department of Computer Science The Australian National University Canberra, ACT 2611 Email: fzhen.he, Jeffrey.X.Yu,

More information

CS3733: Operating Systems

CS3733: Operating Systems CS3733: Operating Systems Topics: Process (CPU) Scheduling (SGG 5.1-5.3, 6.7 and web notes) Instructor: Dr. Dakai Zhu 1 Updates and Q&A Homework-02: late submission allowed until Friday!! Submit on Blackboard

More information

CPU Scheduling. Daniel Mosse. (Most slides are from Sherif Khattab and Silberschatz, Galvin and Gagne 2013)

CPU Scheduling. Daniel Mosse. (Most slides are from Sherif Khattab and Silberschatz, Galvin and Gagne 2013) CPU Scheduling Daniel Mosse (Most slides are from Sherif Khattab and Silberschatz, Galvin and Gagne 2013) Basic Concepts Maximum CPU utilization obtained with multiprogramming CPU I/O Burst Cycle Process

More information

ayaz ali Micro & Macro Scheduling Techniques Ayaz Ali Department of Computer Science University of Houston Houston, TX

ayaz ali Micro & Macro Scheduling Techniques Ayaz Ali Department of Computer Science University of Houston Houston, TX ayaz ali Micro & Macro Scheduling Techniques Ayaz Ali Department of Computer Science University of Houston Houston, TX 77004 ayaz@cs.uh.edu 1. INTRODUCTION Scheduling techniques has historically been one

More information

Application of Parallel Processing to Rendering in a Virtual Reality System

Application of Parallel Processing to Rendering in a Virtual Reality System Application of Parallel Processing to Rendering in a Virtual Reality System Shaun Bangay Peter Clayton David Sewry Department of Computer Science Rhodes University Grahamstown, 6140 South Africa Internet:

More information

Improving Data Cache Performance via Address Correlation: An Upper Bound Study

Improving Data Cache Performance via Address Correlation: An Upper Bound Study Improving Data Cache Performance via Address Correlation: An Upper Bound Study Peng-fei Chuang 1, Resit Sendag 2, and David J. Lilja 1 1 Department of Electrical and Computer Engineering Minnesota Supercomputing

More information

Subject Name: OPERATING SYSTEMS. Subject Code: 10EC65. Prepared By: Kala H S and Remya R. Department: ECE. Date:

Subject Name: OPERATING SYSTEMS. Subject Code: 10EC65. Prepared By: Kala H S and Remya R. Department: ECE. Date: Subject Name: OPERATING SYSTEMS Subject Code: 10EC65 Prepared By: Kala H S and Remya R Department: ECE Date: Unit 7 SCHEDULING TOPICS TO BE COVERED Preliminaries Non-preemptive scheduling policies Preemptive

More information

Modeling and Synthesizing Task Placement Constraints in Google Compute Clusters

Modeling and Synthesizing Task Placement Constraints in Google Compute Clusters Modeling and Synthesizing Task Placement s in Google s Bikash Sharma Pennsylvania State University University Park 1 bikash@cse.psu.edu Rasekh Rifaat Google Inc. Seattle 93 rasekh@google.com Victor Chudnovsky

More information

Dynamic Load balancing for I/O- and Memory- Intensive workload in Clusters using a Feedback Control Mechanism

Dynamic Load balancing for I/O- and Memory- Intensive workload in Clusters using a Feedback Control Mechanism Dynamic Load balancing for I/O- and Memory- Intensive workload in Clusters using a Feedback Control Mechanism Xiao Qin, Hong Jiang, Yifeng Zhu, David R. Swanson Department of Computer Science and Engineering

More information

arxiv: v1 [cs.dc] 2 Apr 2016

arxiv: v1 [cs.dc] 2 Apr 2016 Scalability Model Based on the Concept of Granularity Jan Kwiatkowski 1 and Lukasz P. Olech 2 arxiv:164.554v1 [cs.dc] 2 Apr 216 1 Department of Informatics, Faculty of Computer Science and Management,

More information

Enhancing the Performance of Feedback Scheduling

Enhancing the Performance of Feedback Scheduling Enhancing the Performance of Feedback Scheduling Ayan Bhunia Student, M. Tech. CSED, MNNIT Allahabad- 211004 (India) ABSTRACT Feedback scheduling is a kind of process scheduling mechanism where process

More information

Chapter 5: CPU Scheduling

Chapter 5: CPU Scheduling COP 4610: Introduction to Operating Systems (Fall 2016) Chapter 5: CPU Scheduling Zhi Wang Florida State University Contents Basic concepts Scheduling criteria Scheduling algorithms Thread scheduling Multiple-processor

More information

CS370 Operating Systems

CS370 Operating Systems CS370 Operating Systems Colorado State University Yashwant K Malaiya Spring 2019 Lecture 8 Scheduling Slides based on Text by Silberschatz, Galvin, Gagne Various sources 1 1 FAQ POSIX: Portable Operating

More information

Clustering and Prefetching techniques for Network-based Storage Systems

Clustering and Prefetching techniques for Network-based Storage Systems Clustering and Prefetching techniques for Network-based Storage Systems By Dhawal N. Thakker Dr. Glenford Mapp Dr. Orhan Gemikonakli Networking Research Group Streaming applications Increased use of YouTube,

More information

SMD149 - Operating Systems

SMD149 - Operating Systems SMD149 - Operating Systems Roland Parviainen November 3, 2005 1 / 45 Outline Overview 2 / 45 Process (tasks) are necessary for concurrency Instance of a program in execution Next invocation of the program

More information

Preview. Process Scheduler. Process Scheduling Algorithms for Batch System. Process Scheduling Algorithms for Interactive System

Preview. Process Scheduler. Process Scheduling Algorithms for Batch System. Process Scheduling Algorithms for Interactive System Preview Process Scheduler Short Term Scheduler Long Term Scheduler Process Scheduling Algorithms for Batch System First Come First Serve Shortest Job First Shortest Remaining Job First Process Scheduling

More information

CPU Scheduling. The scheduling problem: When do we make decision? - Have K jobs ready to run - Have N 1 CPUs - Which jobs to assign to which CPU(s)

CPU Scheduling. The scheduling problem: When do we make decision? - Have K jobs ready to run - Have N 1 CPUs - Which jobs to assign to which CPU(s) CPU Scheduling The scheduling problem: - Have K jobs ready to run - Have N 1 CPUs - Which jobs to assign to which CPU(s) When do we make decision? 1 / 39 CPU Scheduling new admitted interrupt exit terminated

More information

Process behavior. Categories of scheduling algorithms.

Process behavior. Categories of scheduling algorithms. Week 5 When a computer is multiprogrammed, it frequently has multiple processes competing for CPU at the same time. This situation occurs whenever two or more processes are simultaneously in the ready

More information

Unit 3 : Process Management

Unit 3 : Process Management Unit : Process Management Processes are the most widely used units of computation in programming and systems, although object and threads are becoming more prominent in contemporary systems. Process management

More information

I/O, 2002 Journal of Software. Vol.13, No /2002/13(08)

I/O, 2002 Journal of Software. Vol.13, No /2002/13(08) 1-9825/22/13(8)1612-9 22 Journal of Software Vol13, No8 I/O, (,184) E-mail: {shijing,dcszlz}@mailstsinghuaeducn http://dbgroupcstsinghuaeducn :, I/O,,, - - - : ; I/O ; ; ; - : TP311 : A,, [1], 1 12 (Terabyte),

More information

Prefix Computation and Sorting in Dual-Cube

Prefix Computation and Sorting in Dual-Cube Prefix Computation and Sorting in Dual-Cube Yamin Li and Shietung Peng Department of Computer Science Hosei University Tokyo - Japan {yamin, speng}@k.hosei.ac.jp Wanming Chu Department of Computer Hardware

More information

Study of Load Balancing Schemes over a Video on Demand System

Study of Load Balancing Schemes over a Video on Demand System Study of Load Balancing Schemes over a Video on Demand System Priyank Singhal Ashish Chhabria Nupur Bansal Nataasha Raul Research Scholar, Computer Department Abstract: Load balancing algorithms on Video

More information

Comparative Evaluation of Probabilistic and Deterministic Tag Anti-collision Protocols for RFID Networks

Comparative Evaluation of Probabilistic and Deterministic Tag Anti-collision Protocols for RFID Networks Comparative Evaluation of Probabilistic and Deterministic Tag Anti-collision Protocols for RFID Networks Jihoon Choi and Wonjun Lee Division of Computer and Communication Engineering College of Information

More information

Power and Locality Aware Request Distribution Technical Report Heungki Lee, Gopinath Vageesan and Eun Jung Kim Texas A&M University College Station

Power and Locality Aware Request Distribution Technical Report Heungki Lee, Gopinath Vageesan and Eun Jung Kim Texas A&M University College Station Power and Locality Aware Request Distribution Technical Report Heungki Lee, Gopinath Vageesan and Eun Jung Kim Texas A&M University College Station Abstract With the growing use of cluster systems in file

More information

Experiments with Job Scheduling in MetaCentrum

Experiments with Job Scheduling in MetaCentrum Experiments with Job Scheduling in MetaCentrum Dalibor Klusáček, Hana Rudová, and Miroslava Plachá Faculty of Informatics, Masaryk University Botanická 68a, 602 00 Brno Czech Republic {xklusac,hanka@fi.muni.cz

More information

CSCE Operating Systems Scheduling. Qiang Zeng, Ph.D. Fall 2018

CSCE Operating Systems Scheduling. Qiang Zeng, Ph.D. Fall 2018 CSCE 311 - Operating Systems Scheduling Qiang Zeng, Ph.D. Fall 2018 Resource Allocation Graph describing the traffic jam CSCE 311 - Operating Systems 2 Conditions for Deadlock Mutual Exclusion Hold-and-Wait

More information

Operating Systems Comprehensive Exam. Spring Student ID # 3/16/2006

Operating Systems Comprehensive Exam. Spring Student ID # 3/16/2006 Operating Systems Comprehensive Exam Spring 2006 Student ID # 3/16/2006 You must complete all of part I (60%) You must complete two of the three sections in part II (20% each) In Part I, circle or select

More information