
D.A.S.T. Defragmentation And Scheduling of Tasks
University of Twente, Computer Science
Frank Vlaardingerbroek, Joost van der Linden, Stefan ten Heggeler, Ruud Groen
14th November 2003

Abstract

When mapping tasks on a Heterogeneous Tile Processor (HTP), situations arise in which communicating tasks are mapped on the processor surface in an inefficient way. Inefficiency may be caused when a process cannot be mapped on the tile that would give optimal performance. Moreover, the distance between two communicating processes might become too large when mapping processes on the tile of choice. Such less-than-ideal choices may be forced when the processor surface has a high degree of occupation. Ways to prevent or reduce this inefficiency can be found beforehand, while scheduling tasks or while mapping the processes of a task onto the processor, or afterwards by defragmenting the processor surface. The best reduction of inefficiency will be an optimal combination of these three factors: an intelligent scheduler cooperates with the mapper to determine which process will run when and where, while a defragmenter keeps a close watch, working towards local optima when possible and only interrupting the system for a major defragmentation when the gain is high enough to compensate for the large extra cost in time and energy. It is expected, however, that this will rarely be the case.

1 Introduction

The demands on mobile devices will grow in the near future. Ideas to combine the cell phone, PDA, photo and film camera, and maybe even the wallet into one handheld device might seem attractive and technologically possible. However, because the evolution of battery technology is not keeping up, these demands cannot be met with traditional techniques like the inefficient General Purpose Processor (GPP), and other solutions are sought. Application Specific Integrated Circuits (ASICs) might yield good performance in both speed and energy efficiency, but because of their high development time and cost, they might not be a good solution in this area of rapidly changing technology. In recent years, reconfigurable hardware, like the Field Programmable Gate Array (FPGA), has proved a promising field of research.

The FPGA is more flexible than the ASIC and has better performance than the GPP, but because the FPGA requires bit-level reconfiguration, the overhead of using this technology increases dramatically. The solution addressed in this article uses different kinds of processors in parallel. The Heterogeneous Tile Processor (HTP) is a chip containing a matrix of tiles, each potentially a different kind of processor. When mapping a set of communicating tasks on such a structure, a single process can often be mapped on several different tiles. The choice of tile then depends on how efficiently the process maps on that kind of tile, but the distance to other tiles is also important when the process has to communicate with processes on those tiles. Finding the optimal solution to this mapping problem is NP-hard. This paper introduces the problem as a graph theory problem; a formal treatment is given in [1] and is described here briefly. Next to intelligent mapping, intelligent scheduling is important as well. There might be a choice between mapping a high-priority task rather inefficiently now, and mapping a lower-priority task more efficiently on the same processor space. Ways of scheduling that may improve efficiency compared to a plain FIFO queue are described, with their advantages and disadvantages. Despite these preventive methods, the degree of fragmentation of communicating processes will probably rise above acceptable values at some point. In that case, defragmentation might be necessary. Several approaches to defragmentation are discussed, including a view on distributing the defragmentation effort over the three named stages.

2 Process Mapping on HTP

Mapping applications on an HTP might seem quite simple at first, but this proves not to be the case. An application is usually described by a task graph, which shows the processes of the application and the communication between them. In the most simplistic view, each process is mapped onto exactly one tile of the HTP. The problem is that for an optimal mapping, the tile chosen for each process is not just one of the kind with the best speed and power consumption properties; the chosen tiles also have to be at close distance to minimize the communication cost. Even for a single task graph with only a few processes and only a few tile choices per process, the total number of possibilities is already large. For example, a linear task graph of three processes, each with three different tile choices, already has 3^3 = 27 possibilities, and task graphs are potentially larger and more complex. This problem can be modeled using graph theory. The construction of the problem graph is done in three steps. The first step is to inflate the base graph: for each vertex in the base graph representing a process, a collection of vertices is made, where each new vertex represents a specific tile the process might be mapped onto. Step two consists of adding the edges to the new graph.

Figure 1: Base graph

Where an edge existed between two vertices of the base graph, the corresponding collections of vertices in the new graph are connected as a complete bipartite graph. In step three, each vertex and edge is weighted: the weight of a vertex is constructed according to the energy efficiency and speed of the mapping on that tile, and the weight of an edge represents the energy cost of transporting the data. The problem can now be described as finding a new graph by taking exactly one vertex from each collection, so that the total weight of the vertices and edges together is as small as possible. This problem proves to be NP-hard [1]. However, when the base graph is not too complex, the problem can be solved by graph reduction according to some simple paradigms [1]:

- Vertices in the base graph with degree 1 (end points) can be removed. The cost of the cheapest vertex plus connecting-edge combination is added to the vertex it was connected to. This vertex now represents 2 vertices and 1 edge.
- Vertices in the base graph with degree 2 can be removed as well. Using an algorithm like Dijkstra's [2], the shortest path in the problem graph is found from each vertex in the collection before the removed vertex to each vertex in the collection after it. The weight of the vertex in the collection before the removed vertex is increased with the weight of the removed vertex, and the weight of the connection is increased with the weight of the connections on the shortest path.

It is expected that task graphs contain only few vertices with degree 3, and that higher degrees will seldom appear. In that case, the graph reduction described above will solve the problem in most cases.

Figure 2: Inflated graph, after step one
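To make the problem statement concrete, the following sketch enumerates every way of picking one tile per process and returns the assignment with the smallest total weight, i.e. the exhaustive search whose size grows as quickly as the 3^3 = 27 combinations of the earlier example. The data layout (dictionaries of per-tile weights and per-tile-pair communication weights) and the one-process-per-tile restriction are illustrative assumptions, not part of the formal treatment in [1].

```python
from itertools import product

def cheapest_assignment(candidates, node_weight, edge_weight, edges):
    """Exhaustive search over the inflated problem graph.

    candidates : process -> list of candidate tiles
    node_weight: (process, tile) -> weight of mapping the process on that tile
    edge_weight: (tile, tile)   -> weight of communication between two tiles
    edges      : communicating process pairs from the base (task) graph
    """
    processes = list(candidates)
    best_cost, best_map = float("inf"), None
    # Taking exactly one vertex from each collection == one tile choice per process.
    for choice in product(*(candidates[p] for p in processes)):
        if len(set(choice)) < len(choice):
            continue                      # assume a tile hosts at most one process
        mapping = dict(zip(processes, choice))
        cost = sum(node_weight[p, mapping[p]] for p in processes)
        cost += sum(edge_weight[mapping[p], mapping[q]] for p, q in edges)
        if cost < best_cost:
            best_cost, best_map = cost, mapping
    return best_map, best_cost

if __name__ == "__main__":
    tiles = ["t1", "t2", "t3", "t4"]
    # Linear task graph a - b - c with illustrative weights.
    candidates = {"a": ["t1", "t2"], "b": ["t2", "t3"], "c": ["t3", "t4"]}
    node_weight = {("a", "t1"): 1, ("a", "t2"): 3, ("b", "t2"): 2, ("b", "t3"): 2,
                   ("c", "t3"): 1, ("c", "t4"): 4}
    # Distance-like communication weight between tiles (illustrative).
    edge_weight = {(x, y): abs(int(x[1]) - int(y[1])) for x in tiles for y in tiles}
    print(cheapest_assignment(candidates, node_weight, edge_weight,
                              edges=[("a", "b"), ("b", "c")]))
    # -> ({'a': 't1', 'b': 't2', 'c': 't3'}, 6)
```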

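The degree-1 reduction rule can likewise be sketched in a few lines: every candidate tile of the leaf's only neighbour absorbs the cheapest combination of a leaf tile plus the connecting edge, after which the leaf can be dropped from the problem. Again the data layout and the one-process-per-tile restriction are illustrative assumptions, and the degree-2 rule (which needs shortest paths computed with, for example, Dijkstra's algorithm [2]) is omitted.

```python
def reduce_leaf(leaf, neighbour, candidates, node_weight, edge_weight):
    """Fold a degree-1 process into its only neighbour (sketch of the first rule).

    For every candidate tile t of `neighbour`, the weight of t absorbs the
    cheapest combination of a leaf tile s plus the connecting edge s-t.
    The leaf then no longer needs to be considered explicitly.
    """
    for t in candidates[neighbour]:
        node_weight[neighbour, t] += min(
            node_weight[leaf, s] + edge_weight[s, t]
            for s in candidates[leaf] if s != t   # assume a tile hosts one process
        )
    del candidates[leaf]   # this vertex now represents 2 vertices and 1 edge

if __name__ == "__main__":
    tiles = ["t1", "t2", "t3"]
    candidates = {"a": ["t1", "t2"], "b": ["t2", "t3"]}   # a is a leaf attached to b
    node_weight = {("a", "t1"): 1, ("a", "t2"): 3, ("b", "t2"): 2, ("b", "t3"): 2}
    edge_weight = {(x, y): abs(int(x[1]) - int(y[1])) for x in tiles for y in tiles}
    reduce_leaf("a", "b", candidates, node_weight, edge_weight)
    print(node_weight["b", "t2"], node_weight["b", "t3"])   # -> 4 5
```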
Figure 3: Connected graph, after step two

3 Scheduling

Scheduling is the process of allocating the CPU to a given task. In our research on a heterogeneous reconfigurable tile processor, we schedule one or more tasks on the chip (the CPU). There are two main ways to schedule tasks on a processor: preemptive and non-preemptive. The disadvantage of preemptive scheduling on a reconfigurable tile processor is that the overhead generated by reconfiguring the processor increases. There is also the possibility that the fragmentation of the processor increases, since several tasks may be preempted and replaced by less fitting ones. These problems are of less importance when non-preemptive scheduling is used. The disadvantage of non-preemptive scheduling is that the average waiting time of the tasks increases over time [3] [4]. Algorithms using deadlines are ignored because the envisioned task model does not generate tasks with deadlines. Because of the difficulties with preemption, we have decided on non-preemptive scheduling; tasks are only preempted for defragmentation of the chip. For non-preemptive scheduling there are several different algorithms.

3.1 First-Come First-Served

This is by far the simplest CPU-scheduling algorithm. With this scheme the task that requests the CPU first is allocated the CPU first. The FCFS policy is easily implemented with a FIFO queue: when a task enters the ready queue, it is linked onto the tail of the queue, and when the CPU becomes free, it is allocated to the task at the head of the queue. The problem with this algorithm is that the average waiting time can be quite long, because new tasks have to wait for all preceding tasks to complete [4].

3.2 SJF Shortest Job First

This algorithm associates with each task its execution time. When the CPU is available, it is assigned to the task with the smallest execution time [4]. If two tasks have the same execution time, FCFS is used to break the tie. The advantage of this algorithm is the short waiting time for short tasks.
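As an illustration of the difference between these two non-preemptive policies, the sketch below simulates both on a single resource and reports the average waiting time. The task list, the field names and the single-resource model are illustrative assumptions, not part of the D.A.S.T. design.

```python
from collections import namedtuple

# Hypothetical task record: arrival time and execution time (illustrative).
Task = namedtuple("Task", ["name", "arrival", "exec_time"])

def simulate(tasks, policy):
    """Non-preemptive single-resource simulation.

    policy == "FCFS": run tasks in arrival order.
    policy == "SJF" : when the resource is free, pick the shortest ready task,
                      breaking ties by arrival order (FCFS tie-break).
    """
    pending = sorted(tasks, key=lambda t: t.arrival)
    time, waits = 0, {}
    while pending:
        ready = [t for t in pending if t.arrival <= time]
        if not ready:                      # resource idle until the next arrival
            time = pending[0].arrival
            continue
        if policy == "SJF":
            ready.sort(key=lambda t: (t.exec_time, t.arrival))
        task = ready[0]
        pending.remove(task)
        waits[task.name] = time - task.arrival   # time spent in the ready queue
        time += task.exec_time                   # run to completion, no preemption
    return sum(waits.values()) / len(waits)

if __name__ == "__main__":
    tasks = [Task("A", 0, 8), Task("B", 0, 2), Task("C", 0, 4)]
    print("FCFS average wait:", simulate(tasks, "FCFS"))  # -> 6.0
    print("SJF  average wait:", simulate(tasks, "SJF"))   # -> approx 2.67
```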

3.3 SGF Smallest Graph First

This is a variation on Shortest Job First, adapted for the HTP. A small task graph is given priority over a larger one, which has the advantage that a small graph is easier to map and will probably keep the fragmentation of the processor smaller. The difference with the previous algorithm is that the shortest job does not always have the smallest graph. Another advantage is the fixed priority of the different tasks: without knowing the layout of the HTP, it is possible to reduce the potential fragmentation by scheduling easy-to-map tasks first.

3.4 On-line scheduling

The last option is on-line scheduling [5] [6]. This is a dynamic way of scheduling: the layout of the processor at a certain moment decides the order of the tasks. This allows a pending task that fits well on the chip to be processed sooner and, combined with the mapping and defragmentation algorithms proposed in this paper, could prevent further fragmentation. A disadvantage of dynamic scheduling is the increased complexity of the algorithm. Another disadvantage is that more complex tasks can end up waiting even though there are sufficient resources to serve them, so without a good defragmentation algorithm the utilization of the chip is lower. On-line scheduling increases the problem of fragmentation, but by using the right mapping and defragmentation algorithms this problem can be solved [6]: the waiting time of the tasks can be reduced, and after defragmentation the utilization can also be improved.

4 Fragmentation problem

Fragmentation is the process, or result, of dividing a contiguous area into smaller non-contiguous parts by allocating parts of the area and leaving the rest to be allocated. Apart from tile processor allocation, this is also a known problem in certain memory allocation schemes and file systems. Fragmentation is expected to cause efficiency problems in the long run. After allocating and deallocating many processes to and from tiles of the tile processor, idle (unallocated) tiles will be scattered. Newly allocated processes that are mapped to these scattered tiles suffer from poor efficiency (concerning speed as well as power consumption), as the distance between tiles is much larger than necessary. We believe that the efficiency gained by defragmentation will be worth the effort compared to waiting for contiguous allocation space to appear when a process finishes.

4.1 Defragmentation

The main difficulty of HTP defragmentation is that it deals with a heterogeneous structure of many different tiles, whereas existing fragmentation problems concern mappings on homogeneous structures. This means that defragmentation methods used in those areas cannot simply be applied here.
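One of the points considered below is how to quantify the amount of fragmentation; the paper leaves the exact measure open. The sketch below shows one possible measure under illustrative assumptions: the total Manhattan distance between the tiles of communicating processes, compared with a lower bound of one hop per communicating pair. The grid model, the names and the lower bound are all hypothetical, not taken from the paper.

```python
# One possible fragmentation measure (an assumption, not the paper's definition):
# total Manhattan distance between tiles of communicating processes, compared
# with an optimistic lower bound of one hop per communicating pair.

def manhattan(a, b):
    """Hop distance between two tiles given as (row, column) coordinates."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def fragmentation(placement, edges):
    """placement: process -> tile coordinate; edges: communicating process pairs.

    Returns a ratio >= 1.0; values close to 1.0 mean communicating processes
    sit next to each other, larger values suggest defragmentation may pay off.
    """
    actual = sum(manhattan(placement[p], placement[q]) for p, q in edges)
    ideal = len(edges)          # at best, every communicating pair is adjacent
    return actual / ideal if ideal else 1.0

if __name__ == "__main__":
    placement = {"filter": (0, 0), "fft": (0, 3), "sink": (2, 3)}  # illustrative
    edges = [("filter", "fft"), ("fft", "sink")]
    print(fragmentation(placement, edges))   # (3 + 2) / 2 = 2.5
```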

Points that need to be considered:

- Complete or partial defragmentation. It cannot be stated that one is better than the other, as both have advantages and disadvantages. Complete defragmentation consists of making a plan of which tasks are to run on which tiles and carrying out this plan. It may result in better overall efficiency than partial defragmentation, because the allocation improvements are near a global optimum, but the costs in time and energy increase severely, and the benefit is lost as soon as some tasks finish and others are ready to be scheduled. Partial defragmentation has the benefit that it is less of a burden to the currently mapped tasks, but may also have less spectacular results. Either may be more appropriate in one situation or the other.
- A method for determining the amount of defragmentation needed is required. This amount is closely related to the difference between the tile distances of an ideal mapping and those of the current mapping. Existing mathematical theories may be used for this purpose.
- Performing a large defragmentation may be more cost-effective when it is done with a few processes at a time, while the other processes continue running. The consideration is that we wish to make as few processes idle as possible, but it may be inefficient to move a process to the right spot in many small steps.
- The strategy for triggering defragmentation is a subject for discussion. Performing defragmentation at constant intervals has the advantage that scheduling of the defragmentation is kept simple and a completely fragmented mapping is avoided. The disadvantage is that scheduling a defragmentation when it is not necessary imposes an inefficiency, and a mapping may remain fragmented for a long while before the next defragmentation is scheduled. The alternative is scheduling defragmentation based on properties of the current mapping: monitoring the mapping efficiency gives grounds to decide when to schedule a defragmentation. The former strategy tends to defragment mappings that are only slightly fragmented, which makes each defragmentation faster, while the latter schedules a defragmentation only when it is strictly necessary, which makes it worth the effort of interrupting tasks and the energy consumption of reconfiguration.

4.2 Proposed solution

Currently, simulation results are not available to justify a decision on which method and strategy to employ. At the moment, the following process can be proposed as part of the solution; a sketch of the selection step follows the list.

- Identify a process that suffers from poor efficiency due to large distances between tiles or less suitable tiles for its specific tasks.
- Calculate the layout improvement for every possible replacement tile, and select the replacement tile that results in the highest increase in performance.
- Configure the selected replacement tile for its task.
- Interrupt the task, transfer the state information to the replacement tile, and reconfigure the routing of the other tiles.
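A minimal sketch of this per-process relocation step is given below. The cost model, the data structures and every name in it (process_cost, best_relocation, tile_cost, free_tiles, min_gain) are invented for illustration; none of them come from the paper, and the gain estimate is deliberately simplified to one process's computation plus communication cost.

```python
# Sketch of the proposed single-process relocation step (names and cost model
# are illustrative assumptions, not the paper's interfaces).

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def process_cost(proc, tile, placement, edges, tile_cost):
    """Computation cost of `proc` on `tile` plus communication cost to its peers."""
    comm = sum(manhattan(tile, placement[q]) for p, q in edges if p == proc)
    comm += sum(manhattan(placement[p], tile) for p, q in edges if q == proc)
    return tile_cost[proc][tile] + comm

def best_relocation(proc, placement, edges, tile_cost, free_tiles, min_gain=1):
    """Return (tile, gain) for the most beneficial move of `proc`, or None.

    Only moves whose estimated gain reaches `min_gain` are proposed, so the
    interruption and reconfiguration overhead is not spent on marginal wins.
    """
    current = process_cost(proc, placement[proc], placement, edges, tile_cost)
    candidates = [t for t in free_tiles if t in tile_cost[proc]]  # compatible tiles only
    best = None
    for tile in candidates:
        gain = current - process_cost(proc, tile, placement, edges, tile_cost)
        if gain >= min_gain and (best is None or gain > best[1]):
            best = (tile, gain)
    return best

if __name__ == "__main__":
    placement = {"fft": (0, 0), "sink": (0, 3)}                 # illustrative layout
    edges = [("fft", "sink")]
    tile_cost = {"fft": {(0, 0): 2, (0, 2): 2}, "sink": {(0, 3): 1}}
    print(best_relocation("fft", placement, edges, tile_cost, free_tiles={(0, 2)}))
    # -> ((0, 2), 2): moving fft next to sink saves two hops of communication
```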

Although performance characteristics are not available, it can be expected that this method will prove efficient. The following reasoning preceded this conclusion: the time that a process is interrupted is kept to a minimum, and in order to meet a certain ratio of efficiency gain versus time and energy cost, a minimum efficiency gain can be set as a requirement. Modeling and simulation of mapping, scheduling and defragmentation methods is required to compare the efficiency costs and benefits of this method of defragmentation with others.

5 Discussion

Our research has indicated that the most frequently used method for mapping processes on a processor surface is on-line scheduling with a FIFO queue. Though this method might normally increase fragmentation, when used in close collaboration with an intelligent mapping algorithm we expect a resulting method that is both fast and efficient. The proposed solution of slack-tile defragmentation is only a partial defragmentation, but it is the approach we expect to be most efficient. Conclusions about the use of complete defragmentation cannot be made, as its performance improvements cannot be estimated without simulation. Simulation of the different possibilities will be necessary to verify the efficiency of the separate parts and to find the most efficient combination.

References

[1] Broersma H., Paulusma D., Smit G.J.M., Vlaardingerbroek F., Woeginger G.J. The computational complexity of the minimum weight processor assignment problem. Working paper, University of Twente, Enschede, The Netherlands, 2004.

[2] Dijkstra E.W. A note on two problems in connexion with graphs. Numerische Mathematik, 1:269-271, 1959.

[3] Buttazzo G.C. Hard real-time computing systems. Kluwer Academic Publishers, Dordrecht, 2002.

[4] Silberschatz A., Galvin P.B., Gagne G. Operating system concepts, sixth edition. John Wiley & Sons, Inc., New York, 2003.

[5] Walder H., Platzner M. Online scheduling for block-partitioned reconfigurable devices. Computer Engineering and Networks Lab, Swiss Federal Institute of Technology, Zurich, Switzerland.

[6] Diessel O.F. On scheduling dynamic FPGA reconfigurations. Department of Computer Science and Software Engineering, The University of Newcastle, Newcastle, Australia.
