Multiprocessor scheduling

Chapter 10: Multiprocessor scheduling

When a computer system contains multiple processors, a few new issues arise. Multiprocessor systems can be categorized as follows. A loosely coupled or distributed system consists of a collection of relatively autonomous systems connected by an interconnection network, each with its own memory and I/O channels. A system with functionally specialized processors, such as an I/O, network, graphics, or math coprocessor, works in an environment controlled by a general-purpose master processor, to which it provides specific services. A tightly coupled multiprocessing system consists of processors that share a common memory and are under the integrated control of an operating system; the currently popular multi-core architecture falls into this category.

Granularity

One way to characterize and compare multiprocessor systems is to consider their synchronization granularity, namely, the frequency of synchronization between processors in a system. We can thus categorize parallelism by its degree of granularity. With independent parallelism, there is no explicit synchronization among processes: each process represents a separate, independent application or job. For example, in a time-sharing system such as turing, each user performs a particular application, such as C programming, system services, or database work. The multiprocessor system thus provides the same service as a multiprogrammed uniprocessor system, but with shorter response time from the user's perspective.

With coarse- and very coarse-grained parallelism, there is minimal synchronization among processes. This situation can be handled as a set of concurrent processes running on a multiprogrammed uniprocessor system, and can be supported on a multiprocessor with little change to the associated software. As an example, a program has been developed that takes in specifications of files needing recompilation and decides which of these compilations can be done simultaneously. Reportedly, the actual speedup was even more than expected, since some of the compiled code could be shared. In these situations, linear speedup is the most we can expect. For further details, check out the Dynamic multi-threaded programming unit in my Algorithm notes.
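To make this concrete, here is a minimal Python sketch of such coarse-grained parallelism: independent compilation jobs run in a small worker pool, and the only synchronization is the final wait for all of them. The compiler command and file names are made up for illustration; a real build tool would also track dependencies.

```python
# Coarse-grained parallelism: independent jobs, synchronized only at the end.
# "cc" and the source file names are hypothetical placeholders.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def compile_one(source):
    # Each compilation is an independent job.
    return subprocess.run(["cc", "-c", source], capture_output=True).returncode

sources = ["a.c", "b.c", "c.c"]          # files that need recompilation

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(compile_one, sources))  # waits for all jobs

print("all succeeded" if all(rc == 0 for rc in results) else "some failed")
```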

Medium- and fine-grained parallelism

We saw earlier that an application can be effectively implemented as a collection of threads within a single process, e.g., mergesort, where the potential parallelism must be explicitly specified by the programmer. Typically, a high degree of coordination is needed among those threads, e.g., the timing of merging, which leads to medium-grained parallelism. Because the threads of a single process interact so frequently, scheduling decisions concerning one thread may affect the other threads of the same process. Fine-grained parallelism represents a much more complex use of parallelism and remains a very difficult area. In all these cases, a scheduler plays a central role.
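Since the mergesort example carries the key idea, here is a small sketch using Python threads: the two halves are sorted concurrently, and the merge must wait for both, which is exactly the coordination the scheduler must respect. (CPython's global interpreter lock limits real speedup here; the point is the pattern, not performance.)

```python
import threading

def merge(left, right):
    # Standard two-way merge of sorted lists.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def parallel_mergesort(data):
    if len(data) <= 1:
        return data
    mid = len(data) // 2
    results = [None, None]

    def sort_half(idx, half):
        results[idx] = parallel_mergesort(half)

    sibling = threading.Thread(target=sort_half, args=(0, data[:mid]))
    sibling.start()
    sort_half(1, data[mid:])   # sort the second half in this thread
    sibling.join()             # timing of merging: wait for the sibling thread
    return merge(results[0], results[1])

print(parallel_mergesort([5, 2, 9, 1, 7, 3]))   # -> [1, 2, 3, 5, 7, 9]
```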

Various design issues

When dealing with a multiprocessor system, besides the dispatching policies, we have to address a few other issues, including how to assign processors to processes, now that we have more to give away, and how to make use of multiprogramming on individual processors. Assuming a fair and uniform environment, the simplest approach to processor assignment is to treat all the processors as a pool. If a processor is permanently assigned to a process throughout its life, we associate a short-term queue with each processor. This leads to lower overhead, but to uneven workloads across processors. An alternative is to use a common queue to serve all the processes; a process may then run on different processors at different times.
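The two queueing schemes can be sketched as data structures; this is a toy illustration with made-up process IDs, not code from the textbook.

```python
from collections import deque
import threading

class PerProcessorQueues:
    """One short-term queue per processor: low contention, but the load
    can become uneven once processes are pinned."""
    def __init__(self, n_cpus):
        self.queues = [deque() for _ in range(n_cpus)]
    def admit(self, pid):
        min(self.queues, key=len).append(pid)   # naive: shortest queue wins
    def next_for(self, cpu):
        q = self.queues[cpu]
        return q.popleft() if q else None

class SharedQueue:
    """One common queue for all processors: even load, but every dispatch
    takes the same lock, and a process may migrate between processors."""
    def __init__(self):
        self.q = deque()
        self.lock = threading.Lock()
    def admit(self, pid):
        with self.lock:
            self.q.append(pid)
    def next_for(self, cpu):
        with self.lock:
            return self.q.popleft() if self.q else None

shared = SharedQueue()
shared.admit("p1"); shared.admit("p2")
print(shared.next_for(cpu=0), shared.next_for(cpu=1))   # p1 p2
```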

Processor assignment

Regarding the actual assignment, at least two approaches can be followed. With a master/slave approach, key kernel functions, including the scheduler of the operating system, always run on a master processor, while the rest of the processors run user processes. When a process on a slave processor needs some service, it simply sends a request to the master processor and waits for the response. This approach is very simple and needs no conflict-resolution mechanism. But the master can become a bottleneck, and its failure can bring down the whole system.
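Here is a minimal sketch of the master/slave idea, modeling processors as Python threads: kernel-style service runs only on the master, and the slaves submit requests and wait for replies. The request format and the "service" performed are illustrative assumptions.

```python
import queue, threading

requests = queue.Queue()   # all service requests funnel to the master

def master():
    # Only the master runs the (hypothetical) kernel function, so no
    # conflict-resolution mechanism among processors is needed.
    while True:
        item = requests.get()
        if item is None:                 # shutdown sentinel
            break
        reply_box, arg = item
        reply_box.put(f"serviced({arg})")

def slave(name):
    reply_box = queue.Queue(maxsize=1)
    requests.put((reply_box, name))      # send a request to the master...
    print(name, "got:", reply_box.get()) # ...and wait for its response

m = threading.Thread(target=master); m.start()
slaves = [threading.Thread(target=slave, args=(f"slave{i}",)) for i in range(3)]
for s in slaves: s.start()
for s in slaves: s.join()
requests.put(None); m.join()
```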

The other approach

In a peer structure, the OS can execute on any processor, and each processor does its own scheduling among the available processes. This certainly makes the situation messier, since the OS must ensure both that two processors do not choose the same process and that no process is starved, i.e., never gets chosen. Also, competition among processes for various resources must be resolved. There is plenty of room between these two extremes: for example, a subset of processors, instead of just one, can be selected to run the kernel functions, including the dispatching policies.

Question: When a process is statically assigned to a processor for its lifetime, should that processor be multiprogrammed, i.e., kept busy all the time?

Answer: It depends. For coarse-grained, or independent, processes, each process contains a large number of instructions, so a processor would often sit idle while its processes are blocked waiting for service; letting the processor switch among processes is then a necessity for better performance. But for medium-grained applications running on a system with many processors, it is no longer that important to keep every processor busy all the time. Sometimes it is simply not realistic for all of them to be busy, since we have to make sure that, e.g., all the threads of a process are ready to run before assigning them to processors.

How to dispatch?

A key design issue for multiprocessor scheduling is the actual selection of a process for execution. (Still remember the ten policies that we went through in the previous chapter?) In a multiprogrammed environment, more sophisticated scheduling, based on such factors as priority and past usage, may lead to better performance than a simpler scheduling algorithm such as FCFS. But a more sophisticated algorithm often incurs more overhead, which may be unnecessary or even counterproductive for overall performance.

Simplicity is good

In most traditional multiprocessor systems, processes are not associated with dedicated processors. Instead, a single queue for all the processors serves all the competing processes. If priority is an important issue, then several queues are used, each taking care of a class of processes with the same priority. Various simulation studies show that the specific scheduling algorithm matters much less with two processors than with one. Thus, a simple FCFS strategy, or FCFS coupled with a static priority scheme, may suffice for a multiprocessor system.
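As a sketch of FCFS coupled with a static priority scheme: one FIFO queue per priority class, scanned from the highest class down. The three-level split and the task names are invented for illustration.

```python
from collections import deque

class PriorityFCFS:
    def __init__(self, levels=3):
        self.queues = [deque() for _ in range(levels)]   # 0 = highest priority
    def admit(self, pid, prio):
        self.queues[prio].append(pid)
    def dispatch(self):
        # Any idle processor calls this: take the oldest process
        # from the highest-priority non-empty class.
        for q in self.queues:
            if q:
                return q.popleft()
        return None

sched = PriorityFCFS()
sched.admit("editor", 1); sched.admit("daemon", 2); sched.admit("sensor", 0)
print(sched.dispatch(), sched.dispatch(), sched.dispatch())
# -> sensor editor daemon
```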

Thread scheduling

An application can be implemented as a set of threads that cooperate and execute concurrently in the same address space. On a uniprocessor, threads can overlap, e.g., an I/O request with processing. Because thread switching involves much less overhead than process switching, this overlapping leads to better performance with little penalty. The gain can be further enhanced in a multiprocessor environment, where threads can truly execute in parallel, namely, running on different processors at the same time. It has been shown that different thread scheduling algorithms can have quite different impacts.

What are they?

With load sharing, processes are not assigned to a particular processor. When a processor is idle, it selects a thread from a global queue serving all processors. With this strategy, the load is evenly distributed among processors, and no centralized scheduler is required. Such a policy can be implemented in several ways, depending on the queue discipline: e.g., FCFS; giving priority to threads of the process with the smallest number of threads; or a preemptive variant of the latter. (Still remember the Priority queue stuff?)
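Here is a sketch of the second variant just named, where priority goes to threads of the process with the fewest threads, using a heap as the global queue; a sequence counter keeps FCFS order within a tie. All names are illustrative.

```python
import heapq

class LoadSharingQueue:
    def __init__(self):
        self.heap = []   # entries: (threads_in_process, arrival_seq, thread_id)
        self.seq = 0     # FCFS tie-break among equal thread counts
    def admit(self, thread_id, threads_in_process):
        heapq.heappush(self.heap, (threads_in_process, self.seq, thread_id))
        self.seq += 1
    def take(self):
        # Called by any idle processor: smallest thread count first.
        return heapq.heappop(self.heap)[2] if self.heap else None

q = LoadSharingQueue()
q.admit("A.t1", 4); q.admit("B.t1", 1); q.admit("A.t2", 4)
print(q.take(), q.take(), q.take())   # -> B.t1 A.t1 A.t2
```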

Can't be all good news

The global queue has to be accessed in a mutually exclusive way, so that no two processors grab the same thread. Another issue is that, since a thread taken off a processor is unlikely to resume execution on that same processor, the processors' caches are used less effectively. Finally, when all threads are treated alike, it is unlikely that all the threads of a process will be running at the same time, which makes their coordination rather difficult if such coordination is needed.

Other approaches

With gang scheduling, a set of related threads is scheduled to run on a set of processors at the same time, on a one-to-one basis. Synchronization blocking may then be reduced, resulting in less process switching and lower overhead. As an extreme form of gang scheduling, the dedicated processor assignment strategy is the opposite of load sharing: each program is allocated a number of processors equal to the number of its threads, for the duration of the program's execution. With dynamic scheduling, the number of threads in a process can be altered during execution, which may allow the OS to adjust the load to improve utilization.
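A toy illustration of gang scheduling: in each time slot, all threads of one process are placed on processors one-to-one, so cooperating threads run simultaneously; leftover processors sit idle for that slot, which is the cost of the scheme. The gang sizes are made up.

```python
def gang_schedule(gangs, n_cpus):
    """gangs: {process_name: thread_count}; yields one CPU assignment per slot."""
    for proc, n_threads in gangs.items():
        if n_threads > n_cpus:
            raise ValueError(f"{proc} needs more processors than exist")
        # one thread per processor; remaining processors are idle this slot
        yield {cpu: f"{proc}.t{cpu}" for cpu in range(n_threads)}

for slot in gang_schedule({"P1": 3, "P2": 2}, n_cpus=4):
    print(slot)
# -> {0: 'P1.t0', 1: 'P1.t1', 2: 'P1.t2'}
#    {0: 'P2.t0', 1: 'P2.t1'}
```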

Real-time scheduling

In real-time computing, correctness depends not only on what results are derived, but also on when these results are derived. In general, such a system contains real-time tasks, each with a certain degree of urgency. They typically respond to outside events that happen in real time (e.g., from sensors), and thus have to keep up with those events and respond in a timely fashion. More specifically, a hard real-time task is one that must meet its deadline (Project 4 is due on May 4 by 10 p.m.); a soft real-time task has an associated deadline that is desirable, but not mandatory (It is 10 p.m., do you know where your children are?).

Characteristics

A real-time OS is deterministic if it performs operations at fixed, predetermined times, or within predetermined time intervals. When multiple processes are competing for resources, no system can be fully deterministic. Whether an OS can deterministically satisfy competing requests depends on the speed with which it can handle interrupts, and on whether the system has enough capacity to meet all the requests within a predetermined time frame.

Responsiveness

Determinism is about how long an OS waits before acknowledging an interrupt, while responsiveness is about how long it takes the OS to serve that interrupt. With responsiveness, we thus have to consider such factors as how long it takes to set up the interrupt and start the interrupt service routine; how long it takes to execute the service routine; and the effect of interrupt nesting, namely, whether an interrupt can occur while a previous one has not been completed. (Cf. Chapter 1 notes, pp. 27.) In a real-time system, a user must be allowed to distinguish between hard and soft tasks and to specify the relative priorities among all the tasks.

Reliability

This is far more important in a real-time system, for obvious reasons. Fail-soft operation refers to the ability of a system, when it fails, to preserve as much capability and data as possible. Stability, one aspect of reliability, refers to the ability, when not all deadlines can be met, to still meet the deadlines of the most critical tasks. To meet all these requirements, such a system has to have features such as fast process switching, small size, prompt response time, multitasking with interprocess communication tools, fast backup tools, preemptive scheduling based on priority, very short intervals during which interrupts are disabled, and special alarms and time-outs.

What is really important?

The heart of such a real-time system is the short-term scheduler: how to make sure that all the hard real-time tasks complete by their deadlines, and that as many soft real-time tasks as possible complete by theirs. Most contemporary real-time systems are unable to deal directly with deadlines. Instead, they are designed to be as responsive as possible to real-time tasks, so that when a real-time task arrives, it can be quickly scheduled. Thus, the system typically has to provide a very short, deterministic response time under a wide range of conditions. All of this has to be quantified.

Scheduling algorithms

Typically, a real-time task is given a very high priority, and thus is scheduled as soon as the current process completes or blocks. Another approach combines priority with clock-based interrupts: preemption occurs at regular intervals, and at each such interval the currently running task is preempted if a higher-priority task is waiting. For more urgent tasks, immediate preemption can be adopted, which responds to a real-time task immediately, unless the system is in a critical lock-out section of the code.

Homework: Go through the two examples given in the deadline scheduling subsection of the textbook (pp. 451-455) and complete Problems 10.1 and 10.2.
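To close, here is a small sketch of the priority-plus-clock-interrupt scheme described above: at every clock tick, the most urgent waiting task runs, so a waiting higher-priority task effectively preempts the current one at interval boundaries. Priorities, names, and service times are invented for the example.

```python
import heapq

def run_with_clock_preemption(tasks, quantum=1):
    """tasks: (priority, name, remaining_time); a lower number is more urgent."""
    ready = list(tasks)
    heapq.heapify(ready)
    while ready:
        prio, name, remaining = heapq.heappop(ready)   # most urgent waiting task
        remaining -= quantum        # it runs until the next clock interrupt
        print(f"tick: ran {name} (priority {prio})")
        if remaining > 0:
            # At the interrupt the task rejoins the queue; if a higher-priority
            # task is waiting, it is chosen first, i.e., this one is preempted.
            heapq.heappush(ready, (prio, name, remaining))

run_with_clock_preemption([(2, "logger", 2), (0, "sensor", 1), (1, "control", 2)])
```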