Lecture 17: Threads and Scheduling. Thursday, 05 Nov 2009

CS211: Programming and Operating Systems
Lecture 17: Threads and Scheduling
Thursday, 05 Nov 2009

Today
1. Introduction to threads; advantages of threads
2. User and Kernel Threads
3. Types of Multithreading
4. Scheduling Processes
5. Preempting
6. The Dispatcher
7. Scheduling Criteria
8. Scheduling Algorithms: First-Come, First-Served (FCFS); Shortest-Job-First (SJF); Round Robin (RR)

Introduction to threads
To date we have always assumed that a process has a single thread of control. However, this need not be the case...
Example 1: A word processor may simultaneously display text, check spelling, read keystrokes and spool output to a printer. All of these could be achieved by separate processes, but they would need to share certain resources, particularly program data. So, instead of using a traditional process, we might use threads...

Introduction to threads
A thread (or lightweight process) is a basic unit of CPU utilisation. Rather than a process being dedicated to one sequential list of tasks, it may be broken up into threads. Each thread has its own (distinct):
program counter: the next instruction to be executed
register set: operands for CPU instructions
stack: temporary variables, etc.
Threads belonging to the same process share the code section, data section and operating-system resources, collectively known as the task. A traditional process is sometimes known as a heavyweight process.
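To make the shared/private split concrete, here is a minimal POSIX Pthreads sketch (not from the slides; the worker function and shared_counter variable are illustrative names). The two threads update the same global variable in the shared data section, while each keeps its own stack variable and program counter. Compile with gcc -pthread.

    #include <pthread.h>
    #include <stdio.h>

    /* Shared data section: visible to every thread in the process. */
    static int shared_counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        int id = *(int *)arg;            /* private copy on this thread's own stack */
        for (int i = 0; i < 1000; i++) {
            pthread_mutex_lock(&lock);
            shared_counter++;            /* both threads update the same variable */
            pthread_mutex_unlock(&lock);
        }
        printf("thread %d done\n", id);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        int id1 = 1, id2 = 2;
        pthread_create(&t1, NULL, worker, &id1);
        pthread_create(&t2, NULL, worker, &id2);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Prints 2000: the data section is shared, unlike with fork(). */
        printf("shared_counter = %d\n", shared_counter);
        return 0;
    }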

Introduction to threads
Threads differ from the child processes that we looked at last week (created using fork()): though a child inherits a copy of its parent's memory space, a thread shares it.
Example 2: Java uses threads to (among other things) simulate asynchronous behaviour. If a Java program attempts to connect to a server, it will block until it makes the connection. In order to perform a time-out it sets up two threads: one to make the connection and another to sleep for, say, 60 seconds. When the sleeping thread wakes, it generates an interrupt if a connection has not been established.
Threads were initially introduced by Sun operating systems, but are now a feature of any modern OS, in particular versions of Windows since NT/2000.

Introduction to threads: advantages of threads
The reasons that an OS might use threads over heavyweight processes are:
(i) Responsiveness: part of a process may continue working even if another part is blocked, for example while waiting for an I/O operation to complete.
(ii) Resource sharing: for example, you can have several hundred threads running at the same time. If they were separate processes, each would need its own memory space; as threads they can share that resource.
(iii) Economy: thread creation is much faster than process creation. E.g., on Solaris, it takes 30 times as long to create a process as a thread. Context switching between threads is also faster (about 5 times).
(iv) Efficiency: threads belonging to the same process share a common memory space, so they do not need support from the OS to communicate.
(v) Utilisation of multiprocessor architectures: each thread can execute on a different processor.

User and Kernel Threads
Threads may be one of two types: user or kernel.
User threads are implemented by a thread library: a collection of routines and functions for manipulating threads. Hence, user threads are dealt with at the compiler/library level, above the level of the OS kernel.
Advantage: user threads are quick to create and destroy because system calls are not required.
Disadvantage: if the kernel itself is not threaded, then if one thread makes a blocking system call, the entire process will be blocked.
POSIX Pthreads are user threads found on most Unix systems.

User and Kernel Threads
Kernel threads: thread creation, management and scheduling are controlled by the operating system. Hence they are slower to manipulate than user threads, but do not suffer from blocking problems: if one thread performs a blocking system call, the kernel can schedule another thread from that application for execution.
Windows supports kernel threads.

Types of Multithreading
User and kernel threads are combined in different ways. The most important models are:
Many-to-One
One-to-One
Many-to-Many
Many-to-One model: a process is defined with address space and resource ownership. Each process is associated with a single kernel thread, but may create multiple user threads to be executed within the process. The Many-to-One (M:1) model is used on systems without kernel thread support. Blocking system calls are a problem.

Types of Multithreading
One-to-One model: each user thread is mapped to a kernel thread. Hence there is better concurrency than in the many-to-one model, and threads can run in parallel on a multiprocessor. However, creating a user thread also creates a kernel thread, which has a (relatively) large system overhead. Most one-to-one systems limit the number of threads for that reason. One-to-one may be used on systems with kernel thread support. Windows and the Linux C library implement the one-to-one model.

Types of Multithreading
Many-to-Many model: user threads are multiplexed among a smaller or equal number of kernel threads. The number of kernel threads per process may depend on the type of application or the type of computer (single processor, dual processor, multiprocessor, etc.). Like the many-to-one model, it lets programmers create many threads quickly. Like the one-to-one model, concurrency can be achieved and blocking need not be a problem. Digital Unix supports the Many-to-Many model.
In all of the above models, all the threads belonging to a process are suspended/terminated when the process is suspended/terminated.

Scheduling Processes
CPU scheduling is the basis of multiprogrammed operating systems, and so of multitasking operating systems. The underlying concepts of CPU scheduling are:
Goal: maximum CPU utilisation, i.e., there should be something running at any given time.
Based around: the CPU burst / I/O burst cycle: a running process alternates between executing instructions (CPU burst) and waiting for interaction with peripheral devices (I/O burst).
Depends on: the burst distribution: is a given process predominantly concerned with computation or with I/O? Also, a process usually begins with a long CPU burst and almost always ends with a CPU burst.

Preempting
The CPU scheduler selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them. This happens when a process:
(i) switches from running to waiting state
(ii) terminates
(iii) switches from running to ready state
(iv) switches from waiting to ready state
Scheduling that depends only on (i) and (ii) is non-preemptive. All other scheduling is preemptive.

Preempting
If the scheduler uses only non-preemptive methods, then once a process is allocated the CPU it will retain it until it terminates or changes its state. E.g., Windows 3.1 and versions of MacOS prior to v8.
For preemptive methods, the scheduler must have an algorithm for deciding when control of the CPU must be passed to another process.
Note: non-preemptive methods will work on any hardware, but preemptive ones will not. Why?

The Dispatcher
The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
1. switching context
2. switching to user mode
3. jumping to the proper location in the user program to restart that program
The dispatcher gives rise to the phenomenon of dispatch latency, i.e., the time it takes for the dispatcher to stop one process and start another running.

Scheduling Criteria
The choice/evaluation of scheduling algorithms is usually based on the following:
CPU utilisation: keep the CPU as busy as possible.
Throughput: the number of processes that complete their execution per time unit.
Turnaround time: the time that elapses between a process entering the ready state for the first time and leaving the running state for the last time.
Waiting time: the amount of time a process has spent waiting in the ready queue.
Response time: the amount of time from when a request was submitted until the first response is started (not including the time it takes to complete the response).
In general one would try to maximise CPU utilisation and throughput, and minimise turnaround time, waiting time and response time.
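As a small illustration (not from the slides): suppose a process enters the ready queue at time 0 ms, first receives the CPU at time 4 ms, then runs without further waiting and finishes at time 12 ms. Its response time and waiting time are both 4 ms, and its turnaround time is 12 ms.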

Scheduling Algorithms
There are many algorithms for scheduling, including:
1. First-Come, First-Served (FCFS)
2. Shortest-Job-First (SJF)
3. Round Robin (RR)
4. Priority Scheduling
5. Multilevel queueing
In describing each of these, we'll consider a few examples. For each example we'll assume that each process has a single CPU burst, measured in milliseconds. Algorithms will be compared by calculating the average waiting time.

Scheduling Algorithms
Example 1
Proc   Burst time (ms)
P1     10
P2     10
P3     10

Example 2
Proc   Burst time (ms)
P1     19
P2     10
P3     1

Scheduling Algorithms: First-Come, First-Served (FCFS)
The simplest algorithm is FCFS: the first process that requests the CPU is given it.
Advantage: this is the simplest algorithm to implement, requiring only a FIFO queue.
Disadvantage: it has a long average waiting time.
For Example 1, the average waiting time is (0 + 10 + 20)/3 = 10.
For Example 2, the average waiting time is (0 + 19 + 29)/3 = 16.
The FCFS algorithm is also known to suffer from the Convoy Effect.
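As a quick check of the two calculations above, here is a small C sketch (not from the slides; the function name fcfs_avg_wait is illustrative, and it assumes, as the examples do, that all processes arrive at time 0 and are served in the order listed):

    #include <stdio.h>

    /* Average waiting time under FCFS, assuming all processes
     * arrive at time 0 and are served in the order given. */
    static double fcfs_avg_wait(const int burst[], int n)
    {
        int wait = 0, total_wait = 0;
        for (int i = 0; i < n; i++) {
            total_wait += wait;   /* waiting time of process i */
            wait += burst[i];     /* every later process waits this much longer */
        }
        return (double)total_wait / n;
    }

    int main(void)
    {
        int ex1[] = {10, 10, 10};
        int ex2[] = {19, 10, 1};
        printf("Example 1: %.1f ms\n", fcfs_avg_wait(ex1, 3)); /* 10.0 */
        printf("Example 2: %.1f ms\n", fcfs_avg_wait(ex2, 3)); /* 16.0 */
        return 0;
    }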

Scheduling Algorithms: Shortest-Job-First (SJF)
Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest burst first.
For Example 2, the average waiting time is (0 + 1 + 11)/3 = 4. So this algorithm is much better than FCFS!
Advantage: SJF is optimal: it gives the minimum average waiting time for a given set of processes.
Disadvantage: it cannot be employed in a realistic setting because we would be required to know the length of the next CPU burst before it happens. One can only estimate the length. This can be done by using the lengths of previous CPU bursts, using some form of averaging.
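One standard form of this averaging (the exponential average found in most OS textbooks; the names below are illustrative) predicts the next burst as tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n, where t_n is the measured length of the most recent burst, tau_n is the prediction that was made for it, and 0 <= alpha <= 1 weights recent history against older history:

    /* Predict the next CPU burst length by exponential averaging.
     * measured:  length of the burst that has just finished
     * predicted: the previous prediction for that burst
     * alpha:     weight given to recent history, 0 <= alpha <= 1 */
    static double next_burst_estimate(double measured, double predicted, double alpha)
    {
        return alpha * measured + (1.0 - alpha) * predicted;
    }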

Scheduling Algorithms: Shortest-Job-First (SJF)
There are two versions of SJF:
non-preemptive: once the CPU is given to a process it cannot be preempted until it completes its CPU burst.
preemptive: if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, the current process is preempted. This scheme is known as Shortest-Remaining-Time-First (SRTF).

Scheduling Algorithms: Round Robin (RR)
Each process gets a small unit of CPU time, called a time quantum or time slice, usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units. (Why?)
The size of the quantum is of central importance to the RR algorithm. If it is too large, then RR is just the FCFS model. If it is too small, then too much time is spent on context switching.
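To see RR in action on Example 2, here is a small simulation sketch (not from the slides; rr_avg_wait is an illustrative name, all processes are assumed to arrive at time 0, and context-switch cost is ignored). With every process present from the start, cycling through the unfinished processes in a fixed order is equivalent to the ready-queue behaviour described above.

    #include <stdio.h>

    /* Average waiting time under Round Robin with quantum q, for up to
     * 16 processes that all arrive at time 0. */
    static double rr_avg_wait(const int burst[], int n, int q)
    {
        int remaining[16], finish[16];
        int time = 0, done = 0;

        for (int i = 0; i < n; i++)
            remaining[i] = burst[i];

        while (done < n) {
            for (int i = 0; i < n; i++) {
                if (remaining[i] == 0)
                    continue;
                int slice = remaining[i] < q ? remaining[i] : q;
                time += slice;            /* run this process for one slice */
                remaining[i] -= slice;
                if (remaining[i] == 0) {  /* record its completion time */
                    finish[i] = time;
                    done++;
                }
            }
        }

        double total_wait = 0.0;
        for (int i = 0; i < n; i++)
            total_wait += finish[i] - burst[i];   /* waiting = turnaround - burst */
        return total_wait / n;
    }

    int main(void)
    {
        int ex2[] = {19, 10, 1};   /* Example 2 from the slides */
        printf("RR, q = 5: average wait = %.1f ms\n", rr_avg_wait(ex2, 3, 5));
        return 0;
    }

For Example 2 with q = 5 this gives waiting times of 11, 11 and 10 ms, an average of about 10.7 ms: worse than SJF, but achieved without knowing the burst lengths in advance.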