Operating Systems and Networks Assignment 9

Spring Term 01 Operating Systems and Networks Assignment 9
Assigned on: rd May 01
Due by: 10th May 01

1 General Operating Systems Questions

a) What is the purpose of having a kernel?
Answer: A kernel is a piece of software that provides safe multiplexing of, and access to, the underlying hardware.

(a) What is the least functionality a kernel usually has to provide (hint: a minimal kernel usually provides three properties)?
Answer: A kernel provides at least basic scheduling, some form of message passing (or the setup of message channels), and protection (only the kernel sets up page table entries).

(b) What does this functionality provide to the rest of the system?
Answer: This functionality provides a secure environment for user space. The underlying hardware is multiplexed. Every process can get memory but cannot overwrite any other process's memory. Basic scheduling makes sure that every process gets some amount of CPU cycles. Basic messaging makes sure that you can build applications which act as servers; those servers export functionality/services which can be used by other applications.

(c) Where does the rest of the system reside?
Answer: The rest of the system lives in user space, which means in libraries and applications. Applications may provide services to other applications.

(d) How does the rest of the system interact with the kernel?
Answer: The user-space part of the system interacts with the kernel via system calls. System calls are the gateway to the kernel (see the sketch after this question).

(e) Why does it need to interact with the kernel?
Answer: Getting more memory means setting up page table entries, which can only be done by the kernel. Setting up a message channel cannot be done in user space if user-space processes are completely isolated. In general, user-space libraries or applications have to interact with the kernel if they need resources which can only be accessed by the kernel.

Note: The description above covers the minimum functionality which is usually provided by a kernel (a microkernel). Linux is a monolithic kernel; it provides a lot more functionality to user space.
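As a small illustration of the system-call gateway described in (d) and (e), the following sketch (assuming Linux with glibc; not part of the original answer) performs the same kernel operation twice: once through the libc wrapper write() and once through the generic syscall() entry point with the SYS_write number.

    /* Minimal sketch (assuming Linux/glibc): user space can only reach the
     * kernel through the system-call gateway. The same kernel service is
     * invoked via the libc wrapper and via the raw syscall() entry point. */
    #define _DEFAULT_SOURCE          /* for the syscall() declaration */
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void)
    {
        const char msg1[] = "via the libc wrapper write()\n";
        const char msg2[] = "via the raw syscall() gateway\n";

        /* libc wrapper: eventually executes the same trap into the kernel */
        write(STDOUT_FILENO, msg1, sizeof(msg1) - 1);

        /* raw system call: SYS_write is the kernel's entry number for write */
        syscall(SYS_write, STDOUT_FILENO, msg2, sizeof(msg2) - 1);

        return 0;
    }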

b) The kernel can do and access everything. It exports functionality to user applications via syscalls. Does that mean that every user application can execute code in the kernel by doing a syscall?
Answer: In theory yes, but the real answer is no. System calls can be performed by every user-space application. However, the kernel has to check the arguments passed by the calling user application. It also has to check whether the user launching the application is privileged enough to perform some operations. In Linux we have the notion of root; for some operations, the kernel checks whether the calling process has root rights.

c) Can you compile glibc without the kernel sources?
Answer: You can compile glibc without the kernel sources, but you need the kernel headers. The reason is that there is an interface between the kernel and the user-space library glibc: the library has to know the system calls.

2 Scheduling

The following table describes the tasks to be scheduled. It contains the entry times of the tasks, their duration/execution times and their deadlines. All time values are given in ms.

Task Number   Entry   Execution Time   Deadline
1             0       0                70
2             0       0                90
3             0       0                0
4             0       10               0
5             0       0                10

Scheduling decisions are performed every 10 ms. You can assume that scheduling decisions take essentially no time. The deadline values are absolute.

2.1 Creating schedules

In the following you are asked to create different types of schedules. Please visualize your schedule (as in the lecture) and also answer the questions below.

Types of schedules:
a) RR (Round Robin)
b) EDF (Earliest Deadline First)
c) SRTF (Shortest Remaining Time First)

Please answer the following questions for each of the schedules:
a) How big is the wait time per task?
b) How big is the average wait time?
c) How big is the turnaround time per task?
d) How is the response time computed for this scheduler? If possible, calculate the response time per task.

Answer: Definitions of the terms turnaround time, waiting time and response time, according to the Silberschatz book:

The turnaround time is the time between the submission or arrival of a job and the completion of its execution. The waiting time is the time the job spends runnable but not executing; this is the sum of all periods the job spends in the run queue without actually running. The response time is the time it takes from the point where a job becomes runnable to the point where output appears. In general this applies to interactive tasks, for example: how long does it take from the user pressing a key to the character being displayed on the screen?

a) RR (Round Robin)
We assume that a task which enters the system can be scheduled immediately and that it is placed at the beginning of the scheduling ring. This leads to the following schedule:
[Schedule visualization (Gantt chart, time axis 0-100 ms in 10 ms steps) not reproduced here.]
(a) How big is the wait time per task?
1: 60 ms, 2: 0 ms, 3: 0 ms, 4: 0 ms, 5: 0 ms
(b) How big is the average wait time?
(60 ms + 0 ms + 0 ms + 0 ms + 0 ms) / 5 = 6 ms
(c) How big is the turnaround time per task?
1: 90 ms, 2: 70 ms, 3: 60 ms, 4: 10 ms, 5: 60 ms
(d) The response time
In the worst case, the task has just lost its timeslice (has just been preempted by the scheduler) when the user presses a key. If the task is very fast, it can produce output immediately once it is running again. So with 5 jobs we have to wait for the other 4 jobs which are running in the meantime: with 5 jobs scheduled in RR and a time quantum of 10 ms, the worst-case response time is (5 - 1) * 10 ms = 40 ms. (A small simulation sketch for RR follows below.)
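As an illustration (not part of the original solution), the following sketch shows how the RR waiting and turnaround times can be computed by simulating the schedule with a 10 ms quantum and a simple FIFO ready queue. The task parameters in the code are placeholders, not the values from the table above.

    /* Hypothetical round-robin simulation sketch: computes per-task waiting
     * and turnaround times for made-up task parameters, using a 10 ms
     * quantum. Newly arriving and preempted tasks go to the queue tail. */
    #include <stdio.h>

    #define NTASKS  3
    #define QUANTUM 10

    typedef struct {
        int entry;      /* arrival time in ms   */
        int burst;      /* total execution time */
        int remaining;  /* execution time left  */
        int finish;     /* completion time      */
    } Task;

    int main(void)
    {
        /* placeholder values, NOT the assignment's table */
        Task t[NTASKS] = { {0, 30, 30, 0}, {10, 20, 20, 0}, {20, 10, 10, 0} };
        int queue[64], head = 0, tail = 0, in_queue[NTASKS] = {0};
        int time = 0, done = 0;

        while (done < NTASKS) {
            /* admit every task that has arrived by now */
            for (int i = 0; i < NTASKS; i++)
                if (!in_queue[i] && t[i].remaining > 0 && t[i].entry <= time) {
                    queue[tail++] = i;
                    in_queue[i] = 1;
                }
            if (head == tail) { time += QUANTUM; continue; }  /* CPU idle */

            int cur = queue[head++];
            int slice = t[cur].remaining < QUANTUM ? t[cur].remaining : QUANTUM;
            time += slice;
            t[cur].remaining -= slice;

            /* tasks arriving during the slice are queued before the
             * re-queued task */
            for (int i = 0; i < NTASKS; i++)
                if (!in_queue[i] && t[i].remaining > 0 && t[i].entry <= time) {
                    queue[tail++] = i;
                    in_queue[i] = 1;
                }

            if (t[cur].remaining == 0) { t[cur].finish = time; done++; }
            else queue[tail++] = cur;   /* preempted: back to the tail */
        }

        for (int i = 0; i < NTASKS; i++) {
            int turnaround = t[i].finish - t[i].entry;   /* finish - arrival */
            int waiting = turnaround - t[i].burst;       /* ready but not running */
            printf("task %d: waiting %d ms, turnaround %d ms\n",
                   i + 1, waiting, turnaround);
        }
        return 0;
    }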

b) EDF (Earliest Deadline First)
The deadlines are absolute if the tasks are non-periodic. If two tasks have the same deadline, we assume that the task found first is the one that runs. This leads to the following schedule:
[Schedule visualization (Gantt chart, time axis 0-110 ms in 10 ms steps) not reproduced here.]
(a) How big is the wait time per task?
1: 0 ms, 2: 60 ms, 3: 10 ms, 4: 0 ms, 5: 0 ms
(b) How big is the average wait time?
(0 ms + 60 ms + 10 ms + 0 ms + 0 ms) / 5 = 6 ms
(c) How big is the turnaround time per task?
1: 60 ms, 2: 80 ms, 3: 0 ms, 4: 10 ms, 5: 60 ms
(d) The response time
This is less obvious than in RR scheduling. Since the tasks are scheduled by their deadlines, we can say that if the schedule is feasible, the response time is at most (deadline - entry time - execution time). This leads to:
1: 0 ms, 2: 70 ms, 3: 10 ms, 4: 0 ms, 5: 0 ms

c) SRTF (Shortest Remaining Time First)
The job with the shortest remaining execution time is always chosen for execution. This scheduling might lead to starvation, as long jobs might never be scheduled because of short-running ones. SRTF can lead to the following schedule:
[Schedule visualization (Gantt chart, time axis 0-110 ms in 10 ms steps) not reproduced here.]
(a) How big is the wait time per task?
1: 0 ms, 2: 0 ms, 3: 0 ms, 4: 10 ms, 5: 0 ms
(b) How big is the average wait time?
(0 ms + 0 ms + 0 ms + 10 ms + 0 ms) / 5 = 18 ms
(c) How big is the turnaround time per task?
1: 80 ms, 2: 0 ms, 3: 0 ms, 4: 0 ms, 5: 60 ms
(d) The response time
There is no formalism for computing the response time under SRTF. As SRTF can lead to starvation, it may happen that some jobs are never scheduled; therefore the response time cannot be estimated.

2.2 General Questions

a) What is the problem with the shortest job first (SJF) scheduling policy?
Answer: Long jobs are potentially never scheduled if short jobs keep entering the system continuously and there is no preemption.

b) What is the advantage of SJF?
Answer: It minimizes the waiting time and the turnaround time.

c) What is the benefit of round robin?
Answer: It is easy to implement, understand and analyze, and it has a good response time.

d) What is the big conceptual difference between EDF and RR?
Answer: EDF is a realtime-based scheduling strategy; therefore tasks have priorities. RR treats all tasks the same: it is not a realtime scheduling strategy and there is no notion of priorities. (A small sketch of the EDF selection rule follows below.)
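As an illustration of the difference discussed in d), here is a minimal sketch (not part of the original solution) of the EDF decision rule: at every scheduling point, among the tasks that have arrived and are not yet finished, the one with the earliest absolute deadline is chosen. The task values are placeholders.

    /* Minimal EDF selection sketch (illustration only, placeholder data):
     * pick the arrived, unfinished task with the earliest absolute deadline.
     * RR, by contrast, simply cycles through the ready tasks regardless of
     * deadlines. */
    #include <stdio.h>

    typedef struct {
        int entry;      /* arrival time (ms)             */
        int remaining;  /* remaining execution time (ms) */
        int deadline;   /* absolute deadline (ms)        */
    } Task;

    /* returns the index of the task EDF would run at time 'now', or -1 */
    static int edf_pick(const Task *t, int n, int now)
    {
        int best = -1;
        for (int i = 0; i < n; i++) {
            if (t[i].entry > now || t[i].remaining == 0)
                continue;                  /* not arrived or already done */
            if (best == -1 || t[i].deadline < t[best].deadline)
                best = i;                  /* earlier absolute deadline wins */
        }
        return best;
    }

    int main(void)
    {
        /* placeholder values, not the assignment's table */
        Task t[] = { {0, 30, 90}, {10, 20, 50}, {20, 10, 40} };
        int picked = edf_pick(t, 3, 20);
        printf("at t=20 ms EDF runs task %d\n", picked + 1);  /* task 3 */
        return 0;
    }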

e) Why do hard realtime systems often not have dynamic scheduling?
Answer: In a dynamic setup it is not possible to guarantee the feasibility of a correct schedule. That means that if new tasks enter the system, and the system allows that, it cannot be guaranteed that every task still meets its deadline. The other approach is to first compute whether there is enough time to admit a new task; if not, creation of the new task will fail. This guarantees that the already running tasks will always meet their deadlines, but it is possible that an important task cannot be created.

2.3 Realtime Scheduling

You are designing an HD-TV. To keep the production costs low, it has only one CPU, which must perform the following tasks:
- Decode video chunks (takes 0 ms, has to be done every 00 ms)
- Update screen (takes 0 ms, has to be done every 00 ms)
- Handle user input (takes 10 ms, has to be done every 0 ms)

a) Show that it is possible to schedule all tasks in such a way that all deadlines are met, using Rate Monotonic Scheduling. Do not present a working schedule.
Answer: It can be shown that N periodic tasks with computation times C_i and periods P_i can be scheduled by ordering the tasks according to their periods if

    U = \sum_{i=1}^{N} \frac{C_i}{P_i} \le N\left(2^{1/N} - 1\right)

For the given values we get a utilization below the bound of 0.78 (cf. the last question). (A sketch of this utilization test follows at the end of this section.)

b) Show that, when the number of tasks approaches infinity, all tasks can be scheduled without violating deadlines if the utilization is below 69.3%. Explain how one can arrive at this result.
Answer:

    \lim_{N \to \infty} N\left(2^{1/N} - 1\right) = \ln(2) \approx 0.693

c) Now assume your boss wants you to add DRM to your TV. This requires an extra task with a period of 00 ms and an execution time of 100 ms.
(a) What is the utilization of the system now?
(b) What is the upper bound according to the theorem used in b)?
(c) If you try to build a working schedule by hand, you will see that it is still possible to create one. How can this be explained?
Answer:

Task i   Execution Time C_i   Period T_i   C_i/T_i   Utilization   Upper Bound
1        0                    00           0.1       0.1           1.00
2        0                    00           0.0       0.0           0.8
3        10                   0            0.        0.            0.78
4        100                  00           0.77      0.77          0.7

The upper bound given above states that if the utilization U for a set of tasks is below N(2^{1/N} - 1), then it is always possible to schedule those tasks. However, it is not a tight upper bound, i.e., there are many cases where feasible schedules can be found even though the utilization reaches 1.
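The following is a small illustrative sketch (not part of the original solution) of the utilization test used in a): it sums C_i/P_i and compares the result against the bound N(2^{1/N} - 1). The task values in the code are placeholders, not the assignment's numbers.

    /* Sketch of the rate-monotonic schedulability test: the total
     * utilization must not exceed N*(2^(1/N) - 1). Placeholder task values.
     * Compile with -lm for pow(). */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        /* execution times C_i and periods P_i in ms (placeholders) */
        double C[] = { 20.0, 20.0, 10.0, 100.0 };
        double P[] = { 200.0, 200.0, 50.0, 400.0 };
        int n = sizeof(C) / sizeof(C[0]);

        double U = 0.0;
        for (int i = 0; i < n; i++)
            U += C[i] / P[i];                           /* sum of C_i/P_i */

        double bound = n * (pow(2.0, 1.0 / n) - 1.0);   /* N*(2^(1/N) - 1) */

        printf("utilization U = %.3f, RM bound = %.3f\n", U, bound);
        if (U <= bound)
            printf("schedulable under RM (sufficient test)\n");
        else
            printf("bound exceeded: test inconclusive, a schedule may still exist\n");
        return 0;
    }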

3 Priority Inversion

Please explain in detail:

a) What is priority inversion?
Answer: A lower-priority task is running while a high-priority task is not running, although it would be runnable. The two tasks are independent, and still the lower-priority task runs instead of the high-priority task.

b) What is the problem with priority inversion?
Answer: High-priority tasks cannot proceed because lower-priority tasks are running. Blocking high-priority tasks is very bad in general.

c) What causes priority inversion?
Answer: A low-priority task is holding a lock. Later, a high-priority task tries to acquire the same lock, which it cannot, because the lock is held by the low-priority task. In the meantime, a middle-priority task enters the system and the low-priority task is preempted, since it has the lower priority. The middle-priority task and the high-priority task are completely independent, and still the middle-priority task runs instead of the high-priority one. In effect it looks as if the middle-priority task had a higher priority than the high-priority one, so we have a priority inversion.

d) How can this problem be solved?
Answer: This can be solved by a priority inheritance scheme. The task holding the lock temporarily inherits the priority of the task which wants to acquire the lock, if that priority is higher. This makes sure that no middle-priority task can suspend the low-priority task holding the lock.

e) Priority inheritance
(a) How many levels of priority inheritance do you need?
Answer: In general you need as many levels as there are possible priority levels. If your tasks can have five priority levels (for example), then you need five levels of priority inheritance.
(b) Why?
Answer: We know that the low-priority task holding the lock which a high-priority task wants to acquire has to inherit that task's priority. However, if, for example, the low-priority task holds two locks and a high-priority task wants to acquire one of them, it inherits the high priority. If a very-high-priority task then enters the system and wants to acquire the second lock, the low-priority task inherits the very high priority. After the low-priority task releases the second lock, its priority has to be reset. But we do not want to reset it to the original low priority, since it is still holding the first lock, which is also requested by a high-priority task. That means we should reset the priority from very high to high. In general this can happen with all possible priority levels, since a low-priority task might hold many locks.

4 fork()

Creating new processes in the Unix/Linux world is done using fork(). fork() clones an existing process and adds it to the runqueue, rather than really creating a new one. Since fork() clones a process, both processes execute the line after the fork() call. They then need to distinguish whether they are the parent or the child process. This can be done by checking the return value of fork(): to the parent process, fork() returns the PID of the child process; to the child process, fork() returns 0.

4.1 Playing with fork()

4.1.1 Calling fork() once

Create a program which forks itself once. The parent process should output "I'm the parent and my child's PID is <pid>". The child should output "I'm the child and my PID is <pid>". Do the PIDs match? (A possible solution is sketched below.)
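One possible solution sketch for 4.1.1 (not the only way to write it); it relies only on fork(), getpid() and wait():

    /* Sketch for exercise 4.1.1: fork once and let each process identify
     * itself. fork() returns the child's PID to the parent and 0 to the
     * child. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();

        if (pid < 0) {                       /* fork failed */
            perror("fork");
            return EXIT_FAILURE;
        } else if (pid == 0) {               /* child branch */
            printf("I'm the child and my PID is %d\n", getpid());
        } else {                             /* parent branch */
            printf("I'm the parent and my child's PID is %d\n", pid);
            wait(NULL);                      /* reap the child */
        }
        return EXIT_SUCCESS;
    }

The PID printed by the child (obtained via getpid()) should match the PID that fork() returned to the parent.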

4.1.2 fork() multiple times

What do you expect to happen here? Please explain what you think will happen.

    int main(int argc, char **argv)
    {
        while (1) {
            fork();
        }
    }

Answer: If you execute this code, your computer will be (almost) dead. Every child forks new children in a loop, and these new children fork new children in a loop as well. This consumes too many resources in a very short time: not only memory, but also CPU cycles, page table entries, descriptors, and so on.

4.1.3 Executing ls -l from your program

Write a simple program (main function) which executes the ls program. Look up the manual page for the exec family (man exec). After the exec call in your main function, add a printf which says that you have called exec now. What do you notice?
Answer: The line after the exec call of your program will not be executed. exec replaces the currently executed program with a new one. It does not automatically fork a new process to execute ls -l.

How can you fix that?
Answer: If you want your program to continue to run, first fork and execute exec in the child. If you want your program to wait until the child process has terminated, you can call one of the wait functions (see man wait).

4.2 Reading the ls output from a pipe

Create a simple application which opens a pipe, executes ls and reads its output via the pipe. Your application should write a sentence and the output of ls to the console (example: "Output from ls: <ls output>"). A combined sketch for 4.1.3 and 4.2 follows below.
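One possible combined sketch for 4.1.3 and 4.2 (not the official solution): the parent creates a pipe and forks; the child redirects its standard output into the pipe and calls execlp() to run ls -l; the parent reads the child's output from the pipe, prints it after a short sentence, and waits for the child.

    /* Sketch for 4.1.3/4.2: run "ls -l" in a child process and read its
     * output through a pipe. fork() creates the child, dup2() redirects the
     * child's stdout into the pipe, execlp() replaces the child image. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd[2];                       /* fd[0]: read end, fd[1]: write end */
        if (pipe(fd) < 0) { perror("pipe"); return EXIT_FAILURE; }

        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return EXIT_FAILURE; }

        if (pid == 0) {                  /* child: becomes "ls -l" */
            close(fd[0]);                /* child does not read from the pipe */
            dup2(fd[1], STDOUT_FILENO);  /* stdout now goes into the pipe */
            close(fd[1]);
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");            /* only reached if exec failed */
            _exit(EXIT_FAILURE);
        }

        /* parent: read the child's output and echo it */
        close(fd[1]);                    /* parent does not write to the pipe */
        printf("Output from ls:\n");

        char buf[4096];
        ssize_t n;
        while ((n = read(fd[0], buf, sizeof(buf))) > 0)
            fwrite(buf, 1, (size_t)n, stdout);

        close(fd[0]);
        wait(NULL);                      /* reap the child (see man wait) */
        return EXIT_SUCCESS;
    }

Alternatively, popen("ls -l", "r") from <stdio.h> wraps the same pipe/fork/exec sequence behind a single call.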