CSL373: Lecture 5 Deadlocks (no process runnable) + Scheduling (> 1 process runnable)


Past & Present

Have looked at two constraints:
- Mutual exclusion constraint between two events: a requirement that they do not overlap in time
  - Enforced using scheduling, locks, semaphores, monitors
- Precedence constraint between two events: a requirement that one completes before the other
  - (Usually) enforced using scheduling or semaphores

Synchronization primitive ordering: atomic instructions can implement locks; locks can implement semaphores (lock + integer counter) or monitors (one implicit lock), and vice versa (of course).

Today:
- Deadlock: what to do when many threads make no progress
- Scheduling: what to do when many threads want progress

Deadlock

Graphically, deadlock is caused by a directed cycle in inter-thread dependencies. e.g., T1 holds resource R1 and is waiting on R2, while T2 holds R2 and is waiting on R1:

    T1 --waiting-for--> R2 --held-by--> T2
    T2 --waiting-for--> R1 --held-by--> T1

- e.g., R1 = disk, R2 = printer
- No progress possible
- Even simple cases can be non-trivial to diagnose
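This kind of wait-for cycle can be detected mechanically. A minimal sketch, assuming the simplified case where each thread waits on at most one other thread (the function name and thread labels are illustrative, not from the lecture):

```python
def has_cycle(waits_for):
    """waits_for maps each thread to the thread it waits on (None = runnable)."""
    for start in waits_for:
        seen = set()
        t = start
        while t is not None:
            if t in seen:
                return True  # followed waiting-for edges back to a visited thread
            seen.add(t)
            t = waits_for.get(t)
    return False

# T1 waits for T2 (the holder of R2); T2 waits for T1 (the holder of R1): deadlock.
print(has_cycle({"T1": "T2", "T2": "T1"}))   # True
# If T2 were runnable instead, there is no cycle.
print(has_cycle({"T1": "T2", "T2": None}))   # False
```

With multiple outstanding waits per thread this becomes general cycle detection (e.g., DFS) on the resource-allocation graph, but the idea is the same.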

An example

    Lock l1, l2;

    void p() {
        l1->acquire();
        l2->acquire();
        <ops on shared state>
        l2->release();
        l1->release();
    }

    void q() {
        l2->acquire();
        l1->acquire();
        <ops on shared state>
        l1->release();
        l2->release();
    }

p() takes l1 then l2; q() takes l2 then l1. If p() holds l1 while q() holds l2, each waits forever for the other's lock.

Deadlock Prevention: eliminate one condition

- Problem: limited access
  - Solutions: buy more resources, split the resource into pieces, or split its usage temporally to make it appear infinite in number
- Problem: non-preemption
  - Solution: create copies or virtualize
  - Threads: each thread has its own copy of the registers, so no lock needed
  - Physical memory: virtualized with VM; can take a physical page away and give it to another process!
- Problem: hold + wait
  - Solution: acquire resources all at once (wait on many without locking any; must know all that are needed)
- Problem: circularity
  - Possible solutions:
    - A single lock for the entire system (problems?)
    - Partial ordering of resources (next)

Partial orders: simple deadlock control

- Order the resources (lock1, lock2, ...)
- Acquire resources in strictly increasing (or decreasing) order
- Intuition: number all the nodes in a graph; to form a cycle, there has to be at least one edge from a high number to a low number (or to the same node)
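The ordering discipline can be sketched with real locks; sorting by `id()` as the global order and the helper names are illustrative assumptions:

```python
import threading

def acquire_in_order(*locks):
    """Acquire locks in one fixed global order (here: by object id)."""
    ordered = sorted(locks, key=id)
    for l in ordered:
        l.acquire()
    return ordered

def release_all(locks):
    for l in reversed(locks):
        l.release()

l1, l2 = threading.Lock(), threading.Lock()

# Both p() and q() would call acquire_in_order(...) with whatever argument
# order is natural to them; the sort makes the actual acquisition order
# identical in both, so no waiting-for cycle can form.
held = acquire_in_order(l2, l1)
release_all(held)
```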

What if you need to acquire locks in different orders? Use trylock()

- If a trylock() fails, release all previously held locks and reacquire them in the correct order
- Need to make sure that the state is sane when releasing the locks
- Remember that between releasing and reacquiring the locks, the state could have changed

Example: out-of-order locks

    // l1 protects d1, l2 protects d2. Lock order: l1, l2.
    // Decrements d2, and if the result is 0, increments d1.
    void increment() {
        l2->acquire();
        int t = d2;
        t--;
        if (t == 0) {
            if (trylock(l1)) {
                d1++;
            } else {
                // Wrong order: back off, then reacquire as l1, l2.
                l2->release();
                l1->acquire();
                l2->acquire();
                t = d2;
                t--;            // recheck: d2 may have changed while unlocked
                if (t == 0)
                    d1++;
            }
            l1->release();
        }
        d2 = t;
        l2->release();
    }

Two-phase locking: simple deadlock control

Acquire all resources; if you would block on any, release all and retry:

    print_file:
        lock(file);
        acquire printer;
        acquire disk;
        ...do work...
        release all;

- If any acquire fails, release all previously acquired
- Could we do this without a priori knowledge of what's needed?
- Pro: dynamic, simple, flexible
- Con:
  - Cost grows with the number of resources
  - Lengthens the critical section
  - Abstraction breaking: hard to know what's needed a priori
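The acquire-all-or-back-off loop might look like this; Python's non-blocking `lock.acquire(blocking=False)` stands in for trylock, and `acquire_all` is an assumed helper name:

```python
import threading

def acquire_all(locks):
    """Phase 1: take every lock, or take none and retry from scratch."""
    while True:
        taken = []
        for l in locks:
            if l.acquire(blocking=False):
                taken.append(l)
            else:
                # Would block: release everything acquired so far and retry.
                for held in taken:
                    held.release()
                break
        else:
            return  # got them all

file_lock, printer, disk = (threading.Lock() for _ in range(3))
acquire_all([file_lock, printer, disk])
# ... do work; phase 2 is releasing ...
for l in (file_lock, printer, disk):
    l.release()
```

Note the cost the slide alludes to: under contention, two threads can back off and retry against each other indefinitely (livelock), so real systems add randomized delay or an ordering tiebreak.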

Deadlock Detection

    1: Work = Avail; Finish[i] = False for all i
    2: Find i such that Finish[i] = False and Request[i] <= Work.
       If no such i exists, go to step 4
    3: Work = Work + Alloc[i]; Finish[i] = True; go to step 2
    4: If Finish[i] = False for some i, the system is deadlocked.
       Moreover, Finish[i] = False implies that process i is deadlocked.

When to run?
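The four steps translate almost line for line into code; here `avail`, `alloc[i]`, and `request[i]` are vectors with one entry per resource type, and the function name is an assumption:

```python
def deadlocked(avail, alloc, request):
    """Return the indices of deadlocked processes ([] = no deadlock)."""
    n = len(alloc)
    work = list(avail)                       # step 1
    finish = [False] * n
    progress = True
    while progress:                          # steps 2-3
        progress = False
        for i in range(n):
            if not finish[i] and all(r <= w for r, w in zip(request[i], work)):
                work = [w + a for w, a in zip(work, alloc[i])]
                finish[i] = True
                progress = True
    return [i for i in range(n) if not finish[i]]   # step 4

# Two processes, two resource types; each holds one unit and requests the
# other's, so neither request can ever be satisfied.
print(deadlocked(avail=[0, 0],
                 alloc=[[1, 0], [0, 1]],
                 request=[[0, 1], [1, 0]]))   # [0, 1]
```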

Detection + correction

- Terminate threads and release their resources; repeat until the deadlock goes away
  - Con: blowing away threads leaves the system in what state? Wild guess: probably not a sane one
  - Stylized use: acquire all locks, then modify state. Can always blow away a thread if an acquire fails (basically two-phase locking with thread termination)
- More fancy: roll back the actions of deadlocked threads
  - Acquire locks however you like, but only modify state using invertible actions
  - Get stuck? The system kills the thread ("bad thread") and inverts its actions. Repeat as necessary
  - Each thread now behaves like a transaction that either completes in its entirety or not at all (easy for the programmer)
  - Problem: tracking actions and constructing inverses (see databases)

Dirty secret: the most common schemes

- Prevention: test; if an app deadlocks, kill it
- Pro: no complex machinery. Everyone understands testing
- Con: the space of interleavings is huge
- Throw deadlock in the same box as infinite loops and do what you usually do:
  - Works for some applications (emacs, gcc, ...): just rerun
  - Con: not a valid solution for many applications (example?)

Concurrency Summary

Concurrency errors, one way to view them: a thread checks condition(s) or examines value(s) and continues with the implicit assumption that the result still holds, while another thread modifies the state.

Fixes?
- Rule 1: don't do concurrency (poor utilization, or impossible)
- Rule 2: don't share state (may be impossible)
- Rule 3: if you violate 1 and 2, use one big lock [coarse-grained] (could lead to poor utilization, e.g., Linux on multi-core)
- Last resort: many locks (fine-grained: good parallelism, but error prone)

Scheduling: what job to run?

We'll have three main goals (many others are possible):
- Minimize response/completion time
  - Response time = what the user sees: elapsed time to echo a keystroke to the editor (acceptable delay around 50-100 ms)
  - Completion time: start to finish of a job (e.g., a gcc run)
- Maximize throughput: operations (= jobs) per second
  - Minimize overhead (context switching)
  - Use resources efficiently (CPU, disk, cache, ...)
- Fairness: share the CPU equitably

Tension: unfairness might buy better throughput or better response times.

When does the scheduler make decisions?

Non-preemptive minimum: when a process voluntarily relinquishes the CPU
- The process blocks on an event (e.g., I/O or synchronization)
- The process terminates (exit)

Preemptive minimum: all of the above, plus:
- An event completes and a process moves from blocked to ready (I/O completes, child exits, unlock)
- Timer interrupts
- Priorities: one process can be interrupted in favor of another

(State diagram: running -> blocked on I/O, join, or block; running -> ready on yield or preemption; ready -> running when scheduled; blocked -> ready when the event completes.)

Can think of: I/O device = special CPU

- An I/O device is a special-purpose CPU: a disk drive can only run a disk job, a printer only a print job, ...
- Implication: a computer system with n I/O devices is like an (n+1)-CPU multiprocessor
- Result: all I/O devices + CPU busy = (n+1)-fold speedup!
- e.g., grep (blocked on disk) and matrix multiplication (running on CPU): overlap them just right, and the average completion time is halved
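A back-of-envelope check of the overlap claim, with assumed 10-second jobs (the figures are illustrative, not from the lecture):

```python
# One CPU-bound job (matrix multiply) and one disk-bound job (grep),
# each needing 10 s of its own resource.
cpu_job, io_job = 10.0, 10.0

# Run one after the other: the machine is busy for the sum of the two.
serial_elapsed = cpu_job + io_job          # 20.0 s

# Perfectly overlapped: CPU and disk work in parallel, so total elapsed
# time is just the longer of the two.
overlapped_elapsed = max(cpu_job, io_job)  # 10.0 s

print(serial_elapsed, overlapped_elapsed)  # 20.0 10.0: elapsed time halved
```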

Process *model*

- A process alternates between CPU bursts and I/O bursts
  - CPU-bound job: long CPU bursts (e.g., matrix multiplication)
  - I/O-bound job: short CPU bursts (e.g., emacs)
- An I/O burst leaves the process idle, so we can switch to another process "for free"
- Problem: we don't know a job's type before running it
- An underlying assumption: response time matters most for interactive jobs, which will be I/O bound

Universal scheduling theme

General multiplexing theme: what's the best way to run n processes on k nodes (k < n)? We're (probably) always going to do a bad job.

- Problem 1: mutually exclusive objectives, so there is no one best way
  - Latency vs. throughput
  - Speed vs. fairness
- Problem 2: incomplete knowledge
  - The user determines what's most important; we can't read minds
  - We'd need future knowledge to make decisions and evaluate their impact; instead, assume past = future
- Problem 3: real systems are mathematically intractable
  - Scheduling is very ad hoc: try and see

Scheduling

Until now: processes. From now on: resources.
- Resources are things operated on by processes: e.g., CPU time, disk blocks, memory pages, network buffers
- Resources fall into two categories:
  - Non-preemptible: once given, can't be reused until the process gives it back. Locks, disk space for files, the terminal
  - Preemptible: once given, can be taken away and returned. Register file, CPU, memory
- The split is a bit arbitrary, since you can frequently convert non-preemptible to preemptible: create a copy and use indirection
  - e.g., physical memory pages: use virtual memory to move page contents transparently to/from disk

How to allocate resources?

- Space sharing (horizontal): how should the resource be split up?
  - Used for resources that are not easily preemptible, e.g., disk space, the terminal
  - Or not *cheaply* preemptible, e.g., divide memory up rather than swap the entire memory to disk on each context switch
- Time sharing (vertical): given some partitioning, who gets to use a given piece (and for how long)?
  - Happens whenever there are more requests than can be immediately granted
  - Implication: the resource cannot be divided further (CPU, disk arm), or it's easily/cheaply preemptible (e.g., registers)

First come first served (FCFS or FIFO)

- Simplest scheduling algorithm: run jobs in the order they arrive
  - Uni-programming: run until done (non-preemptive)
  - Multi-programming: put a job at the back of the queue when it blocks on I/O
- Advantage: very simple
- Disadvantage: wait time depends on arrival order; unfair to later jobs (worst case: a long job arrives first)
  - e.g., three jobs A (24 units), B (3 units), and C (3 units) arrive nearly simultaneously and run in that order. What's the average wait time?
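Working the example (the helper name is illustrative): under FCFS, each job waits for the total burst time of everything queued ahead of it:

```python
def fcfs_wait_times(bursts):
    """Wait time of each job = total burst time of the jobs ahead of it."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)
        elapsed += b
    return waits

w = fcfs_wait_times([24, 3, 3])    # long job A first
print(w, sum(w) / len(w))          # [0, 24, 27] average 17.0

w = fcfs_wait_times([3, 3, 24])    # long job last
print(w, sum(w) / len(w))          # [0, 3, 6] average 3.0
```

The same three jobs average 17 time units of waiting when the long job goes first and only 3 when it goes last, which is exactly the unfairness the slide points out.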

Summary

- Mutual exclusion introduces dependencies
  - Circular dependencies = deadlock
  - We can either prevent circularities or recover from them
- More than one runnable process = choice = scheduling
  - We'll first look at traditional systems
  - Goals: response time, throughput, fairness
- Next time: specific scheduling algorithms