COP 5611 Operating Systems Spring 2010 Dan C. Marinescu Office: HEC 439 B Office hours: M-Wd 2:00-3:00 PM

Lecture 7
Last time: thread coordination
Today: thread coordination; scheduling; multi-level memories; the I/O bottleneck
Next time:

Hardware support for atomic actions
- RSM (Read and Set Memory) instruction
- TST (Test and Set) instruction
- Two primitives, ACQUIRE(lock) and RELEASE(lock), use these atomic instructions to manipulate the lock.
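As a sketch of how ACQUIRE and RELEASE can be layered on a test-and-set style instruction, here is a minimal Python simulation. The hardware atomicity of RSM/TST is stood in for by a guard lock, and the class and method names are illustrative, not part of the lecture's pseudocode:

```python
import threading

class SpinLock:
    """Sketch of ACQUIRE/RELEASE built on a test-and-set primitive.
    A Python lock stands in for the hardware atomicity of RSM/TST."""

    def __init__(self):
        self._flag = False                # the lock variable itself
        self._guard = threading.Lock()    # simulates hardware atomicity

    def _test_and_set(self):
        # Atomically read the old value and set the flag (the TST/RSM step).
        with self._guard:
            old = self._flag
            self._flag = True
            return old

    def acquire(self):
        # ACQUIRE: spin (busy-wait) until the value read back is False.
        while self._test_and_set():
            pass

    def release(self):
        # RELEASE: clear the flag so another thread's test-and-set succeeds.
        with self._guard:
            self._flag = False

lock = SpinLock()
lock.acquire()   # lock is free, so this returns immediately
lock.release()
```

The key point the simulation preserves is that reading the old value and setting the flag happen as one indivisible step; without that, two threads could both observe the lock as free.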


Processor sharing strategies
The previous solution assumes that each thread runs on a different processor and has the luxury of a spin lock and busy waiting. Now we consider sharing a processor among several threads, which requires several new functions:
- Strategy 1: a thread voluntarily releases control of the processor.
  - Allow a thread to wait for an event.
  - Allow several threads running on the same processor to wait for a lock.
- Strategy 2: force a thread to release control of the processor.
What needs to be done to switch the processor from one thread to another:
1. Save the state of the current thread.
2. Schedule another thread.
3. Start running the new thread.

The kernel
The role of a kernel: it controls virtualization:
- processor sharing among threads;
- virtual memory management;
- I/O operations.
Two modes of running: user (unprivileged) and kernel (privileged).
Two types of threads: user-layer threads and processor-layer threads.
Open questions:
- How to create and terminate a thread?
- If multiple threads are RUNNABLE, who decides which one gets control of the processor?
- What if no threads are ready to run?

The procedure followed when a kernel starts:

procedure RUN_PROCESSORS()
    for each processor do
        allocate stack and set up processor_thread   /* allocation of the stack is done at the processor layer */
        shutdown ← FALSE
        SCHEDULER()
        deallocate processor_thread stack            /* deallocation of the stack is done at the processor layer */
        halt processor

The processor_thread and the SCHEDULER
Thread creation:
    thread_id ← ALLOCATE_THREAD(starting_address_of_procedure, address_space_id)
If we want to create/terminate threads dynamically, we have to:
- allow a thread to self-destroy and clean up: EXIT_THREAD;
- allow a thread to terminate another thread of the same application: DESTROY_THREAD.
What if no thread is able to run?
- Create a dummy thread for each processor, called a processor_thread, which is scheduled to run when no other thread is available.
- The processor_thread runs in the thread layer; the SCHEDULER runs in the processor layer.

Switching threads with dynamic thread creation
Switching from one user-thread to another requires two steps:
1. Switch from the user-thread releasing the processor to the processor-thread.
2. Switch from the processor-thread to the new user-thread, which is going to take control of the processor. This step requires the SCHEDULER to circle through the thread_table until a thread ready to run is found.
The boundary between user-layer threads and processor-layer threads is crossed twice.


YIELD
A thread voluntarily releases control of the processor, to:
- allow a thread to wait for an event;
- allow several threads running on the same processor to wait for a lock.
The YIELD function is implemented by the kernel to:
- save the state of the current thread;
- schedule another thread (invoke the SCHEDULER);
- start running the new thread (dispatch the processor to the new thread).
YIELD cannot be implemented in a high-level language; it must be implemented in machine language. It can, however, be called from the environment of the thread, e.g., from C, C++, or Java.
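Although a real YIELD must save processor state in machine language, the control flow it produces can be sketched in Python, with generators standing in for threads and a loop standing in for the SCHEDULER. All names here are illustrative, not the kernel's:

```python
def scheduler(threads):
    """Round-robin over suspended 'threads' (generators).
    A generator's `yield` plays the role of YIELD: it hands
    control back to the scheduler, which picks the next thread."""
    ready = list(threads)     # the runnable queue (a toy thread_table)
    trace = []
    while ready:
        thread = ready.pop(0)
        try:
            trace.append(next(thread))  # run the thread until it YIELDs
            ready.append(thread)        # still runnable: requeue it
        except StopIteration:
            pass                        # thread finished: drop it
    return trace

def worker(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"  # YIELD: voluntarily release the processor

# Two "threads" share one processor by yielding cooperatively.
print(scheduler([worker("A", 2), worker("B", 2)]))
# → ['A:0', 'B:0', 'A:1', 'B:1']
```

The interleaved output shows the defining property of cooperative scheduling: control changes hands only at the points where a thread chooses to yield.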


Communication with a bounded buffer using YIELD
Now the producer (the thread writing to the bounded buffer) and the consumer share one processor. SEND and RECEIVE use YIELD to allow the other thread to continue. Example: switching from thread 1 to thread 6 using YIELD involves ENTER_PROCESSOR_LAYER and EXIT_PROCESSOR_LAYER.


(Diagram: a circular buffer with cells 0, 1, 2, ..., N-2, N-1; messages are written at the location pointed to by in and read from the location pointed to by out.)

shared structure buffer
    message instance message[N]
    integer in initially 0
    integer out initially 0
    lock instance buffer_lock initially UNLOCKED

procedure SEND(buffer reference p, message instance msg)
    ACQUIRE(p.buffer_lock)
    while p.in - p.out = N do        /* if buffer full, wait */
        RELEASE(p.buffer_lock)
        YIELD()
        ACQUIRE(p.buffer_lock)
    p.message[p.in modulo N] ← msg   /* insert message into buffer cell */
    p.in ← p.in + 1                  /* increment pointer to next free cell */
    RELEASE(p.buffer_lock)

procedure RECEIVE(buffer reference p)
    ACQUIRE(p.buffer_lock)
    while p.in = p.out do            /* if buffer empty, wait for a message */
        RELEASE(p.buffer_lock)
        YIELD()
        ACQUIRE(p.buffer_lock)
    msg ← p.message[p.out modulo N]  /* copy message from buffer cell */
    p.out ← p.out + 1                /* increment pointer to next message */
    RELEASE(p.buffer_lock)
    return msg
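The SEND/RECEIVE logic can also be sketched in runnable Python. This is a single-processor illustration of the pseudocode rather than the actual kernel implementation: YIELD is a no-op placeholder, the lock is a threading.Lock, and the attribute `inp` stands in for the pointer `in` (a reserved word in Python):

```python
import threading

N = 4  # buffer capacity (illustrative)

class BoundedBuffer:
    def __init__(self):
        self.message = [None] * N
        self.inp = 0                  # pointer to next free cell ("in")
        self.out = 0                  # pointer to next message to read
        self.lock = threading.Lock()  # stands in for buffer_lock

def YIELD():
    pass  # placeholder: a real YIELD hands the processor to another thread

def send(p, msg):
    p.lock.acquire()
    while p.inp - p.out == N:   # buffer full: release the lock and wait
        p.lock.release()
        YIELD()
        p.lock.acquire()
    p.message[p.inp % N] = msg  # insert message into buffer cell
    p.inp += 1                  # advance pointer to next free cell
    p.lock.release()

def receive(p):
    p.lock.acquire()
    while p.inp == p.out:       # buffer empty: release the lock and wait
        p.lock.release()
        YIELD()
        p.lock.acquire()
    msg = p.message[p.out % N]  # copy message from buffer cell
    p.out += 1                  # advance pointer to next message
    p.lock.release()
    return msg

buf = BoundedBuffer()
send(buf, "a")
send(buf, "b")
print(receive(buf), receive(buf))  # messages come out in FIFO order
```

Note the pattern inside the wait loops: the lock is released before yielding and re-acquired afterwards, so the other thread can make progress while this one waits.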

Shared data structures protected by locks
All threads share:
- the bounded buffer;
- the thread table.
Both resources are protected by locks. Is this sufficient? Recall that the pointers in and out are also shared resources.

Two senders execute the code concurrently
Processor 1 runs thread A; processor 2 runs thread B; memory, reached over the processor-memory bus, holds the shared buffer and pointers, initially in = out = 0 (buffer empty).
Timeline:
- Thread B fills entry 0 with item b at time t1.
- Thread A fills entry 0 with item a at time t2.
- Thread A increments the pointer at time t3.
- Thread B increments the pointer at time t4.
Item b is overwritten; it is lost.
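The lost update can be replayed deterministically by executing the four operations in the interleaving from the timeline by hand. This is a simulation, not concurrent code; the point is only to show the effect of running the unlocked SEND steps in this order:

```python
buffer = [None, None]
in_ptr = 0  # shared "in" pointer; nothing protects it here

# Interleaving from the timeline: B fills entry 0 at t1, A fills
# entry 0 at t2, A increments at t3, B increments at t4.
buffer[in_ptr % 2] = "b"   # t1: thread B writes item b into entry 0
buffer[in_ptr % 2] = "a"   # t2: thread A overwrites entry 0; b is lost
in_ptr += 1                # t3: thread A increments the shared pointer
in_ptr += 1                # t4: thread B increments it again

print(buffer, in_ptr)  # → ['a', None] 2
```

The pointer claims two cells were filled, yet only item a is in the buffer: exactly the race the locks in SEND are there to prevent.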

Using events for thread sequence coordination
YIELD requires the thread to periodically check whether a condition has occurred. The basic idea: use events and construct two before-or-after actions:
- WAIT(event_name): issued by the thread which can continue only after the occurrence of the event event_name.
- NOTIFY(event_name): searches the thread_table to find a thread waiting for the occurrence of the event event_name.

Polling and interrupts
Polling: periodically checking the status of a subsystem. How often should the polling be done?
- Too frequently: large overhead.
- After a large time interval: the system will appear non-responsive.
Interrupts could be implemented in hardware as polling: before executing the next instruction, the processor checks an interrupt bit implemented as a flip-flop.
- If the bit is ON, the processor invokes the interrupt handler instead of executing the next instruction.
- Multiple types of interrupts: multiple interrupt bits, checked based upon the priority of the interrupt.
- Some architectures allow interrupts to occur during the execution of an instruction.
- The interrupt handler should be short and very carefully written.
- Interrupts of lower priority can be masked.

This solution does not work
The NOTIFY should always be sent after the WAIT. If the sender and the receiver run on two different processors, there could be a race condition for the notempty event: the NOTIFY could be sent before the WAIT, and the notification would be lost. There is a tension between modularity and locks. Several possible solutions exist: AWAIT/ADVANCE, semaphores, etc.

AWAIT/ADVANCE solution
A new state, WAITING, and two before-or-after actions that take a RUNNING thread into the WAITING state and back to the RUNNABLE state.
Eventcounts are variables with an integer value, shared between the threads and the thread manager; they are like events but carry a value. A thread in the WAITING state waits for a particular value of the eventcount.
- AWAIT(eventcount, value):
  - If eventcount > value, control is returned to the thread calling AWAIT and this thread continues execution.
  - If eventcount ≤ value, the state of the thread calling AWAIT is changed to WAITING and the thread is suspended.
- ADVANCE(eventcount):
  - increments the eventcount by one, then
  - searches the thread_table for threads waiting for this eventcount;
  - if it finds such a thread and the eventcount now exceeds the value the thread is waiting for, the state of that thread is changed to RUNNABLE.
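A single-processor simulation of eventcounts follows; the dictionary stands in for the thread_table, and the state changes mimic what the thread manager would do. All names are illustrative (in particular `await_` avoids Python's `await` keyword):

```python
RUNNABLE, WAITING = "RUNNABLE", "WAITING"

class EventCount:
    """An event with an integer value, shared with the thread manager."""
    def __init__(self):
        self.count = 0

thread_table = {}  # thread_id -> {"state", "event", "value"}

def await_(tid, ec, value):
    """AWAIT: continue if the eventcount already exceeds value;
    otherwise record the thread as WAITING on (ec, value)."""
    if ec.count > value:
        return                         # nothing to wait for: keep running
    thread_table[tid] = {"state": WAITING, "event": ec, "value": value}

def advance(ec):
    """ADVANCE: increment the eventcount, then scan the thread table
    and make RUNNABLE every thread whose awaited value is now passed."""
    ec.count += 1
    for tid, entry in thread_table.items():
        if (entry["state"] == WAITING and entry["event"] is ec
                and ec.count > entry["value"]):
            entry["state"] = RUNNABLE

ec = EventCount()
await_("t1", ec, 0)                 # count is 0, not > 0: t1 is suspended
print(thread_table["t1"]["state"])  # → WAITING
advance(ec)                         # count becomes 1 > 0: wake t1
print(thread_table["t1"]["state"])  # → RUNNABLE
```

Because AWAIT compares against a value rather than waiting for a bare event, a NOTIFY-before-WAIT ordering cannot lose a wakeup: if ADVANCE runs first, the later AWAIT simply sees a count that is already large enough and returns.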

Thread states and state transitions

Solution for a single sender and multiple receivers

Supporting multiple senders: the sequencer
A sequencer is a shared variable supporting thread sequence coordination; it allows threads to be ordered and is manipulated using two before-or-after actions:
- TICKET(sequencer): returns a non-negative value which increases by one at each call. Two concurrent threads calling TICKET on the same sequencer will receive different values; based upon the timing of the calls, the one calling first will receive the smaller value.
- READ(sequencer): returns the current value of the sequencer.

Multiple sender solution; only the SEND must be modified

structure sequencer
    long integer ticket

procedure TICKET(sequencer reference s)
    ACQUIRE(thread_table_lock)
    t ← s.ticket
    s.ticket ← s.ticket + 1
    RELEASE(thread_table_lock)
    return t

procedure READ(eventcount reference event)
    ACQUIRE(thread_table_lock)
    e ← event.count
    RELEASE(thread_table_lock)
    return e
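A minimal Python version of the sequencer follows; the guard lock stands in for thread_table_lock, and the function names mirror the pseudocode (lowercased to be idiomatic Python):

```python
import threading

class Sequencer:
    def __init__(self):
        self.ticket_value = 0
        self.lock = threading.Lock()  # stands in for thread_table_lock

def ticket(s):
    """TICKET: atomically return the current value and increment it,
    so concurrent callers receive distinct, ordered values."""
    with s.lock:
        t = s.ticket_value
        s.ticket_value += 1
        return t

def read(s):
    """READ: return the current value without consuming a ticket."""
    with s.lock:
        return s.ticket_value

s = Sequencer()
print(ticket(s), ticket(s), read(s))  # → 0 1 2
```

Because the read-and-increment happens under the lock, no two callers can ever draw the same ticket, which is exactly the ordering property the multiple-sender SEND relies on.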

Thread scheduling policies
- Non-preemptive scheduling: a running thread releases the processor at its own will. Not very likely to work in a greedy environment.
- Cooperative scheduling: a thread calls YIELD periodically.
- Preemptive scheduling: a thread is allowed to run for a time slice. It is enforced by the thread manager working in concert with the interrupt handler.
The interrupt handler should invoke the thread exception handler. What if the interrupt handler, running at the processor layer, invokes the thread directly? Imagine the following sequence:
1. Thread A acquires the thread_table_lock.
2. An interrupt occurs.
3. The YIELD call in the interrupt handler attempts to acquire the thread_table_lock, which thread A still holds.
Solution: the processor is shared between two threads:
- the processor thread;
- the interrupt handler thread.
Recall that threads have their individual address spaces, so when allocating the processor to a thread the scheduler must also load the thread's page map table into the page map table register of the processor.
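The preemptive policy can be sketched as a round-robin loop that charges each thread one time slice per turn while circling through the thread table. This is a simulation: "preemption" is modeled by capping the work done per turn rather than by a real timer interrupt, and all names are illustrative:

```python
def round_robin(thread_table, quantum):
    """Each entry is [name, remaining_work]. Run each thread with
    work left for at most `quantum` units, then preempt it and
    move to the next entry, until all threads are done."""
    schedule = []
    while any(work > 0 for _, work in thread_table):
        for entry in thread_table:
            name, work = entry
            if work == 0:
                continue                  # finished thread: skip it
            run = min(quantum, work)      # "timer interrupt" after quantum
            entry[1] = work - run
            schedule.append((name, run))
    return schedule

# Thread A needs 3 units of work, thread B needs 5; quantum is 2.
print(round_robin([["A", 3], ["B", 5]], quantum=2))
# → [('A', 2), ('B', 2), ('A', 1), ('B', 2), ('B', 1)]
```

The trace shows the defining contrast with the non-preemptive policy above: no thread runs longer than one quantum at a time, regardless of how much work it has left.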

Virtual machines
First commercial product: IBM VM/370, originally developed as CP-67.
Advantages:
- One can run multiple guest operating systems on the same machine.
- An error in one guest operating system does not bring the machine down.
- An ideal environment for developing operating systems.
(Figure: applications such as Word, Internet Explorer, Firefox, and X Windows run in user mode; the operating system runs in kernel mode.)

(Figure: three layers. The thread layer holds the threads of OS1 through OSn (each with ID, SP, PC, PMAP); the virtual machine layer holds guest OS1 through guest OSn (each with ID, SP, PC, PMAP); the processor layer holds processors A and B (each with ID, SP, PC, PMAP).)

Performance metrics
A wide range, sometimes correlated, other times with contradictory goals:
- throughput, utilization, waiting time, fairness;
- latency (time in system);
- capacity;
- reliability, as an ultimate measure of performance.
Some measures of performance reflect physical limitations: capacity, bandwidth (CPU, memory, communication channel), communication latency. Other measures reflect system organization and policies, such as scheduling priorities.
Resource sharing is an enduring problem; recall that one of the means for virtualization is multiplexing physical resources. The workload can be characterized statistically, and queuing theory can be used for analytical performance evaluation.

System design for performance
When you have a clear idea of the design, simulate the system before actually implementing it. Identify the bottlenecks, and identify those likely to be removed naturally by the technologies expected to be embedded in your system. Keep in mind that removing one bottleneck exposes the next.
Concurrency helps a lot, both in hardware and in software. In hardware it implies multiple execution units:
- pipelining: multiple instructions are executed concurrently;
- multiple execution units in a processor: integer, floating point, pixel;
- graphics processors: geometric engines;
- multiprocessor systems;
- multi-core processors.
Paradigms: SIMD (Single Instruction Multiple Data), MIMD (Multiple Instructions Multiple Data).

System design for performance (cont'd)
Concurrency in software complicates writing and debugging programs; see the SPMD (Same Program Multiple Data) paradigm.
Design a well-balanced system:
- the bandwidths of the individual subsystems should be as close to each other as possible;
- the execution times of pipeline stages should be as close to each other as possible.