Yielding, General Switching. November 2008, Winter Term 2008/2009. Gerd Liefländer, Universität Karlsruhe (TH), System Architecture Group

System Architecture 6: Switching (Yielding, General Switching). November 10, 2008, Winter Term 2008/2009. Gerd Liefländer 1

Agenda: Review & Motivation; Switching Mechanisms; Cooperative PULT Scheduling + Switch; Cooperative KLT Scheduling; KIT Switch of KLTs; Additional Design Parameters; User and Kernel Stack; Idle; Initialization/Termination 2

Review & Motivation 3

Implementing Threads: Problems to Solve. How to design the mechanism thread_switch? (Figure: the instruction pointers (IP) of threads A, B, C and D advancing over time.) How to schedule threads, and for how long? Do we need time slices in every computer? 4

Implementing Threads: Influence of CPU Switching. With the CPU switching back and forth among threads, the rate at which a thread performs its computation will not be uniform, nor will it be reproducible if the same set of threads runs again; its timing (e.g. waiting times) can depend on other application or system activities. Threads should therefore never be programmed with built-in assumptions about timing. 5

Implementing Threads: Conclusion. Never accept a solution that relies on timing conditions. If you program a portable application, do not rely on specific scheduling policies or on the number of processors. If you can rely on a specific platform that offers different scheduling policies, try to pick the most promising one. In each system the scheduling policies should be supported by a policy-free dispatching mechanism. 6

PULT (Pure User-Level Thread) Scheduling 7

Cooperative Scheduling. Common research trick: simplify whenever possible. Assumption: given two pure CPU-bound threads T1 and T2, one CPU, and no interference from a device; dispatch only cooperatively via yield(). (Figure: a timeline in which T1 and T2 alternate between running and calling yield() to hand the CPU to the other thread.) 8

Yield: Simplified User-Level Yield (1). (Figure: T1 calls yield(T2); inside yield() the context of T1 is saved and the context of T2 is reloaded, so the return from yield happens in T2. A bit tricky: yield returns to another caller.) 9

Yield: Simplified UL-Yield (2). (Figure: the same scenario continued. T1 calls yield(T2): save context of T1, reload context of T2, return from yield inside T2. Later T2 calls yield(T1): save context of T2, reload context of T1, return from yield inside T1. What exactly is happening at the switch point?) 10

Yield: Simplified UL-Yield (3). procedure yield(NT: thread) { save context of CT (how?); load context of NT; return }. The first part of yield still runs under control of the caller; the second part will run under control of the next thread. How to solve this problem? Assumption: both threads T1 and T2 have already called yield() once before. Corollary: each thread gives up and regains control within the procedure yield at exactly the same (user-land) instruction. 11

Yield: Simplified UL-Yield (4). procedure yield(NT: thread) { save context of CT; CT.sp := SP; SP := NT.sp; load context of NT; return }. The part before the stack-pointer change runs under control of the caller; the part after it runs under control of the next thread. SP = stack pointer register, sp = entry in the TCB. (A runnable user-level sketch follows below.) 12
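To make the simplified yield concrete, here is a minimal runnable sketch in C using POSIX ucontext. swapcontext() plays the role of the "save context of CT; CT.sp := SP; SP := NT.sp; load context of NT" sequence above; it is not the lecture's own code, and the names (tcb, yield_to, t1, t2) are illustrative.

#include <stdio.h>
#include <ucontext.h>

#define STACK_SIZE (64 * 1024)

static ucontext_t tcb[2];          /* "TCBs": the saved contexts of T1 and T2 */
static ucontext_t main_ctx;        /* context of main(), resumed when T1 ends */
static char stack1[STACK_SIZE], stack2[STACK_SIZE];

static void yield_to(int cur, int next) {
    /* save context of CT into its TCB, load context of NT: the user-level switch */
    swapcontext(&tcb[cur], &tcb[next]);
}

static void t1(void) {
    for (int i = 0; i < 3; i++) {
        printf("T1 step %d\n", i);
        yield_to(0, 1);            /* yield(T2) */
    }                              /* returning resumes main_ctx via uc_link */
}

static void t2(void) {
    for (int i = 0; i < 3; i++) {
        printf("T2 step %d\n", i);
        yield_to(1, 0);            /* yield(T1) */
    }
}

int main(void) {
    getcontext(&tcb[0]);
    tcb[0].uc_stack.ss_sp   = stack1;
    tcb[0].uc_stack.ss_size = sizeof stack1;
    tcb[0].uc_link          = &main_ctx;
    makecontext(&tcb[0], t1, 0);

    getcontext(&tcb[1]);
    tcb[1].uc_stack.ss_sp   = stack2;
    tcb[1].uc_stack.ss_size = sizeof stack2;
    tcb[1].uc_link          = &main_ctx;
    makecontext(&tcb[1], t2, 0);

    swapcontext(&main_ctx, &tcb[0]);   /* start T1; alternation T1/T2 follows */
    return 0;
}

As in the slides, every thread gives up and regains control inside the same function (here yield_to), and only the stack of the running thread is in use at any moment.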

Stack Contents during UL-Yield (1). (Figure: the stacks of T1 and T2 side by side while T1 runs. SP points into T1's stack, which holds only T1's local variables; T2's stack is frozen inside an earlier yield call and holds T2's local variables, the parameter T1, the return address to T2, and yield's local variables.) 13

Stack Contents during UL-Yield (2). (Figure: T1 calls yield(T2). T1's stack now additionally holds the parameter T2, the return address to T1, and yield's local variables; SP points to yield's frame on T1's stack.) 14

Stack Contents during UL-Yield (3). (Figure: the context of T1 is saved; SP still points into T1's stack.) 15

Stack Contents during UL-Yield (4). (Figure: the saved stack pointer is written to the current TCB, i.e. the TCB of T1, and the stack-pointer switch begins.) 16

Stack Contents during UL-Yield (5). (Figure: the stack pointer is switched: SP is loaded from NT's TCB and now points into T2's frozen yield frame.) 17

Stack Contents during UL-Yield (6). (Figure: the context of T2 is restored from T2's stack.) 18

Stack Contents during UL-Yield (7). (Figure: the return from yield pops the return address to T2; execution continues in T2.) 19

Stack Contents during UL-Yield (8). (Figure: T2 runs. SP points into T2's stack; T1's stack is now the one frozen inside yield, holding the parameter T2 and the return address to T1.) 20

Yield: Summary of a PULT Yield. Assumption: a single-processor system in which yield is the only dispatching possibility. Only the stack of the running thread is visible. The number of involved stack elements, as well as their order, is the same for both threads; only the content of the involved stack elements differs a bit. Of course, T1 and T2 can have different local variables. 21

Library: Library Contents. A user-level thread library can contain code for: creating and destroying PULTs, passing messages between PULTs, scheduling thread execution, synchronizing with other PULTs, and saving/restoring the context of a PULT (see the interface sketch below). 22
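Purely for illustration, such a PULT library might export an interface along these lines; every name here is hypothetical and not taken from any real library or from the lecture's code.

#include <stddef.h>

typedef struct pult pult_t;    /* opaque handle for a pure user-level thread */

pult_t *pult_create(void (*entry)(void *), void *arg);      /* create a PULT       */
void    pult_destroy(pult_t *t);                            /* destroy a PULT      */
int     pult_send(pult_t *to, const void *msg, size_t len); /* message passing     */
int     pult_recv(void *buf, size_t len);                   /* message passing     */
void    pult_yield(void);                  /* cooperative scheduling decision      */
void    pult_join(pult_t *t);              /* synchronize with another PULT        */
/* saving/restoring a PULT's context happens inside pult_yield()/pult_create() */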

Library: Potential Kernel Support for PULTs. Though the kernel is not aware of a PULT, it still manages the activity of the task that hosts the PULT. Example: when a PULT does a blocking system call, the kernel blocks its whole task; from the point of view of the PULT scheduler this PULT is still in the PULT thread state "running". Thesis: PULT thread states are independent of task states (sketched below). 23
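The thesis can be pictured with two independent state enumerations; a tiny illustrative C sketch, with all names hypothetical:

/* The PULT library keeps its own per-PULT state, independent of the kernel's
 * per-task state. A PULT can be PULT_RUNNING from the library's point of view
 * while its hosting task is TASK_BLOCKED, e.g. because some PULT in the task
 * issued a blocking system call. */
typedef enum { PULT_READY, PULT_RUNNING, PULT_WAITING } pult_state_t;  /* user level   */
typedef enum { TASK_READY, TASK_RUNNING, TASK_BLOCKED } task_state_t;  /* kernel level */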

Cooperative Scheduling of KLTs Anthony D. Joseph http://inst.eecs.berkeley.edu/~cs162 24

KIT Switch of KLTs 25

Control: Causes for a Switch. Additional reasons for switching to another thread. Synchronous: the current thread (CT) terminates; CT calls synchronous I/O and must wait for the result; CT waits for a message from another thread; CT is cooperative and hands over the CPU to another thread. Asynchronous (preemption): CT exceeds its time slice; CT has lower priority than another ready thread, e.g. CT is interrupted by a device waking up another thread, a higher-priority thread's sleep time is exhausted, CT creates a new thread with higher priority, or CT gets a software interrupt from another thread. 26

Needed: External Events. What might happen if a KLT never does any I/O, never waits for anything, and never calls yield()? Could the ComputePI program grab all resources and never release the processor? What if it didn't print to the console? We must find a way for the kernel dispatcher to regain control. Answer: utilize external events. Interrupts: signals from hardware or software that stop the running code and jump into the kernel. Timer: like an alarm clock that goes off every x milliseconds. If we ensure that external events occur frequently enough, the dispatcher can regain control (see the user-space sketch below). 27
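A user-space analogue of the alarm-clock idea, assuming a POSIX system: a periodic interval timer whose signal handler only sets a need_resched flag, which the long-running loop checks at safe points. In a real kernel the clock interrupt plays this role; this sketch is illustrative only, and all names are made up.

#include <signal.h>
#include <stdio.h>
#include <sys/time.h>

static volatile sig_atomic_t need_resched = 0;

static void on_tick(int sig) {
    (void)sig;
    need_resched = 1;              /* only mark; the "dispatcher" acts later */
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_handler = on_tick;
    sigaction(SIGALRM, &sa, NULL);

    struct itimerval tick = {
        .it_interval = { .tv_sec = 0, .tv_usec = 10 * 1000 },  /* every 10 ms */
        .it_value    = { .tv_sec = 0, .tv_usec = 10 * 1000 },
    };
    setitimer(ITIMER_REAL, &tick, NULL);

    unsigned long work = 0;
    int ticks = 0;
    while (ticks < 5) {            /* "ComputePI": never yields voluntarily   */
        work++;                    /* pure CPU-bound work                     */
        if (need_resched) {        /* preemption point: control is regained   */
            need_resched = 0;
            ticks++;
            printf("tick %d after %lu iterations\n", ticks, work);
        }
    }
    return 0;
}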

Control: Events Triggering a Switch. Exceptions (all synchronous events): faulty events (reproducible), e.g. division by zero (during the instruction) or address violation (during the instruction); unpredictable events, e.g. page fault (before the instruction); breakpoints, on data (after the instruction) or on code (before the instruction); system call (trap). 28

Control: Events Triggering a Switch. Interrupts (all asynchronous events¹): clock (end of time slice, wake-up signal); printer (missing paper, paper jam, ...); network (packet arrived, ...); another CPU (inter-processor signal); software interrupt. ¹ From the point of view of the interrupted CPU. 29

Control: Nested Interrupt Handling (2). The APIC sits in between the CPU and the peripherals. IR (input register): pending interrupts are listed here (as 1-bits). MR (mask register): where IRs can be masked out. Compare register: helps to decide whether interrupting the current interrupt handling is allowed. Dynamic or static interrupt scheme; rotating or fixed priorities. (A toy model of this decision follows below.) 30
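A toy model in plain C (not real APIC programming) of the decision the slide describes: deliver a pending interrupt only if it is unmasked and its fixed priority beats the interrupt currently in service. Field names follow the slide, not any datasheet.

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t ir;         /* input register: pending interrupts, one bit per line */
    uint32_t mr;         /* mask register: 1 = line masked out                   */
    int      in_service; /* priority currently being handled, -1 if none         */
} pic_state_t;

/* fixed priorities: lower line number = higher priority */
static int next_deliverable(const pic_state_t *s) {
    for (int line = 0; line < 32; line++) {
        bool pending  = s->ir & (1u << line);
        bool masked   = s->mr & (1u << line);
        bool preempts = (s->in_service < 0) || (line < s->in_service);
        if (pending && !masked && preempts)
            return line;    /* this line may interrupt the current handler */
    }
    return -1;              /* nothing deliverable right now */
}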

Control: Linux Interrupt Handling. Linux splits interrupt handling into a top-half handler (urgent phase, e.g. reading and clearing device registers) and a bottom-half handler/tasklet (less urgent phase, e.g. copying buffers). (Figure: hardware interrupt, interrupt handler, return from interrupt (RTI); afterwards the current thread or the next thread continues. Depending on the system and/or the interrupt, sometimes the next thread = the current thread.) A self-contained toy model follows below. 31
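The following is a self-contained toy model of the top-half/bottom-half split, not the actual Linux kernel API: the top half only grabs the device value and marks deferred work, while the bottom half runs later, outside (simulated) interrupt context. All names are illustrative.

#include <stdio.h>

#define RING 8
static int ring[RING];
static int head, tail;                 /* ring buffer filled by the top half  */
static volatile int bottom_pending;    /* "deferred work scheduled" flag      */

/* top half: urgent phase, conceptually called from the interrupt */
static void top_half(int device_register) {
    ring[head++ % RING] = device_register;   /* read and clear device register */
    bottom_pending = 1;                      /* defer the slower work          */
}

/* bottom half: less urgent phase, runs later with "interrupts enabled" */
static void bottom_half(void) {
    while (tail < head)                      /* copy buffers, notify threads   */
        printf("processed value %d\n", ring[tail++ % RING]);
    bottom_pending = 0;
}

int main(void) {
    top_half(10); top_half(20); top_half(30);  /* three back-to-back "interrupts" */
    if (bottom_pending)                        /* later, before resuming a thread */
        bottom_half();
    return 0;
}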

Control: Nested Exception Handling (1). (Figure: the current application raises a division by 0; the division-by-0 handler runs; what if the handler itself raises a division by 0, i.e. a bug in the exception handler?) Remark: some systems allow application-specific exception handlers. 32

Control: Nested Exception Handling (2). (Figure: the division-by-0 handler itself raises an invalid-address exception, a bug in an exception handler, at least somewhere in the kernel.) Remark: some systems allow two to three nested exceptions, but not more. 33

Control: Construction Conclusion. Due to these events we need a centralized control instance in the microkernel or kernel. Due to the sensitivity of these events, thread switching and thread control need special protection: kernel mode, with code and data inside the kernel address space. Let's study the case: the current KLT CT has consumed its complete time slice. 34

Switch: End of Time Slice. Objective: establishing fair scheduling. Assumption: no other thread-switching events are discussed in detail. Simplification: no detailed clock interrupt handling. (Figure: the time slice of the current thread CT on the CPU, from start of time slice to end of time slice; what happens then?) 35

Simplified Thread Switch (figure built up over slides 36-48): the current thread (green) runs at user level; a clock interrupt transfers control to kernel level, where the clock interrupt handler runs. The handler makes an internal call to the thread switch, which again just switches the (kernel) stack pointer. The internal call then returns into the clock interrupt handling of the new thread (blue), the clock interrupt post-processing finishes, and the return to user mode resumes the new current thread. On the next clock interrupt the roles are reversed and the old current thread (green) becomes the new current thread again. 36-48

Review. (Figure: two task address spaces compared. With PULTs, the TCBs, yield() and thread_switch live inside the task's address space. With KLTs, the TCBs and thread_switch live in the kernel, and the switch is triggered by the end of a time slice or by yield().) 49

Switch: Switch Implementation. Assumption¹: whenever the kernel is entered, i.e. via interrupt, exception or system call, the hardware automatically pushes SP, IP and the status flags, i.e. the user context of the current thread (e.g. CT = T1), onto the kernel stack of T1. The kernel stack is implemented in its related TCB, e.g. TCB1. ¹ Some processors use shadow registers instead. 50

Switch. (Figure: the processor registers hold IP, SP and flags of T1; the TCBs of T1 and T2 each contain a kernel SP entry; memory holds the code and stacks of T1 and T2, the clock interrupt handler (CIH) with its data, and the switch code with its data.) The current thread T1 is running. Note: as long as T1 is running in user mode, its kernel stack is nearly empty; however, at least the start address of the kernel stack is kept in TCB T1's SP entry. 51

Switch walkthrough (figure repeated on slides 52-58 with a growing caption): CT T1 is running in user mode. A clock interrupt saves the context of T1 (IP, SP, flags) onto T1's kernel stack and loads the context of the clock interrupt handler. The CIH decides that the time slice of T1 has ended and calls thread_switch(T2). thread_switch saves the kernel SP into TCB1 and loads the kernel SP from TCB2. Next steps? Complete them for yourself (a possible sketch follows below). 52-58
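For reference, one way the kernel stack switch walked through above might look, assuming x86-64, the System V calling convention (old TCB in %rdi, new TCB in %rsi), a compiler that supports the naked attribute on x86 (GCC 8 or newer, or Clang), and kernel_sp as the first field of the TCB. Struct and function names are hypothetical, not the lecture's code.

/* Push the callee-saved registers, save SP into the old TCB, load SP from the
 * new TCB, pop the other thread's callee-saved registers and return on its
 * kernel stack. */
struct tcb {
    void *kernel_sp;   /* saved kernel stack pointer; must stay at offset 0 */
};

__attribute__((naked))
void thread_switch(struct tcb *from, struct tcb *to)
{
    __asm__ volatile(
        "pushq %rbp\n\t"
        "pushq %rbx\n\t"
        "pushq %r12\n\t"
        "pushq %r13\n\t"
        "pushq %r14\n\t"
        "pushq %r15\n\t"
        "movq  %rsp, (%rdi)\n\t"   /* from->kernel_sp = current SP            */
        "movq  (%rsi), %rsp\n\t"   /* SP = to->kernel_sp: the stack switches  */
        "popq  %r15\n\t"
        "popq  %r14\n\t"
        "popq  %r13\n\t"
        "popq  %r12\n\t"
        "popq  %rbx\n\t"
        "popq  %rbp\n\t"
        "ret\n\t"                  /* returns on the NEW thread's kernel stack */
    );
}

After the ret, execution continues wherever the new thread last called thread_switch, i.e. inside the CIH, whose post-processing then finishes on the new thread's kernel stack before the return to user mode.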

Additional Design Parameters 59

Switch: Implementation Alternatives. Number of kernel stacks involved: either one kernel stack for all threads, or each KLT/process has its own kernel stack. Discuss carefully! 60

Stack Management. Each process/KLT has two stacks: a kernel stack in the kernel address space and a user stack in the user address space. The stack pointer changes when entering/exiting the kernel. Why is this necessary? 61

Switch: Open Questions. If thread T2 is not known in advance: need for a scheduling policy? (see later chapters). If we know thread T2, where do we get its TCB? (see exercise). If the kernel stack is part of the TCB: danger of stack overflow? How to handle thread initiation and termination? Remark: the limitation on kernel stack size is no real problem in practice; if your system suffers from a kernel stack overflow, that is an obvious sign of a severe kernel bug. Examine your kernel design and implementation before playing around with increasing kernel stack sizes. 62

Switch Open Questions How to handle thread initiation and termination? General remark (Principle of Construction): Solve special cases with the normal-case solution 63

Switch: Termination. How to terminate a thread? Do all necessary work for cleaning up the thread's environment, then switch to another thread and never return to the exiting thread. No additional mechanisms are required. (Figure: T1 executes the first part of the exit kernel operation, then the switch; T2 continues inside some nested kernel operations of its own.) 64

Switch: Initialization. What to do when switching to a brand-new thread for the very first time? (Figure: T1, the switch, then the new thread T2; something is missing in between!) 65

Switch: Initialization. Initialize the new thread's (T2's) kernel stack with the second part of thread_switch and the exit function: returning from thread_switch then leads to the second part of the system-call exit path, to the return to T2 in user mode, and to the start of T2's first instruction. (Figure: the same trick; the "2nd part of exit()" is already on the new thread's kernel stack. See the sketch below.) 66
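A sketch of this trick, matching the hypothetical thread_switch sketch shown earlier: the new thread's kernel stack is pre-filled with six dummy callee-saved registers and a "return address", so the very first switch to it pops the dummies and "returns" into a start routine that would run the second part of the kernel-exit path and drop into user mode. Layout and names are illustrative (x86-64, downward-growing stack).

#include <stdint.h>

void *prepare_new_kernel_stack(void *stack_top, void (*start_routine)(void))
{
    uint64_t *sp = (uint64_t *)stack_top;

    *--sp = (uint64_t)(uintptr_t)start_routine;  /* popped by thread_switch's ret */
    for (int i = 0; i < 6; i++)
        *--sp = 0;                               /* fake rbp, rbx, r12..r15       */

    return sp;   /* store this value in the new thread's TCB kernel_sp field */
}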

Switch: Idle CPU Problem. What to do when there is no thread to switch to? Solution: avoid that situation by introducing an idle thread that is always runnable. Question: what are the major properties of an idle thread? 67

Switch: Idle. When to install it: before booting, while booting, or after booting? How to guarantee that the idle thread is always runnable? Avoid any wait events in the idle thread (see the sketch below). 68
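As a tiny illustration of "no wait events", an idle thread body can be nothing more than an endless loop; the name is made up, and on real hardware the loop body would typically be a privileged halt-until-interrupt instruction (e.g. x86 hlt) rather than busy spinning.

static void idle_thread(void)
{
    for (;;) {
        /* never wait on any event; just spin (or halt) until preempted */
    }
}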

Switch: Summary of Kernel-Level Threads. All thread management is done by the kernel; there is no thread library, but an API to the kernel thread facility. The kernel maintains TCBs for the task and its threads. Switching between threads requires the kernel. Scheduling is done on a per-thread basis. 69

Switch: Pros/Cons of KLTs. Advantages: the kernel can simultaneously schedule threads of the same task on different processors; a blocking system call blocks only the calling thread, not the other threads of the same application; even kernel tasks can be multi-threaded. Disadvantages: switching within the same task involves the kernel, costing two additional mode switches per thread switch, which can result in a significant slowdown. 70

Switch: Summary of Thread Switching. Environment: kernel entry + mode switch (user to kernel); changing the old thread's state; selecting the new thread (optional); thread_switch (context switch); changing the new thread's state; kernel exit + mode switch (kernel to user). Remark: the kernel entry/exit and mode switches are only needed for kernel-level threads. 71

Preview: Thread Control and Representation; Thread Switch; Thread States orthogonal to Task States; Dispatching of Threads. 72