
SMP T-Kernel Specification
Ver. 1.00.00
TEF021-S002-01.00.00/en
February 2009

SMP T-Kernel Specification (Ver.1.00.00) TEF021-S002-01.00.00/en February 2009

Copyright 2006-2009 T-Engine Forum. All Rights Reserved. T-Engine Forum owns the copyright of this specification. Permission of T-Engine Forum is required for copying, republishing, posting on servers, or redistributing the contents of this specification to lists. The contents of this specification may be changed in the future without prior notice, for improvement or other reasons.

For inquiries about this specification, please contact:
Publisher: T-Engine Forum
The 28th Kowa Building, 2-20-1 Nishi-gotanda, Shinagawa-Ward, Tokyo 141-0031 Japan
TEL: +81-3-5437-0572
FAX: +81-3-5437-2399
E-mail: office@t-engine.org

Table of Contents

Chapter 1 SMP T-Kernel Overview
  1.1 Position of SMP T-Kernel
  1.2 Background
  1.3 Policies of Specification Establishment
    1.3.1 Fundamental Policy
    1.3.2 Hardware Prerequisites
    1.3.3 Basic System Configuration
Chapter 2 Concepts Underlying the SMP T-Kernel Specification
  2.1 Definition of Basic Terminology
    2.1.1 Implementation-Related Terminology
    2.1.2 System-Related Terminology
    2.1.3 Meaning of Other Basic Terminology
  2.2 SMP T-Kernel System
    2.2.1 Processor
    2.2.2 Processor and SMP T-Kernel
    2.2.3 Differences With Single Processor Systems
  2.3 Task States and Scheduling Rules
    2.3.1 Task States
    2.3.2 Task Scheduling Rules
    2.3.3 Task Execution Processor
  2.4 System States
    2.4.1 System States While Non-task Portion Is Executing
    2.4.2 Task-Independent Portion and Quasi-Task Portion
  2.5 Objects
  2.6 Memory
    2.6.1 Address Space
    2.6.2 Resident Memory and Nonresident Memory
  2.7 Protection Levels
  2.8 Domains
    2.8.1 Concept of Domain
    2.8.2 Kernel Domains and Hierarchical Structure of Domains
    2.8.3 ID Number Search Function
    2.8.4 Domains and Access Protection Attributes
    2.8.5 Target and Restrictions of Access Protection
  2.9 Interrupt and Exception
    2.9.1 Interrupt Handling
    2.9.2 Task Exception Handling
  2.10 Low-level Operation Function
Chapter 3 Common SMP T-Kernel Specifications
  3.1 Data Types
    3.1.1 General Data Types
    3.1.2 Other Defined Data Types
  3.2 System Calls
    3.2.1 System Call Format
    3.2.2 System Calls Possible from Task-Independent Portion and Dispatch Disabled State
    3.2.3 Restricting System Call Invocation
    3.2.4 Modifying a Parameter Packet
    3.2.5 Function Codes
    3.2.6 Error Codes
    3.2.7 Timeout
    3.2.8 Relative Time and System Time
  3.3 High-Level Language Support Routines
Chapter 4 SMP T-Kernel/OS Functions
  4.1 Task Management Functions
  4.2 Task-Dependent Synchronization Functions
  4.3 Task Exception Handling Functions
  4.4 Synchronization and Communication Functions
    4.4.1 Semaphore
    4.4.2 Event Flag
    4.4.3 Mailbox
  4.5 Extended Synchronization and Communication Functions
    4.5.1 Mutex
    4.5.2 Message Buffer
    4.5.3 Rendezvous Port
  4.6 Memory Pool Management Functions
    4.6.1 Fixed-size Memory Pool
    4.6.2 Variable-size Memory Pool
  4.7 Time Management Functions
    4.7.1 System Time Management
    4.7.2 Cyclic Handler
    4.7.3 Alarm Handler
  4.8 Domain Management Functions
  4.9 Interrupt Management Functions
  4.10 System Management Functions
  4.11 Subsystem Management Functions
Chapter 5 SMP T-Kernel/SM Functions
  5.1 System Memory Management Functions
    5.1.1 System Memory Allocation
    5.1.2 Memory Allocation Libraries
  5.2 Address Space Management Functions
    5.2.1 Address Space Configuration
    5.2.2 Address Space Checking
    5.2.3 Get Address Space Information
    5.2.4 Cache Mode Setting
    5.2.5 Control of Cache
    5.2.6 Get Physical Address
    5.2.7 Map Memory
  5.3 Device Management Functions
    5.3.1 Basic Concepts
    5.3.2 Application Interface
    5.3.3 Device Registration
    5.3.4 Device Driver Interface
    5.3.5 Attribute Data
    5.3.6 Device Event Notification
    5.3.7 Device Suspend/Resume Processing
    5.3.8 Special Properties of Disk Devices
  5.4 Interrupt Management Functions
    5.4.1 CPU Interrupt Control
    5.4.2 Control of Interrupt Controller
  5.5 IO Port Access Support Functions
    5.5.1 IO Port Access
    5.5.2 Micro Wait
  5.6 Interprocessor Management Functions
    5.6.1 Spinlock Control
    5.6.2 Atomic Function
    5.6.3 Memory Barriers
  5.7 Power Management Functions
  5.8 System Configuration Information Management Functions
    5.8.1 System Configuration Information Acquisition
    5.8.2 Standard System Configuration Information
Chapter 6 Starting SMP T-Kernel
  6.1 Subsystem and Device Driver Starting
Chapter 7 SMP T-Kernel/DS Functions
  7.1 Kernel Internal State Reference Functions
  7.2 Trace Functions
Chapter 8 Reference
  8.1 List of Error Codes

List of Figures

Figure 1: SMP T-Kernel System Configuration Diagram
Figure 2(a): Example of task execution by a single processor T-Kernel
Figure 2(b): Example of task execution by SMP T-Kernel
Figure 3: Task State Transitions
Table 1: State Transitions Distinguishing Invoking Task and Other Tasks
Figure 4(a): Precedence in Initial State
Figure 4(b): Precedence After Task B Goes To RUN State
Figure 4(c): Precedence After Task B Goes To WAIT State
Figure 4(d): Precedence After Task B WAIT State Is Released
Figure 5(a): Precedence in Initial State
Figure 5(b): Precedence After Task A Ends
Figure 5(c): Precedence After Task B Goes To WAIT State
Figure 5(d): Precedence After Task B WAIT State Is Released
Figure 6(a): Allocation of execution processors - Initial state
Figure 6(b): Allocation of execution processors - Task E starts and is initialized
Figure 6(c): Allocation of execution processors - State following a dispatch
Figure 7(a): Processor allocation when the execution task is not specified
Figure 7(b): Processor allocation when the execution task is specified
Figure 8(a): Execution processor reallocation - Initial state
Figure 8(b): Execution processor reallocation - Task E starts and is initialized
Figure 8(c): Execution processor reallocation - State following dispatch
Figure 9: Classification of System States
Figure 10(a): Delayed Dispatching in an Interrupt Handler
Figure 10(b): Delayed Dispatching in Interrupt Handlers (Interrupt Nesting)
Figure 10(c): Dispatching in the Quasi-task Portion
Table 2: List of Kernel Objects in SMP T-Kernel
Figure 11: Address Space
Figure 12: Hierarchical Structure of Domains
Figure 13: Behavior of High-Level Language Support Routine
Table 4: Target Task State and Execution Result (tk_ter_tsk)
Table 5: Values of tskwait and wid
Table 6: Task States and Results of tk_rel_wai Execution
Figure 14: Multiple Tasks Waiting for One Event Flag
Figure 15: Format of Messages Using a Mailbox
Figure 16: Synchronous Communication Using Message Buffer of bufsz = 0
Figure 17: Rendezvous Operation
Figure 18(a): Using Rendezvous to Implement ADA select Function
Figure 18(b): Using Rendezvous to Implement ADA select Function
Figure 19: Server Task Operation Using tk_fwd_por
Figure 20(a): Precedence Before Issuing tk_rot_rdq
Figure 20(b): Precedence After Issuing tk_rot_rdq (tskpri = 2)
Figure 21(a): maker Field Format
Figure 21(b): prid Field Format
Figure 21(c): spver Field Format
Figure 22: T-Kernel Subsystems
Figure 23: Relationship Between Subsystems and Resource Groups
Figure 24: Device Management Functions
Table 7: Whether or not it is possible to open the same device at the same time

System Call Notation

In the parts of this specification that describe system calls, the specification of each system call is explained in the format illustrated below.

System call name : Summary description

[C Language Interface]
  Indicates the C language interface for invoking the system call.

[Parameters]
  Describes the system call parameters, i.e., the information passed to the OS when the system call is executed.

[Return Parameters]
  Describes the system call return parameters, i.e., the information returned by the OS when execution of the system call ends.

[Error Codes]
  Describes the errors that can be returned by the system call.
  * The following error codes are common to all system calls and are not included in the error code listings for individual system calls: E_SYS, E_NOSPT, E_RSFN, E_MACV, E_OACV.
  * Error code E_CTX is included in the error code listings for individual system calls only when the conditions for its occurrence are clear (e.g., system calls that enter WAIT state). Depending on the implementation, however, the E_CTX error code may be returned by other system calls as well. Such implementation-specific occurrences of E_CTX are not included in the error code listings for individual system calls.

[Description]
  Describes the system call functions.
  * When the values to be passed in a parameter are selected from various choices, the following notation is used in the parameter descriptions.
      ( x || y || z )   Set one of x, y, or z.
      x | y             Both x and y can be set at the same time (in which case the logical sum of x and y is taken).
      [ x ]             x is optional.
    Example: When wfmode := (TWF_ANDW || TWF_ORW) | [TWF_CLR], wfmode can be specified in any of the following four ways.
      TWF_ANDW
      TWF_ORW
      (TWF_ANDW | TWF_CLR)
      (TWF_ORW | TWF_CLR)

[Additional Notes]
  Supplements the description by noting matters that need special attention or caution, etc.

[Rationale for the Specification]
  Explains the reason for adopting a particular specification.

[Items Concerning SMP T-Kernel]
  Describes points where SMP T-Kernel differs from the T-Kernel 1.00 Specification.
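As a concrete illustration of this notation, the event flag wait call described in Chapter 4 takes a wfmode parameter of exactly this form. Below is a minimal sketch, assuming the C interface of the T-Kernel 1.00 Specification (tk_wai_flg and the TWF_* constants); the flag ID, wait pattern, and header name are illustrative assumptions.

    #include <tk/tkernel.h>   /* T-Kernel definitions (header name may vary by implementation) */

    /* Wait until any of bits 0-3 of event flag 'flgid' is set, then clear the flag.
     * wfmode combines one of (TWF_ANDW || TWF_ORW) with the optional TWF_CLR. */
    void wait_for_event(ID flgid)
    {
        UINT flgptn;
        ER   ercd;

        ercd = tk_wai_flg(flgid, 0x0000000f, TWF_ORW | TWF_CLR, &flgptn, TMO_FEVR);
        if (ercd < E_OK) {
            /* handle error (e.g., the flag was deleted while waiting) */
        }
    }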

Chapter 1 SMP T-Kernel Overview

1.1 Position of SMP T-Kernel

SMP T-Kernel is a real-time operating system for symmetric multiprocessors (SMP: Symmetric Multiple Processor). Its functions extend the T-Kernel 1.00 Specification, which targets single processor embedded systems, to support SMP.

Besides SMP, multiprocessors also include asymmetric multiprocessors (AMP: Asymmetric Multiple Processor). The T-Kernel used for AMP is called AMP T-Kernel. SMP T-Kernel and AMP T-Kernel aim to share specifications as much as possible in consideration of compatibility between them. The two are collectively referred to as "MP T-Kernel".

1.2 Background

The need for multiprocessors has been increasing along with the growing size and sophistication of embedded systems. In past embedded systems that used multiprocessors, it was generally not the OS but the application program that handled control of and communication between processors, each in its own scheme. In the future, however, it is preferable that this handling be standardized, from the viewpoint of software compatibility and portability. In addition, in recent multicore processors, where multiple processor cores are built into a single chip, high-speed communication between cores has become possible and OS-level control across processors is now simpler.

Based on the above observations, an extension of T-Kernel functions to support multiprocessor systems was examined. Multiprocessors are classified broadly into AMP and SMP according to their configuration. In AMP, a role is assigned to each processor statically, and statically assigned programs, including the OS, run on each processor. In SMP, all processors play equal roles, and programs are dynamically allocated to each processor by the OS. The functions and implementation of the OS are therefore very different for AMP and SMP, so AMP T-Kernel and SMP T-Kernel were examined separately during the establishment of the specifications.

However, as the number of processors increases, it is conceivable that systems combining SMP and AMP will appear. Moreover, there is also significant demand for sharing software among AMP, SMP, and single processor systems. Compatibility between AMP T-Kernel and SMP T-Kernel is therefore deemed very important, and the future integration of AMP T-Kernel and SMP T-Kernel is being considered as a result.

1.3 Policies of Specification Establishment

1.3.1 Fundamental Policy

SMP T-Kernel is a real-time OS that mainly targets embedded systems. One of the purposes of the existing T-Kernel 1.00 Specification was to improve the portability and distribution of software across various embedded systems. SMP T-Kernel is a successor of T-Kernel, and improving the portability and distribution of software across various SMP systems is likewise one of its goals. In addition, portability and distribution of software with respect to embedded systems that have non-SMP architectures, namely AMP and single processor systems, are also important.

Based on the above, the following fundamental policy was set during the establishment of the SMP T-Kernel Specification.

(1) Compatibility with the Standard T-Kernel
The aim for SMP T-Kernel is upward compatibility with the standard T-Kernel at the source code level. Apart from the functions extended in SMP T-Kernel, the API is common with the standard T-Kernel, so that porting of software is simple. Moreover, the development of software that can run under both SMP T-Kernel and the standard T-Kernel shall be made possible.

(2) Reducing Hardware Dependency and Supporting Various SMP Systems
The goal for SMP T-Kernel is to support various types of hardware without depending on a specific hardware architecture, and to make porting simple.

(3) Valuing Performance as a Real-time OS Targeting Embedded Systems
The functions of a real-time OS that the T-Kernel 1.00 Specification offers shall be provided by SMP T-Kernel as well.

Therefore, when these functions are used in a single processor system, the goal is execution efficiency equal to that of a T-Kernel 1.00 implementation on a single processor. In addition, communication overhead between processors is given due attention during design.

1.3.2 Hardware Prerequisites

The hardware prerequisites are stipulated as follows, according to the fundamental policy stated above:

(1) Each processor that makes up the SMP system must be capable of running a T-Kernel 1.00 Specification OS on its own. Specifically, the CPU shall be 32-bit or more powerful, with an MMU (Memory Management Unit). The MMU is not indispensable, but restrictions are imposed on functions when there is no MMU.

(2) The following SMP properties are assumed. The processors that make up the SMP system do not differ in basic function: each can execute the same program code, and each can share the main memory with all other processors. Moreover, when memory is cached, the hardware guarantees cache coherency between processors.

1.3.3 Basic System Configuration

An SMP system consists of multiple processors. All processors are managed by one SMP T-Kernel, and the programs to be executed are allocated to each processor dynamically. Task scheduling and object management are managed uniformly for the entire system by SMP T-Kernel. User programs do not need to be aware of individual processors; they operate under one SMP T-Kernel in the same manner as T-Kernel user programs operate on single processor systems.

Figure 1: SMP T-Kernel System Configuration Diagram (user programs issue service calls to one SMP T-Kernel, which manages Processor 1 through Processor n)

Like the T-Kernel Specification, SMP T-Kernel consists of the following three parts: T-Kernel/Operating System (T-Kernel/OS), T-Kernel/System Manager (T-Kernel/SM), and T-Kernel/Debugger Support (T-Kernel/DS).

T-Kernel/Operating System (T-Kernel/OS) provides the following functions:
- Task Management Functions
- Task-Dependent Synchronization Functions
- Task Exception Handling Functions
- Synchronization and Communication Functions
- Extended Synchronization and Communication Functions
- Memory Pool Management Functions
- Time Management Functions
- Domain Management Functions
- Interrupt Management Functions
- System Status Management Functions
- Subsystem Management Functions

T-Kernel/System Manager (T-Kernel/SM) provides the following functions:
- System Memory Management Functions
- Address Space Management Functions
- Device Management Functions
- Interrupt Management Functions
- I/O Port Access Support Functions
- Interprocessor Management Functions
- Power Management Functions
- System Configuration Information Management Functions

T-Kernel/Debugger Support (T-Kernel/DS) provides the following functions, exclusively for use by the debugger:
- Kernel Internal State Reference Functions
- Trace Functions
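As 1.3.3 notes, user programs issue the same service calls as on a single processor system and never refer to individual processors. Below is a minimal sketch of creating and starting a task, written identically for SMP T-Kernel and single processor T-Kernel; it assumes the task management interface of the T-Kernel 1.00 Specification (tk_cre_tsk, tk_sta_tsk), and the priority, stack size, and header name are illustrative assumptions.

    #include <tk/tkernel.h>   /* T-Kernel definitions (header name may vary by implementation) */

    /* Task entry function: a T-Kernel task receives a start code and an extended
     * information pointer, and must end with tk_ext_tsk() or tk_exd_tsk(). */
    static void sample_task(INT stacd, void *exinf)
    {
        /* ... task processing ... */
        tk_ext_tsk();              /* exit the task (RUN -> DORMANT) */
    }

    /* Create and start a task; no processor is named anywhere. */
    ID start_sample_task(void)
    {
        T_CTSK ctsk;
        ID     tskid;

        ctsk.exinf   = NULL;
        ctsk.tskatr  = TA_HLNG | TA_RNG0;  /* high-level language start, protection level 0 */
        ctsk.task    = (FP)sample_task;
        ctsk.itskpri = 10;                 /* initial priority (example value) */
        ctsk.stksz   = 4096;               /* stack size in bytes (example value) */

        tskid = tk_cre_tsk(&ctsk);         /* NON-EXISTENT -> DORMANT */
        if (tskid > 0) {
            tk_sta_tsk(tskid, 0);          /* DORMANT -> READY (then RUN when scheduled) */
        }
        return tskid;
    }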

Chapter 2 Concepts Underlying the SMP T-Kernel Specification 2.1 Definition of Basic Terminology The basic terminology is provided at the inception of the SMP T-Kernel Specification. These terms are common with the T-Kernel 1.00 Specification. 2.1.1 Implementation-Related Terminology (1) Implementation-defined There are items that have not been standardized as specifications. Therefore, specifications must be stipulated for each implementation. Specific implementation details must be noted in the implementation specification. In application programs, portability is not assured for sections which are dependent on implementation- defined items. (2) Implementation-dependent In the specification, something that is implementation-dependent refers to an item which changes behavior according to the target system or the operating conditions of the system. Behavior must be described for each implementation and specific implementation details must be noted in the implementation specification. In application programs, the sections which depend on implementation-dependent items basically need to be changed when porting. 2.1.2 System-Related Terminology (1) Device Driver A device driver is a program that mainly controls hardware. Each device driver is placed under the management of T-Kernel, and the interface between T-Kernel and the device driver is stipulated in the specifications of T-Kernel. Furthermore, the standard specification of device drivers is stipulated as the T-Kernel Standard Device Driver Specification. (2) Subsystem A subsystem is a program that realizes extended system calls (extension SVC), and extends the functions of T-Kernel. The subsystem is placed under the management of T-Kernel, and the interface between T-Kernel and the subsystem is stipulated in the T-Kernel Specification. (3) T-Monitor T-Monitor is a program that mainly performs hardware initialization, system startup, exception and interrupt handling, and provision of basic debugging functions. Initially, T-Monitor starts when the hardware power is turned on (system reset). T-Monitor then initializes the necessary hardware, and starts T-Kernel. T-Monitor is not part of T-Kernel and is not included in the T-Kernel Specification. (4) T-Kernel Extension T-Kernel Extension is a program for extending the functions of T-Kernel and realizes the functions of a more sophisticated OS. T-Kernel Extension has some specifications including T-Kernel Standard Extension as the standard specification. T-Kernel Standard Extension is implemented as a subsystem of T-Kernel and provides file system and process management functions. The realization of functions of a more sophisticated OS becomes possible by combining these T-Kernel Extensions with T-Kernel. Moreover, an OS with different functions can be realized by replacing T-Kernel Extension. (5) Application and System Software An application is a program created by the user on system software. System software is a program for operating applications, and it is divided into the hierarchy of T-Monitor, T-Kernel, and T-Kernel Extension from the standpoint of the application. However, T-Monitor and T-Kernel Extension do not always exist. Finally, device drivers are handled as part of T-Kernel. 12 TEF021-S002-01.00.00/en

(6) Kernel Object A resource which is an operational object of T-Kernel is called a Kernel Object or Object for short. Execution programs such as tasks and synchronization handlers and resources for synchronization and communication such as semaphores and event flags are all Kernel Objects. The Kernel Object is identified by a numerical ID. For example, the Task ID identifies a task. All Object IDs are dynamically and automatically allocated in T-Kernel during program execution. 2.1.3 Meaning of Other Basic Terminology (1) Task, invoking task The basic logical unit of concurrent program execution is called a task. While instructions within one task are executed in sequence, instructions within different tasks can be executed in parallel. This concurrent processing is a conceptual view from the standpoint of applications. In reality, multiple executing tasks cannot exceed the number of processors and be truly executed concurrently. In such cases, processing is accomplished by time-sharing among tasks as controlled by the kernel. A task that invokes a system call is called the invoking task. (2) Dispatch, dispatcher The switching of tasks executed by the processor is called dispatching (or task dispatching). The kernel mechanism by which dispatching is realized is called a dispatcher (or task dispatcher). (3) Scheduling, scheduler The processing to determine which task to execute next is called scheduling (or task scheduling). The kernel mechanism by which scheduling is realized is called a scheduler (or task scheduler). Generally a scheduler is implemented inside system call processing or in the dispatcher. (4) Context The environment in which a program runs is generally called context. For two contexts to be called identical, at the very least, the processor operation mode (Execution mode of the program stipulated by the processor such as privilege and user) must be the same and the stack space must be the same (part of the same contiguous memory area). Note that context is a conceptual view from the standpoint of applications; even when processing must be executed in independent contexts, in actual implementation both contexts may sometimes use the same processor operation mode and the same stack space. (5) Precedence The relationship among different execution requests that determines their order of execution is called precedence. When a higher-precedence execution request becomes ready for execution while a low-precedence execution request is satisfied and is in progress, as a general rule, the higher-precedence execution request is run ahead of the other request (6) API and System Calls The standard interface to call functions provided by T-Kernel from the application and middleware is collectively called API (Application Program Interface). API includes those which are realized as macros and libraries in addition to system calls that call the OS functions directly. [Additional Note] Priority is a parameter assigned by an application to control the order of task or message processing. Precedence, on the other hand, is a concept in the specification used to make clear the order in which processing is to be executed. Precedence relation among tasks is determined based on task priority. 13 TEF021-S002-01.00.00/en

2.2 SMP T-Kernel System

2.2.1 Processor

An SMP system is configured with multiple processors. Each processor is identified by a processor ID number. Processor ID numbers are consecutive numbers beginning with 1 and are designated statically as one item of system configuration information during system construction. When a specific processor operates at system startup, the ID number of that processor is 1. The allocation of the other numbers is implementation-defined.

SMP T-Kernel does not distinguish between processors as far as the application is concerned, so regular applications do not need to be aware of individual processors. Processor IDs are used to specify a particular processor only when it is absolutely necessary.

2.2.2 Processor and SMP T-Kernel

The processors in an SMP system are managed by one SMP T-Kernel. For example, the management of kernel objects, task scheduling, system management such as devices and subsystems, and management of resources such as memory are all handled uniformly by one SMP T-Kernel. Tasks are dynamically allocated to the processors by SMP T-Kernel. Applications do not need to be aware of the processor on which a program is executed, the number of processors in the system, and so on. However, in the following special cases, programs do need to be aware of individual processors.

(1) Control at a level close to hardware, such as interrupts and device control
(2) Specification of the execution processor of a task

[Additional Notes]
User programs are relieved of multiprocessor control. Processors are not distinguished by SMP T-Kernel, and user programs therefore do not depend on the number of processors. Programs can be compatible at the source code level both across SMP T-Kernel systems with different numbers of processors and with single processor T-Kernel.

Conversely, hardware dependency decreases portability, so sufficient attention must be paid to programs that are aware of individual processors. Moreover, it is difficult to control each processor individually in SMP systems. If there is a need to control each processor proactively and individually, the use of AMP T-Kernel should be examined.

2.2.3 Differences With Single Processor Systems

The system configuration of SMP T-Kernel is the same as that of single processor T-Kernel in that the system contains a kernel and an application. The main difference between SMP T-Kernel and single processor T-Kernel is that, as seen from the application, multiple tasks and handlers may literally be executed at the same time. This has the following consequences.

(1) Multiple tasks in RUN state exist
The maximum number of running tasks in single processor T-Kernel is one. In SMP T-Kernel, however, as many tasks as there are processors can be in RUN state. Therefore, a running task may be controlled, directly or indirectly, by other running tasks; this is not possible in single processor systems. It also follows that, while a certain task is executing, a task with lower priority than the executing task may also be executed, and the currently executing task may be influenced by this.

(2) Other tasks and handlers may be executed even while a handler is executing
In single processor T-Kernel, the various handlers such as interrupt handlers take precedence over executing tasks, and no task is executed while a handler is executing. In SMP T-Kernel, however, tasks and other handlers may be executed even while a handler is executing.
[Additional Notes] Single processors can be thought of as a special state with only one processor in SMP. Therefore, programs that do not depend on the number of processors which operate under SMP T-Kernel can also be operated under single processor T-Kernel. In other words, it is possible to write programs that are compatible between single processor T-Kernel and SMP T-Kernel. However, there is a possibility that existing single processor T-Kernel programs may implicitly conduct mutual exclusion or synchronization control by using the priority of the tasks because they are not aware of multiprocessors. Moreover, embedded systems have the strong tendency of excluding unnecessary controls. Therefore, attention must be paid when porting existing single processor T-Kernel programs to SMP T-Kernel. This is explained further in the following examples. 14 TEF021-S002-01.00.00/en

In single processor T-Kernel applications, while a certain task is executing, tasks with lower priority are never executed, and no task is executed while a handler is executing. Based on these properties, implicit mutual exclusion control using priority is often relied upon. In SMP T-Kernel, implicit mutual exclusion control by precedence does not work in principle, so system calls must be invoked explicitly to perform mutual exclusion control.

Likewise, in single processor T-Kernel applications tasks are executed one at a time, so the execution order can be predicted from task priorities and used for task synchronization. In SMP T-Kernel, explicit synchronization control is necessary, because the execution order of tasks changes not only with task priority but also with the number of processors.

An example of how the execution order of tasks differs between single processor T-Kernel and SMP T-Kernel is given below. Here, Task B and Task C are started by Task A, and the task priorities are Task A > Task B > Task C. In single processor T-Kernel, the tasks are executed sequentially, one at a time, according to priority [Figure 2(a)]. In SMP T-Kernel, Task B may begin execution immediately when it is started by Task A, without waiting for the completion of Task A. For example, if there are three or more processors and there are no tasks with higher priority than Task A, B, and C, then Task B and Task C are executed immediately when they are started [Figure 2(b)].

Figure 2(a) Example of task execution by a single processor T-Kernel (Task A, Task B, and Task C run one at a time in priority order)

Figure 2(b) Example of task execution by SMP T-Kernel (Task B and Task C begin running as soon as they are started, without waiting for Task A to complete)

Here, suppose the processing of Task A must have finished by the time Task B starts running, and the processing of Task A and Task B must have finished by the time Task C starts running. Under single processor T-Kernel, processing is executed as expected in this example even without special synchronization control. Under SMP T-Kernel, however, explicit synchronization control must be performed using system calls.

Programs that explicitly perform mutual exclusion control and synchronization control can operate regardless of the number of processors, including on single processor T-Kernel. If portability is a consideration, it is better to perform mutual exclusion control and synchronization control explicitly.
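To make the ordering in this example explicit under SMP T-Kernel, a synchronization system call can be inserted at each hand-over point. Below is a minimal sketch, assuming the task-dependent synchronization interface of the T-Kernel 1.00 Specification (tk_slp_tsk, tk_wup_tsk); the variable tskid_b and the surrounding structure are hypothetical and only illustrate the idea.

    #include <tk/tkernel.h>

    static ID tskid_b;             /* assumed to hold the ID returned by tk_cre_tsk() for Task B */

    /* Task B: wait until Task A reports that its processing has finished. */
    static void task_b(INT stacd, void *exinf)
    {
        tk_slp_tsk(TMO_FEVR);      /* WAIT state until a wakeup request arrives */
        /* ... processing of Task B, now guaranteed to follow Task A's processing ... */
        tk_ext_tsk();
    }

    /* Task A: start Task B, finish its own processing, then wake Task B up.
     * Under SMP T-Kernel, Task B may begin running on another processor as soon
     * as it is started, so the wakeup makes the required ordering explicit. */
    static void task_a(INT stacd, void *exinf)
    {
        tk_sta_tsk(tskid_b, 0);
        /* ... processing of Task A ... */
        tk_wup_tsk(tskid_b);       /* explicit synchronization: allow Task B to proceed */
        tk_ext_tsk();
    }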

2.3 Task States and Scheduling Rules 2.3.1 Task States States of individual tasks of SMP T-Kernel are from the same as those of the T-Kernel 1.00 Specification. However, it must be noted that the number of tasks in RUN state can be up to the number of processors in SMP T-Kernel while only one task is the RUN state in the T-Kernel 1.00 Specification that operates on a single processor. Task states are classified primarily into the five below. Of these, the Wait state in the broad sense is further classified into three states. Saying that a task is in a Run state means it is in either RUN state or READY state. (a) RUN state The task is currently being executed. When a task-independent portion is executing, except when otherwise specified, the task that was executing prior to the start of task-independent portion execution is said to be in RUN state. (b) READY state The task has completed preparations for running, but cannot run because a task with higher precedence is running. In this state, the task is able to run whenever it becomes the task with a higher precedence than tasks currently running. (c) Wait states The task cannot run because conditions for running are not in place. In other words, the task is waiting for the conditions for its execution to be met. While a task is in one of the Wait states, the program counter, register values, and other information representing the program execution state are saved. When the task resumes running from this state, the program counter, registers and other values revert to their values immediately prior to going into the Wait state. This state is subdivided into the following three states. (c.1) WAIT state Execution is stopped because a system call was invoked that interrupts execution of the invoking task until some condition is met. (c.2) SUSPEND state Execution was forcibly interrupted by another task. (c.3) WAIT-SUSPEND state The task is both in WAIT state and SUSPEND state at the same time. WAIT-SUSPEND state results when another task requests suspension of a task already in WAIT state. T-Kernel makes a clear distinction between WAIT state and SUSPEND state. A task cannot go to SUSPEND state on its own. (d) DORMANT state The task has not yet been started or has completed execution. While a task is in DORMANT state, information representing its execution state is not saved. When a task is started from DORMANT state, execution starts from the task start address. Except when otherwise specified, the register values are not saved. (e) NON-EXISTENT state A virtual state before a task is created, or after it is deleted, and is not registered in the system. Depending on the implementation, there may also be transient states that do not fall into any of the above categories (see section 2.4). When a task going to READY state has higher precedence than the currently running task, a dispatch may occur at the same time as the task goes to READY state and it may make an immediate transition to RUN state. In such a case, the task that was in RUN state up to that time is said to have been preempted by the task newly going to RUN state. Note also that in explanations of system call functions, even when a task is said to go to READY state, depending on the task precedence it may go immediately to RUN state further. Task starting means transferring a task from DORMANT state to READY state. A task is therefore said to be in a started state if it is in any state other than DORMANT or NON-EXISTENT. 
Task exit means that a task in a started state goes to DORMANT state. Task wait release means that a task in WAIT state goes to READY state, or a task in WAIT-SUSPEND state goes to SUSPEND state. The resumption of a suspended task means that a task in SUSPEND state goes to READY state, or a task in 17 TEF021-S002-01.00.00/en

WAIT-SUSPEND state goes to WAIT state. Task state transitions in a typical implementation are shown in Figure 3. Depending on the implementation, there may be other states besides those shown here. 18 TEF021-S002-01.00.00/en

Figure 3: Task State Transitions (diagram). The states shown are RUN, READY, WAIT, SUSPENDED, WAIT-SUSPENDED, DORMANT, and NON-EXISTENT, with transitions for dispatching/preemption, wait condition/release wait, suspend (tk_sus_tsk), resume (tk_rsm_tsk, tk_frsm_tsk), create (tk_cre_tsk), start (tk_sta_tsk), exit (tk_ext_tsk), exit and delete (tk_exd_tsk), terminate (tk_ter_tsk), and delete (tk_del_tsk).

A feature of T-Kernel is the clear distinction made between system calls that perform operations affecting the invoking task and those whose operations affect other tasks (see Table 1). The reason for this is to clarify task state transitions and facilitate understanding of system calls.

Table 1: State Transitions Distinguishing Invoking Task and Other Tasks
  Task transition to a wait state (including SUSPEND):
    Operations in invoking task:  tk_slp_tsk (RUN -> WAIT)
    Operations on other tasks:    tk_sus_tsk (RUN, READY, WAIT -> SUSPEND, WAIT-SUSPEND)
  Task exit:
    Operations in invoking task:  tk_ext_tsk (RUN -> DORMANT)
    Operations on other tasks:    tk_ter_tsk (RUN, READY, WAIT -> DORMANT)
  Task deletion:
    Operations in invoking task:  tk_exd_tsk (RUN -> NON-EXISTENT)
    Operations on other tasks:    tk_del_tsk (DORMANT -> NON-EXISTENT)

[Additional Notes]
WAIT state and SUSPEND state are orthogonally related in that a request for transition to SUSPEND state cannot have any effect on the conditions for task wait release. That is, the task wait release conditions are the same whether the task is in WAIT state or WAIT-SUSPEND state. Thus even if transition to SUSPEND state is requested for a task that is in a state of waiting to acquire some resource (semaphore resource, memory block, etc.), and the task goes to WAIT-SUSPEND state, the conditions for allocation of the resource do not change but remain the same as before the request to go to SUSPEND state.

[Rationale for the Specification]
The reason the T-Kernel specification makes a distinction between WAIT state (wait caused by the invoking task) and SUSPEND state (wait caused by another task) is that these states sometimes overlap. By distinguishing this overlapped state as WAIT-SUSPEND state, the task state transitions become clearer and system calls are easier to understand. On the other hand, since a task in WAIT state cannot invoke a system call, different types of WAIT state (e.g., waiting for wakeup, or waiting to acquire a semaphore resource) will never overlap. Since there is only one kind of wait state caused by another task (SUSPEND state), the T-Kernel specification treats overlapping of SUSPEND states as nesting, thereby achieving clarity of task state transitions.
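As a concrete illustration of Table 1, the sketch below shows one task forcibly suspending and then resuming another task, while the target task enters WAIT state only through its own call. It assumes the system call interface of the T-Kernel 1.00 Specification (tk_sus_tsk, tk_rsm_tsk, tk_slp_tsk); the function names and task IDs are hypothetical.

    #include <tk/tkernel.h>

    /* Controlling task: forcibly suspend and later resume another task.
     * If the target is already in WAIT state, tk_sus_tsk puts it into
     * WAIT-SUSPEND state; the wait release condition itself is unchanged. */
    void pause_and_resume(ID target)
    {
        tk_sus_tsk(target);        /* RUN/READY -> SUSPEND, or WAIT -> WAIT-SUSPEND */
        /* ... the target task does not run during this interval ... */
        tk_rsm_tsk(target);        /* SUSPEND -> READY, or WAIT-SUSPEND -> WAIT */
    }

    /* Target task: puts itself into WAIT state on its own; only the invoking
     * task can make this transition (RUN -> WAIT), as Table 1 shows. */
    static void target_task(INT stacd, void *exinf)
    {
        for (;;) {
            tk_slp_tsk(TMO_FEVR);  /* wait until some other task calls tk_wup_tsk() */
            /* ... process one wakeup request ... */
        }
    }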

2.3.2 Task Scheduling Rules When the priority level of a task is changed due to a system call, etc. in T-Kernel, task scheduling is performed. A dispatch occurs when a task in RUN state changes its state due to scheduling. Task scheduling is a preemptive priority-based scheduling based on priority levels assigned to each task. Task scheduling between tasks having the same priority is done on a FCFS (First Come First Served) basis. The task scheduling of SMP T-Kernel uses a similar method to single processor T-Kernel. However, in SMP T-Kernel, it is different from single processor T-Kernel in that multiple tasks can be in RUN state at the same time. The following paragraphs will first explain task scheduling in single processor T-Kernel and then will explain task scheduling in SMP T-Kernel. (1) Task scheduling in single processor T-Kernel Task scheduling in single processor T-Kernel is equal to task scheduling in the special case of SMP T-Kernel with only one processor. Task scheduling is conducted as follows: Precedence is given to tasks that can be executed. Among tasks having different priorities, a task having higher priority has higher precedence. Among tasks having the same priority, the one first going to a run state (RUN state or READY state) has the highest precedence. It is possible, however, to use a system call to change the precedence relation among tasks having the same priority. The task with the highest precedence goes to RUN state, and other tasks goes to READY state. When the task with the highest precedence changes from one task to another, a dispatch occurs immediately and the task in RUN state is switched. If dispatching is not allowed, however, the switching of the task in RUN state is held off until dispatch occurs. In other words, tasks that can be executed are considered to be in a queue according to precedence. If the change in the precedence relation among tasks is allowed and the first task in the queue is thus replaced, dispatch occurs. The task scheduling in single processor T-Kernel is described using the example in Figure 4. Figure 4(a) illustrates the precedence relation among tasks after Task A of priority 1, Task E of priority 3, and Tasks B, C and D of priority 2 are started in that order. The task with the highest precedence, Task A, goes to RUN state. When Task A ends, Task B with the next-highest precedence goes to RUN state (Figure 4(b)). When Task A is again started, Task B is preempted and reverts to READY state; but since Task B went to a run state earlier than Task C and Task D, it still has the highest precedence among tasks having the same priority. In other words, the task precedence reverts to that in Figure 4(a). Next, consider what happens when Task B goes to WAIT state in the conditions in Figure 4(b). Since task precedence is defined among tasks that can be run, the precedences of tasks become as shown in Figure 4(c). Thereafter, when the Task B s wait state is released, Task B goes to RUN state after Task C and Task D, and thus will have the lowest precedence among tasks of the same priority (Figure 4(d)). 21 TEF021-S002-01.00.00/en

Figure 4(a): Precedence in Initial State (priority 1: Task A; priority 2: [Task B] [Task C] [Task D]; priority 3: [Task E])

Figure 4(b): Precedence After Task B Goes To RUN State (priority 1: none; priority 2: Task B [Task C] [Task D]; priority 3: [Task E])

Figure 4(c): Precedence After Task B Goes To WAIT State (priority 1: none; priority 2: Task C [Task D]; priority 3: [Task E])

Figure 4(d): Precedence After Task B WAIT State Is Released (priority 1: none; priority 2: [Task C] [Task D] [Task B]; priority 3: [Task E])

Summarizing the above: immediately after a task that went from READY state to RUN state reverts to READY state, it has the highest precedence among tasks of the same priority; but after a task goes from RUN state to WAIT state and the wait is then released, its precedence is the lowest among tasks of the same priority.

(2) Task scheduling in SMP T-Kernel
The difference between task scheduling in SMP T-Kernel and in single processor T-Kernel is that in single processor T-Kernel only the task with the highest precedence goes to RUN state, whereas in SMP T-Kernel as many tasks as there are execution processors go to RUN state, in order of precedence. The number of processors that can execute tasks equals the number of processors that make up the SMP system. Task scheduling is conducted as follows:

Precedence is given to tasks that can be executed. The rules regarding precedence are the same as those in single processor T-Kernel. In order of precedence, as many tasks as there are processors go to RUN state, and the other tasks go to READY state. When the precedence changes and a task appears with higher precedence than any of the tasks currently in RUN state, a dispatch occurs immediately and the tasks in RUN state are switched. However, when a task in RUN state is in a state where dispatching is not allowed, the switching of that task is held off until dispatching is allowed.

Task scheduling in SMP T-Kernel is described using the example in Figure 5. Assume that there are two processors; the single processor T-Kernel example used in Figure 4 is treated here under SMP T-Kernel with two processors. Figure 5(a) illustrates the precedences of the tasks after Task A of priority 1, Task E of priority 3, and Tasks B, C and D of priority 2 are started in that order. In this state, two tasks, Task A and Task B, are put in RUN state in order of precedence (two being the number of processors). When Task A exits, the tasks with the next highest precedence are in RUN state: Task B continues in RUN state and Task C goes to RUN state, as shown in Figure 5(b). Thereafter, when Task A is started again, Task C is preempted and reverts to READY state; but at this time there is no change in the precedences of Task B, Task C, and Task D. That is, the precedence relation among the tasks reverts to that shown in Figure 5(a).

Next, consider what happens when Task B goes to WAIT state under the conditions in Figure 5(b). Since task precedence is defined among tasks that can run, the precedence relation among the tasks becomes as shown in Figure 5(c). Thereafter, when Task B's wait state is released, Task B goes to a run state after Task C and Task D, and thus has the lowest precedence among tasks of the same priority (Figure 5(d)).

Figure 5(a): Precedence in Initial State (priority 1: Task A; priority 2: Task B [Task C] [Task D]; priority 3: [Task E])

Figure 5(b): Precedence After Task A Ends (priority 1: none; priority 2: Task B Task C [Task D]; priority 3: [Task E])

Figure 5(c): Precedence After Task B Goes To WAIT State (priority 1: none; priority 2: Task C Task D; priority 3: [Task E])

Figure 5(d): Precedence After Task B WAIT State Is Released (priority 1: none; priority 2: Task C Task D [Task B]; priority 3: [Task E])

In SMP T-Kernel, the dispatch of multiple tasks may occur in a single scheduling step. When this happens, the dispatch of each task is synchronized. For example, when the states of multiple tasks are changed by a single system call, all of the task state transitions are finished by the time this system call returns. However, for processors executing the task-independent portion, such as interrupt handlers, dispatch is delayed, because tasks cannot be dispatched until the task-independent portion ends. Tasks which are not dispatched, in other words tasks which remain continuously in RUN state on the same processor, are not affected by the dispatch of other tasks.

Tasks in dispatch-disabled state are excluded from scheduling. Therefore, a task in dispatch-disabled state always continues in RUN state on the same processor, even after scheduling.

[Additional Notes]
According to the scheduling rules adopted in the single processor T-Kernel specification, so long as there is a higher-precedence task in a run state, a task with lower precedence simply does not run. That is, unless the highest-precedence task goes to WAIT state or becomes unable to run for some other reason, other tasks are not run. This is a fundamental difference from TSS (Time Sharing System) scheduling, in which multiple tasks are treated in a fair and equal manner. In the same way, in SMP T-Kernel, low-precedence tasks in READY state are not executed unless a task with higher precedence becomes unable to run, for example by going to WAIT state. It is possible, however, to issue a system call that changes the precedence relation among tasks having the same priority. An application can use such a system call to realize round-robin scheduling, a typical scheduling method used in TSS (a sketch using tk_rot_rdq is shown after the Figure 6 example below).

2.3.3 Task Execution Processor

In SMP T-Kernel, as many tasks as there are execution processors can go to RUN state, in order of task precedence. The processor to which a task is allocated is implementation-defined, and applications do not need to be aware of this information. However, users can specify the execution processor of a task.

In task scheduling it is normally guaranteed that tasks which continue in RUN state remain allocated to the same processor. However, when the scheduling involves tasks for which an execution processor is specified, this guarantee no longer holds; in other words, a task that continues in RUN state may be switched to another processor.

The allocation of tasks to processors is explained using Figure 6, an example in which no execution processor is specified. Here the number of processors is four, and Task A with priority 1, Task B with priority 2, Task C with priority 3 and Task D with priority 4 are in RUN state. None of the tasks has an execution processor specified, and there are no other tasks that can run (Figure 6(a)). Task E with priority 2 is then started. Because of their order of precedence, Task A, Task B, Task C, and Task E go to RUN state and Task D goes to READY state (Figure 6(b)). In other words, a dispatch occurs and Task D and Task E are switched on processor 4.

Prior to and following this scheduling, Task A, Task B, and Task C continue in RUN state. These tasks continue to be allocated to the same processors without being dispatched (Figure 6(c)).

Figure 6(a) Allocation of execution processors - Initial state (RUN: Processor 1 = Task A, priority 1; Processor 2 = Task B, priority 2; Processor 3 = Task C, priority 3; Processor 4 = Task D, priority 4; tasks in READY state: none)

Figure 6(b) Allocation of execution processors - Task E starts and is initialized (RUN: Processor 1 = Task A; Processor 2 = Task B; Processor 3 = Task C; Processor 4 = Task D; Task E of priority 2 is started and enters READY state)

Figure 6(c) Allocation of execution processors - State following a dispatch (RUN: Processor 1 = Task A; Processor 2 = Task B; Processor 3 = Task C; Processor 4 = Task E; tasks in READY state: Task D, priority 4; tasks that continue in RUN state are executed by the same processor)

Next is an example of the case in which the execution processor of a task is specified. In SMP T-Kernel, tasks are normally allocated to processors automatically, but the user can specify the processors on which a task may execute, namely its execution processors. One or more execution processors can be specified when a task is created, and a task with specified execution processors is executed only on those processors. When execution processors are specified, this restriction affects scheduling as follows: during scheduling, the precedence relation between RUN and READY tasks may not be observed in the normal manner and may even be inverted.
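As mentioned in the [Additional Notes] of 2.3.2 above, a system call that rotates the precedence among tasks of the same priority can be used to realize round-robin scheduling. Below is a minimal sketch, assuming the tk_rot_rdq call and the cyclic handler interface of the T-Kernel 1.00 Specification; the priority level, the 10 ms interval, and the header name are illustrative assumptions.

    #include <tk/tkernel.h>

    #define RR_PRI       10        /* priority level to time-slice (example value) */
    #define RR_INTERVAL  10        /* rotation interval in milliseconds (example)  */

    /* Cyclic handler: rotate the precedence of the ready queue for priority RR_PRI.
     * Tasks of that priority in a run state then share the processors round-robin. */
    static void rotate_handler(void *exinf)
    {
        tk_rot_rdq(RR_PRI);
    }

    /* Create and start the cyclic handler that drives the rotation. */
    ID start_round_robin(void)
    {
        T_CCYC ccyc;

        ccyc.exinf  = NULL;
        ccyc.cycatr = TA_HLNG | TA_STA;   /* high-level language handler, start immediately */
        ccyc.cychdr = (FP)rotate_handler;
        ccyc.cyctim = RR_INTERVAL;        /* cycle time (relative time)  */
        ccyc.cycphs = RR_INTERVAL;        /* initial cycle phase         */

        return tk_cre_cyc(&ccyc);
    }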