Concurrent Programming and Implementation Alternatives. Real-Time Systems, Lecture 2.

Content
[Real-Time Control System: Chapter 3]
Martina Maggio, 19 January 2017
Lund University, Department of Automatic Control
www.control.lth.se/course/frtn01

1. Implementation Alternatives
2. Concurrent Programming
   2.1 The need for synchronization
   2.2 Processes and threads
   2.3 Implementation

Historical Implementation Alternatives

Historically, controllers have been implemented in many ways:
- mechanical techniques;
- discrete analog electronics;
- discrete digital electronics.
These techniques are no longer used in industrial practice.

Current Implementation Alternatives

(figure: the alternatives span a spectrum between performance, power and efficiency on one end and flexibility on the other)
- General Purpose Processors (Multicores, Manycores)
- Special Purpose Processors (Microcontrollers, Digital Signal Processors)
- Programmable Hardware (Field Programmable Gate Arrays)
- Application Specific Integrated Circuits
- System on Chip

Microcontroller

A small and cheap computer on a chip, containing a processor core, memory and input/output peripherals.
- In 2015, 8-bit microcontrollers cost $0.311 (1,000 units), 16-bit $0.385 (1,000 units), and 32-bit $0.378 (1,000 units, but $0.35 for 5,000) [1].
- A typical home in a developed country is likely to have 4 general-purpose microprocessors and more than 35 microcontrollers.
- A typical mid-range automobile has at least 30 microcontrollers.

[1] Numbers from https://en.wikipedia.org/wiki/microcontroller and http://www.icinsights.com/news/bulletins/mcu-market-on-migration-Path-To-32bit-And-ARMbased-Devices/.

Small Microcontrollers
- Limited program memory.
- Limited data memory.
- Optimized interrupt latency:
  - main program and interrupt handlers;
  - (periodic) timers;
  - periodic controllers implemented in (timer) interrupt handlers;
  - limited number of timers.

Large Microcontrollers
- Large amount of memory on chip.
- External memory available.
- Real-time kernel supporting multiple threads (and concurrent programming).

Multicores
Single computing entity with two or more processor cores [2].
- Shared memory: tasks running on each of the cores can access data.
- Shared caches, partitioned caches, or separate ones.
- Concurrency should be taken into account.

[2] Multicores typically have from 2 to 16 cores, many-cores exceed 16 cores, see http://www.argondesign.com/news/2012/sep/11/multicore-many-core.

Concurrent Programming

Concurrency
Concurrency is not parallelism [3]: when people hear the word concurrency they often think of parallelism, a related but quite distinct concept. In programming, concurrency is the composition of independently executing processes, while parallelism is the simultaneous execution of (possibly related) computations.

[3] http://blog.golang.org/concurrency-is-not-parallelism, http://vimeo.com/49718712

Concurrent Programming
Concurrent execution can take three different forms:
(1) Multiprogramming: the processes multiplex their execution on a single processor.
(2) Multiprocessing: the processes multiplex their execution on tightly coupled processors (processors sharing memory).
(3) Distributed processing: the processes multiplex their execution on loosely coupled processors (not sharing memory).
Concurrency, or logical parallelism: (1). Parallelism: (2) and (3).
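The distinction can be made concrete with a small Java sketch (not from the lecture; the class and method names are invented). The program below is concurrent by construction: it composes two independently executing activities; whether they are time-multiplexed on one processor (multiprogramming) or actually run simultaneously on different cores (multiprocessing) is decided by the operating system and the hardware, not by the program text.

    public class ComposeTasks {
        // Builds one independently executing activity that reports its progress.
        static Runnable activity(String name) {
            return () -> {
                for (int i = 0; i < 3; i++) {
                    System.out.println(name + " step " + i + " on " + Thread.currentThread().getName());
                    try { Thread.sleep(10); } catch (InterruptedException e) { return; }
                }
            };
        }

        public static void main(String[] args) throws InterruptedException {
            Thread a = new Thread(activity("A"));   // composition of two activities: concurrency
            Thread b = new Thread(activity("B"));
            a.start();
            b.start();
            a.join();
            b.join();
            // Whether the two activities really executed in parallel depends on the cores available:
            System.out.println("available cores: " + Runtime.getRuntime().availableProcessors());
        }
    }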

Concurrent Programming
(figure: tasks A, B and C executing with logical concurrency, realized as time-sharing concurrency with context switches)

Concurrent Programming Approaches
- Real-Time Operating Systems (RTOS):
  - sequential languages (like C) with real-time primitives;
  - real-time kernel for process handling.
- Real-Time Programming Languages (like Ada):
  - the language itself or its runtime kernel provides the functionality of a real-time kernel.

Real-Time Programming
Real-Time Programming requires:
- notion of processes;
- process synchronization;
- inter-process communication (IPC).
(figure: processes A, B and C)

Memory Models
- Shared address space: all the processes can access variables that are global in the address space (requires some care); lightweight processes or threads: less status information, less switching overhead.
- Separate address space: needs explicit ways to handle data that should be shared among the running processes (for example message passing).

Processes vs Threads
(figure: threads within Process 1, Process 2 and Process 3, as in Windows and Solaris)
Some operating systems, like Windows, provide support for threads within processes. Linux does not differentiate between processes and threads.

Process States
(figure: state diagram with the states Ready, Running and Blocked; the scheduler switches between Ready and Running, and waiting for an event moves a running process to Blocked)
- Ready: the process is ready to execute.
- Running: the process is currently executing.
- Blocked: the process is waiting for an event.
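The Ready/Running/Blocked states above can be observed from Java, with the caveat that Java's RUNNABLE state covers both the lecture's Ready and Running. A minimal sketch (not part of the slides) using the standard Thread.getState() method:

    public class ThreadStates {
        public static void main(String[] args) throws InterruptedException {
            final Object lock = new Object();

            Thread worker = new Thread(() -> {
                synchronized (lock) {
                    try {
                        lock.wait();                      // releases the lock and blocks until notified
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });

            System.out.println(worker.getState());        // NEW: created but not yet started
            worker.start();
            Thread.sleep(100);                            // give the worker time to reach wait()
            System.out.println(worker.getState());        // WAITING: the "Blocked" state of the lecture
            synchronized (lock) {
                lock.notify();                            // the event the worker is waiting for
            }
            worker.join();
            System.out.println(worker.getState());        // TERMINATED
        }
    }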

Context Switches
A context switch takes place when the system changes the running process (thread). The context of the process that is currently occupying the processor is stored and the context of the new process to run is restored. The context consists of:
- the stack;
- the content of the programmable registers;
- the content of the status registers.

Priority
Each process is assigned a priority number that reflects the importance of its execution demands. The interval of possible priorities can vary, and they can be ordered from low numbers to high or from high numbers to low.
- STORK: low priority number = high priority (range 1 to max integer).
- Java: high priority number = high priority (range 1 to 10).
- Linux: high priority number = high priority (range 0, 1 to 99, and 100 depending on the scheduler).

Priority
Processes that are ready to execute are stored in the ready queue according to their priorities.
(figure: the running process followed by the ready queue P1, P2, P3, P4, ordered by priority)

Multicore Case
Two alternatives:
- global scheduling: a single ready queue from which all cores take their running processes (figure: Running 1 and Running 2 served from one queue P1, P2, P3, P4);
- partitioned scheduling: one ordinary ready queue per core, often combined with the possibility to move (migrate) processes.

Preemption
A change in the ready queue may lead to a context switch. If the change (insertion, removal) results in a different process being the first in the queue, the context switch takes place. Depending on the scheduler, this can happen in different ways:
- preemptive scheduling: the context switch takes place immediately (the running process is preempted); this is the case in most real-time operating systems and languages (STORK, Java);
- non-preemptive scheduling: the running process continues until it voluntarily releases the processor, then the context switch takes place;
- preemption-point based scheduling: the running process continues until it reaches a preemption point, then the context switch takes place; this was the case in early versions of Linux (no preemption in kernel mode).

Assigning Priorities
Assigning priorities is non-trivial and requires global knowledge. Two ways:
- ad hoc rules: important time requirements = high priority; time-consuming computations = low priority; leads to conflicts, no guarantees;
- scheduling theory: often it is easier to assign deadlines than priorities.
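A minimal Java sketch of the mechanism described above (not from the lecture; the Task record and all names are invented): the ready queue is kept ordered by priority, and inserting a task that beats the running one is what triggers preemption under a preemptive scheduler.

    import java.util.Comparator;
    import java.util.PriorityQueue;

    public class ReadyQueueDemo {
        // Hypothetical stand-in for a process record: a name and a fixed priority.
        record Task(String name, int priority) {}

        // Java/Linux convention from the slides: higher number = higher priority.
        static final Comparator<Task> BY_PRIORITY =
                Comparator.comparingInt(Task::priority).reversed();

        public static void main(String[] args) {
            PriorityQueue<Task> readyQueue = new PriorityQueue<>(BY_PRIORITY);
            Task running = new Task("logger", 3);       // currently "running"

            // Inserting a ready task may lead to a context switch: if the new head of
            // the queue has higher priority than the running task, a preemptive
            // scheduler switches immediately.
            readyQueue.add(new Task("controller", 9));
            Task head = readyQueue.peek();
            if (head != null && BY_PRIORITY.compare(head, running) < 0) {
                System.out.println(running.name() + " is preempted by " + head.name());
                readyQueue.add(running);                // back into the ready queue
                running = readyQueue.poll();            // dispatch the highest-priority task
            }
            System.out.println("running: " + running.name());   // controller
        }
    }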

Dynamic Priorities
Priority-based scheduling is fixed-priority: the priority of a process is fixed a priori. It may be better to have systems where it is the closeness to a deadline that decides which process should be executed: earliest deadline first (EDF) scheduling. The deadline can then be viewed as a dynamic priority that changes as time proceeds (still unusual in commercial systems).

Process Representation
A process consists of:
- the code to be executed: in Java this is the run method;
- a stack: local variables of the process, arguments and local variables of procedures called by the process, and (when suspended) storage space for the values of the programmable registers and program counter;
- a process record (process control block, task control block): administrative information, priority, a pointer to the code of the process, a pointer to the stack of the process, etc.

Memory Organization

    MODULE Main;
    VAR x, z: INTEGER;
        y: POINTER TO INTEGER;

    PROCEDURE Demo(a: INTEGER; b: INTEGER): INTEGER;
    VAR Result: INTEGER;
    BEGIN
      Result := a*b - b*b;
      RETURN Result;
    END Demo;

    BEGIN
      x := 2;
      NEW(y);
      y^ := 7;
      z := Demo(x, y^);
      Write(z);
      DISPOSE(y);
    END Main.

(figure: memory layout with the program area holding the code for Module Main, a data area with the global variables x, y and z, a heap holding the value 7 pointed to by y together with its free list, and a stack; the program counter, stack pointer, heap start and heap end delimit the regions)

When the Demo procedure is called, a stack frame is created and the parameters, the return value and the return address are given stack space. While the procedure executes, it can allocate local variables on the stack. When the procedure terminates, the stack frame is cleared, the return value is returned and the program counter is set to the return address.
(figure: stack frame of Demo with the parameters b and a, the return value, the return address and the local variable Result, in the direction of stack growth towards the stack pointer)

[STORK] Memory Organization: Concurrency

    MODULE Main;

    (* Process *)
    PROCEDURE ProcessA();
      (* Local variable declarations *)
    BEGIN
      LOOP
        (* Code for ProcessA *)
      END;
    END ProcessA;

    (* Process *)
    PROCEDURE ProcessB();
      (* Local variable declarations *)
    BEGIN
      LOOP
        (* Code for ProcessB *)
      END;
    END ProcessB;

    (* Global variable declarations *)

    BEGIN
      (* Code for creating process A and B *)
      (* Wait statement *)
    END Main.

(figure: memory layout while process A is executing and process B is suspended: the program area holds the code for Module Main, the global data area and the heap are shared, and each process has its own stack)

[STORK] Process

    TYPE
      PROCESS    = ADDRESS;
      ProcessRef = POINTER TO ProcessRec;
      Queue      = POINTER TO QueueRec;

      QueueRec = RECORD
        succ, pred : ProcessRef;
        priority   : CARDINAL;
        nexttime   : Time;
      END;

      ProcessRec = RECORD
        head      : QueueRec;
        procv     : PROCESS;
        timer     : CARDINAL;
        stack     : ADDRESS;
        stacksize : CARDINAL;
      END;

(figure: a ProcessRec with its QueueRec head (succ, pred), the procv pointer to the Modula-2 process record containing the stack pointer, the timer field and the stack)

In the global data area there are two pointers: ReadyQueue (pointer to a doubly linked list of processes that are ready to run) and Running (pointer to the currently running process). For each process:
- the pointers that are part of head (QueueRec): successor and predecessor;
- procv points to the Modula-2 process record;
- timer is used for round-robin time slicing.
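To make the idea of the deadline acting as a dynamic priority concrete, here is a minimal Java sketch (not from the lecture; the Job record and its fields are invented). The only difference from a fixed-priority ready queue is the ordering key: jobs are ordered by absolute deadline, so their relative priority changes as time passes and jobs with nearer deadlines arrive.

    import java.util.Comparator;
    import java.util.PriorityQueue;

    public class EdfQueueDemo {
        // Hypothetical task descriptor: a name and an absolute deadline in milliseconds.
        record Job(String name, long absoluteDeadlineMs) {}

        public static void main(String[] args) {
            // EDF: the ready queue is ordered by absolute deadline, earliest first.
            PriorityQueue<Job> readyQueue =
                    new PriorityQueue<>(Comparator.comparingLong(Job::absoluteDeadlineMs));

            long now = System.currentTimeMillis();
            readyQueue.add(new Job("temperature control", now + 50));
            readyQueue.add(new Job("logging",             now + 500));
            readyQueue.add(new Job("alarm handling",      now + 10));

            // The dispatcher always picks the job with the earliest deadline.
            while (!readyQueue.isEmpty()) {
                System.out.println("dispatch " + readyQueue.poll().name());
            }
            // prints: alarm handling, temperature control, logging
        }
    }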

[STORK] Process Creation

    PROCEDURE CreateProcess(proced: PROC; memreq: CARDINAL; name: ARRAY OF CHAR);

    CreateProcess(ProcessA, 2000, "Process A");
    CreateProcess(ProcessB, 2000, "Process B");
    ...
    END Main.

Creates a process from the procedure proced; memreq is the maximum number of memory bytes required by the process stack; name is used for debugging purposes. The call initializes the process record, creates the process by calling the Modula-2 primitive NEWPROCESS, allocates memory on the heap, sets the process priority to 1 (highest priority), and inserts the process into the ReadyQueue.

[STORK] Context Switching
Initiated by a call to Schedule:
- directly by the running process;
- from within an interrupt handler.
It happens when:
- the running process voluntarily releases the CPU;
- the running process performs an operation that may cause a blocked process to become ready;
- an interrupt has occurred, which may have caused a blocked process to become ready.

[STORK] Schedule

    PROCEDURE Schedule;
    VAR oldRunning : ProcessRef;
        oldDisable : InterruptMask;
    BEGIN
      oldDisable := Disable();   (* Disable interrupts *)
      IF ReadyQueue^.succ <> Running THEN
        oldRunning := Running;
        Running := ReadyQueue^.succ;
        TRANSFER(oldRunning^.procv, Running^.procv);
      END;
      Reenable(oldDisable);
    END Schedule;

Operations:
- interrupts are disabled;
- the actual context switch takes place in the Modula-2 primitive TRANSFER, using the Modula-2 process record pointers as arguments;
- interrupts are enabled again.

[STORK] Inside TRANSFER
TRANSFER is entered in the context of one process and left in the context of another process. Operations performed by the TRANSFER procedure:
- Saving: the state of the processor immediately before the switch is saved on the stack of the suspended process.
- Switching: the value of the stack pointer is stored in the process record of the suspended process, and the stack pointer is set to the value that is stored in the process record of the new process.
- Restoring: the state of the resumed process is restored (popped from its stack).

Hyperthreading
Many new architectures support hyperthreading:
- the processor registers are duplicated;
- one set of registers is used for the thread currently being executed, the other set (in the case of two hyperthreads) is used for the thread that is next to be executed (next-highest priority);
- when context switching between these two, there is no need for saving and restoring the context; the processor only needs to change the set of registers that it operates upon, which takes substantially less time.

[JAVA] Java in Real-Time
Three possibilities:
- Ahead-Of-Time (AOT) compilation:
  - Java or Java bytecode to native code;
  - Java or Java bytecode to an intermediate language (C).
- Java Virtual Machine (JVM):
  - as a process in an existing OS (green thread model: the threads are handled internally by the JVM, vs native thread model: the Java threads are mapped onto the threads of the operating system);
  - executing on an empty machine (green thread model).
- Direct hardware support (for example through micro-code).

[JAVA] Execution Models
- Ahead-Of-Time compilation works essentially as STORK.
- JVM with native thread model: each Java thread is executed by a native thread, similar to what was seen for STORK.
- JVM with green thread model: threads are abstractions inside the JVM; the JVM holds within the thread objects all information related to threads (the thread's stack, the current instruction, bookkeeping information); the JVM performs context switching between threads by saving and restoring the contexts; the JVM program counter points at the current instruction of the running thread, while the global program counter points at the current instruction of the JVM.

[JAVA] Thread Creation
A thread can be created in two ways:
- by defining a class that extends (inherits from) the Thread class;
- by defining a class that implements the Runnable interface (defines a run method).
The run method contains the code that the thread executes. The thread is started by a call to the start method.

[JAVA] Thread Creation: extending Thread

    public class MyThread extends Thread {
        public void run() {
            // code executed by the thread
        }
    }

Thread start:

    MyThread m = new MyThread();
    m.start();

[JAVA] Thread Creation: implementing Runnable
Used when the object needs to extend some other class than Thread.

    public class MyClass extends MySuperClass implements Runnable {
        public void run() {
            // code executed by the thread
        }
    }

Thread start:

    MyClass m = new MyClass();
    Thread t = new Thread(m);
    t.start();

Drawback: non-static Thread methods are not directly accessible inside the run method.

[JAVA] Thread Creation: threads as variables

    public class MyThread extends Thread {
        MyClass owner;
        public MyThread(MyClass m) {
            owner = m;
        }
    }

    public class MyClass extends MySuperClass {
        MyThread t;
        public MyClass() {
            t = new MyThread(this);
        }
        public void start() {
            t.start();
        }
    }

Makes it possible for an active object to contain multiple threads. The thread has a reference to the owner (not elegant).
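As a compilable illustration of the "thread as a variable" pattern above, here is a minimal sketch assuming an invented active object called Regulator (the name and the periodic body are not from the slides): the object creates its thread as a variable in the constructor and starts it from its own start method.

    public class Regulator {
        private final Thread t;

        public Regulator() {
            // The thread is a variable owned by the object, as in the slides;
            // a lambda Runnable replaces the separate MyThread class.
            t = new Thread(() -> {
                for (int i = 0; i < 5; i++) {
                    System.out.println("control action " + i);
                    try { Thread.sleep(100); } catch (InterruptedException e) { return; }
                }
            });
        }

        public void start() {
            t.start();   // starting the active object means starting its internal thread
        }

        public static void main(String[] args) {
            new Regulator().start();
        }
    }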

[JAVA] Thread Creation: threads as an inner class

    public class MyClass extends MySuperClass {
        MyThread t;

        class MyThread extends Thread {
            public void run() {
                // code executed by the thread
            }
        }

        public MyClass() {
            t = new MyThread();
        }

        public void start() {
            t.start();
        }
    }

- No owner reference needed.
- MyThread has direct access to variables and methods of MyClass.
- The inner class can be made anonymous.

[JAVA] Thread Priorities
A new thread inherits the priority of the creating thread. The default priority is NORM_PRIORITY, equal to 5. Thread priorities can be changed by calling the non-static Thread method setPriority.

    public class MyClass implements Runnable {
        public void run() {
            Thread t = Thread.currentThread();
            t.setPriority(10);
            // ...
        }
    }

The static currentThread method is used to get a reference to the thread of MyClass.

[JAVA] Thread Termination
A thread terminates when the run method terminates. To stop a thread from some other thread it is possible to use a flag.

    public class MyClass implements Runnable {
        boolean doit = true;

        public void run() {
            while (doit) {
                // do the periodic work of the thread
            }
        }
    }

The thread is stopped by setting the doit flag to false, either directly or through some access method provided by MyClass. Sleeping threads will not terminate immediately.

[LINUX] Scheduling in Linux
Four scheduling classes (schedulers):

SCHED_OTHER (previously called SCHED_NORMAL)
- default scheduler from 2.6.23, based on the Completely Fair Scheduler (CFS);
- not a real-time scheduler;
- round robin-based;
- fairness and efficiency are major design goals;
- tradeoff between low latency (for example for input/output bound processes) and high throughput (for example for compute bound processes);
- threads scheduled with this scheduler have priority 0.

SCHED_RR
- round robin real-time scheduler;
- threads scheduled with this scheduler always have higher priority than threads scheduled with SCHED_NORMAL;
- increasing priorities from 1 to 99;
- after a quantum the thread releases the CPU to the scheduler and the scheduler assigns it to the highest priority ready thread.

SCHED_FIFO
- works as SCHED_RR but without time slicing: the thread does not release the CPU after a quantum; it runs until it terminates, blocks, or voluntarily yields.

SCHED_DEADLINE
- added in Linux 3.14 (as a result of the EU project ACTORS, led by Ericsson in Lund and with the Department of Automatic Control as partner);
- Earliest-Deadline-First scheduler;
- support for CPU reservations;
- threads scheduled with this scheduler have priority 100 (highest): if any SCHED_DEADLINE thread is runnable, it will preempt any thread scheduled under one of the other policies [4].

[4] http://man7.org/linux/man-pages/man7/sched.7.html
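Putting the thread priority and thread termination slides together, here is a minimal runnable sketch (not from the lecture; the class name is invented). The flag is declared volatile and the sleep handles interruption; both are additions for correctness beyond the slides, which only show a plain doit flag.

    public class PeriodicWorker implements Runnable {
        // The slides use a plain boolean flag; volatile is added here (an assumption)
        // so that the update from the stopping thread is guaranteed to be visible.
        private volatile boolean doit = true;

        public void stopIt() {
            doit = false;
        }

        @Override
        public void run() {
            Thread me = Thread.currentThread();
            me.setPriority(Thread.MAX_PRIORITY);   // 10: high priority number = high priority in Java

            while (doit) {
                System.out.println("periodic work");
                try {
                    Thread.sleep(100);             // a sleeping thread only notices the flag when it wakes up
                } catch (InterruptedException e) {
                    return;                        // interrupt gives a faster way out than the flag alone
                }
            }
        }

        public static void main(String[] args) throws InterruptedException {
            PeriodicWorker w = new PeriodicWorker();
            Thread t = new Thread(w);
            t.start();
            Thread.sleep(500);
            w.stopIt();        // ask the worker to terminate; it stops after its current sleep
            t.join();
        }
    }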