Guaranteeing Hard Real Time End-to-End Communications Deadlines


K. W. Tindell, A. Burns, A. J. Wellings
Real Time Systems Research Group, Department of Computer Science, University of York
e-mail: ken@minster.york.ac.uk

ABSTRACT

In a distributed hard real time system communication between tasks on different processors must occur in bounded time. The inevitable communication delay (termed the end-to-end delay) is composed from the delay in transmitting a message on the communications media, and also from the delay in delivering the data to the destination task. This paper gives schedulability analysis bounding the media access delay and the delivery delay, and hence allows the end-to-end delay for a message to be bounded. Two access protocols are considered: TDMA and an 802.5-style token ring. Two approaches are also considered for delivery: an on-demand protocol in which each packet is delivered to the host when it arrives, and a periodic server approach in which the host polls for incoming messages. The schedulability analysis covers all combinations of these access and delivery methods. In addition the effect of incoming messages on the destination processor and its workload is addressed.

1. INTRODUCTION

A hard real time system is often composed from a number of periodic and sporadic tasks which communicate their results by passing messages. In a distributed hard real time system messages are sent between processors across a communications media. In order to guarantee that the timing requirements of all the tasks are met, the total communications delay between sending and receiving a message must be bounded. This total communications delay is often termed the end-to-end delay: the time between a message being queued by the sending task and the message fully arriving at the receiving task [13]. This delay is composed of the access delay, the propagation delay, and the delivery delay (Figure 1). The access delay is the time a message queued at the sending processor spends waiting for the use of the communications media. With a shared media network architecture, a processor must compete with other processors for use of the media. The propagation delay is usually defined as the time taken for the data to reach the destination processor once physically sent by the source processor. In this paper we are concerned with packet networks, and consider propagation delay to mean the time taken between a packet starting to be transmitted and all of the packet finally arriving at the destination processor. The delivery delay is the amount of time taken to process the incoming data and deliver it to destination tasks. This work includes such functions as:

Decoding packet headers
Reassembling multi-packet messages
Copying a message into a destination task message buffer (the buffer may be guarded by a semaphore to ensure mutual exclusion)
Notifying the scheduler of the arrival of a message (the destination task may be blocked awaiting a message).

[Figure 1: The Components of Communications Delay: the access delay, the propagation delay, and the delivery delay, from the source task's network buffer across the communications media to the destination task.]

In practice the delivery delay can be a significant part of the end-to-end delay. Most research into hard real time communications concentrates on protocols bounding the access delay to shared communications media. For example, the MARS project [4, 6] uses a simple Time Division Multiple Access (TDMA) protocol to resolve communications media contention between processors. A simple priority queue can be used to resolve contention between local messages. Strosnider et al [14] provide analysis for the Rate Monotonic [8] scheduling (RMS) algorithm applied to periodic and aperiodic messages sent on an 802.5 token ring. Agrawal et al [1] also apply the Rate Monotonic scheduling approach to the FDDI access protocol. The 802.5 token ring protocol is an example of a global priority scheme: messages are packetised, and before each packet is sent a packet reservation protocol is operated: effectively each processor bids for the right to transmit the next packet, with the processor holding the highest priority* packet in the system gaining the right to transmit that packet next.

When considering end-to-end deadlines, the Rate Monotonic schedulability analysis for the 802.5 token ring is no longer sufficient. The Rate Monotonic approach for 802.5 forces the access deadline for a message m, d_m, to be equal to the periodicity of m, T_m (a message m has a periodicity equal to the period of the task which queues m). After allowing for the delivery time this would require that any end-to-end deadline, e_m, be greater than T_m. Clearly this is inflexible, and a more powerful scheduling approach is needed. The Deadline Monotonic scheduling (DMS) algorithm [7] applied to periodic message scheduling would allow d_m ≤ T_m, and hence permit e_m ≤ T_m (since the DMS approach is a static priority one, with priorities assigned according to d_m, the algorithm is a superset of the RMS algorithm). In this paper we apply DMS to an 802.5-style token ring protocol, and for comparison also apply the algorithm to a simple TDMA protocol.

*The 802.5 standard permits only 8 distinct priority levels, which is normally too few to avoid priority inversion; this paper analyses an 802.5-style protocol where there is always a sufficient number of priority levels.

The paper gives schedulability analysis which bounds the worst-case access delay, D_m, for a message m, where D_m ≤ d_m for a schedulable message. Very little research has addressed the problems of analysing the delivery delay. The MARS project reports on experimental evidence suggesting that DMA cycle-stealing and interrupts from the communications controller can lead to a significant overhead [16]. In this paper we provide new schedulability analysis for two alternative delivery approaches:

An on demand approach (where incoming data is processed as soon as it arrives)
A periodic server approach (where the host processor regularly polls the network buffer in the communications controller for incoming data).

This schedulability analysis is then integrated with the analysis bounding the access delays for the two access protocols to provide bounds on the end-to-end communications delay for a given message m.

The rest of the paper is structured as follows: Section 2 describes an example system and network architecture based on hard real time periodic and sporadic tasks statically assigned to processors connected to a shared communications media via intelligent network controllers. Section 3 presents scheduling theory for the two media access protocols, guaranteeing bounded access times for any given message. Section 4 presents scheduling theory for the two delivery protocols, guaranteeing bounded delivery times. Section 5 discusses some of the issues surrounding the access and delivery protocols, and summarises the schedulability analysis.

2. SOFTWARE AND NETWORK ARCHITECTURE

This section describes the hardware and software of an example network architecture for the purposes of this paper. The architecture is chosen both to represent a realistic network and to provide an example framework for analysis. Both the hardware and assumed software of the example system will be described.

Figure 2 shows the hardware. A number of processors are connected to a shared communications medium, each via a Network Interface Unit (NIU). The NIU provides a low-level packet-based interface to the physical communications medium. The interface between the NIU and the host processor is via a shared network buffer. The buffer is partitioned into outgoing and incoming sections. Outgoing messages are assembled into packets by the host processor and placed in a priority-ordered queue in the outgoing network buffer. Incoming packets are placed in the incoming network buffer by the NIU in FIFO order. The NIU is capable of raising a "packet arrived" interrupt on the host processor. The network buffer has a limited capacity. Data is transferred to and from the network buffer under the control of the host processor by executing move instructions. In other network architectures data is often transferred by Direct Memory Access (DMA). However, DMA is rarely used for speed: modern processors can move data using move instructions almost as fast as a DMA controller. DMA transfers are sporadic, with CPU cycles stolen in a non-deterministic fashion.

In this paper we analyse two alternative media access protocols, and two alternative message delivery protocols. Firstly, the media can be accessed according to an 802.5-style global priority protocol. Packets are copied by the host processor into the network buffer and ordered by priority (we assume an adequate number of priority levels). Before a packet is transmitted by an NIU a packet reservation protocol is performed where each NIU bids for the right to transmit by passing a token. The exact detail of the 802.5 protocol is described by Strosnider et al [14]. We assume that packets are of a fixed size.
The per-packet overheads of the reservation protocol are variable, but bounded. We therefore assume that the time taken to transmit a packet, including the packet reservation overheads, lies in the range [ρ_min .. ρ_max]. The other protocol is based on TDMA, with each NIU allowed to transmit a certain number of packets in its slot. The slot is of a fixed size such that if there are no packets to send the communications medium is idle until the slot finishes; each processor may be assigned a different fixed slot size.

[Figure 2: The Physical Architecture: host processors with shared network buffers (incoming and outgoing sections, the outgoing packet queue priority ordered), each connected through an NIU to the packet network.]

At the end of the slot another NIU can begin transmitting packets. After all NIUs have been given access, the cycle repeats. This is illustrated in Figure 3.

[Figure 3: The Simple TDMA Cycle: the slots of CPU 1, CPU 2 and CPU 3 repeat with a fixed cycle time.]

Note that the TDMA protocol requires clock synchronisation between NIUs to ensure that packet transmission is collision free. Also note that the packet transmission time for the TDMA protocol does not vary (i.e. ρ_min = ρ_max) since there is no packet reservation protocol. Hence the packet transmission time is denoted ρ.
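The outgoing and incoming halves of the shared network buffer behave quite differently (priority ordered versus FIFO). The minimal Python sketch below models that split as described above; the class and method names, and the short driver at the end, are illustrative assumptions rather than anything defined in the paper.

```python
import heapq
from collections import deque
from itertools import count

class NetworkBuffer:
    """Toy model of the shared NIU buffer described above (illustrative only)."""

    def __init__(self):
        self.outgoing = []       # priority-ordered queue: host fills, NIU drains
        self.incoming = deque()  # FIFO queue: NIU fills, host drains
        self._tie = count()      # keeps equal-priority packets in FIFO order

    def queue_packet(self, priority, packet):
        """Host side: place an outgoing packet in priority order (low value = high priority)."""
        heapq.heappush(self.outgoing, (priority, next(self._tie), packet))

    def next_outgoing(self):
        """NIU side: remove the highest priority outgoing packet, if any."""
        return heapq.heappop(self.outgoing)[2] if self.outgoing else None

    def packet_arrived(self, packet):
        """NIU side: append an incoming packet in FIFO order; a real NIU would
        also raise the "packet arrived" interrupt on the host processor."""
        self.incoming.append(packet)

    def take_incoming(self):
        """Host side (sporadic handler or periodic server): FIFO removal."""
        return self.incoming.popleft() if self.incoming else None

buf = NetworkBuffer()
buf.queue_packet(2, "low priority packet")
buf.queue_packet(1, "high priority packet")
print(buf.next_outgoing())   # -> "high priority packet"
```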

"packet arrived" iterrupt is raised by the NIU whe a packet arrives i the etwork buffer. The iterrupt is hadled by the host processor, which releases a sporadic hadler to process the packet. The hadler copies the packet data from the etwork buffer ad assembles a message. The hadler delivers the message to a message buffer owed by the destiatio task. To prevet cocurret access, the message buffer is guarded by a semaphore; the semaphore is locked ad ulocked accordig to the Priority Ceilig Protocol (PCP) [11]. The arrival of a message at the destiatio task may require that the scheduler be iformed of this (the destiatio task could be blocked awaitig the arrival of the message). The hadler might therefore update schedulig tables, markig the destiatio task as ready to ru. The schedulig tables also eed to be guarded by semaphores. I geeral, the hadler may ot have fiished before aother packet arrives. We assume that the curret ivocatio of the hadler is give a higher priority tha subsequet ivocatios of the hadler, ad thus is always allowed to fiish before subsequet packets are processed. Ideed, a sesible implemetatio of the packet hadler would have the hadler repeatedly processig packets util there were oe remaiig, ad the sleepig util a packet arrived (i.e. util the "packet arrived" iterrupt hadler re-released the sporadic packet hadler). The alterative delivery protocol is implemeted with a periodic server. The server is a ordiary periodic task. Whe it is released it examies the etwork buffer for packets, processes each packet util there are oe remaiig, ad the termiates. As with the sporadic hadler the server accesses semaphore-guarded message buffers ad schedulig tables. Tasks residet o host processors are scheduled accordig to the Deadlie Mootoic Schedulig (DMS) algorithm. Each task is assiged a static priority based upo its deadlie (a short deadlie results i a high priority). Some tasks, such as the sporadic hadler or the periodic server, have o deadlie requiremet assiged to them, ad a priority must be chose usig other meas. With the periodic server, for example, the worst-case respose time merely forms part of the ed-to-ed delay for a message; the timig requiremet for schedulability is the ed-to-ed message delay, ot the worst-case respose time of the server, ad hece it is ot meaigful to assig a deadlie to the server. We assume that, per ivocatio, a task ca sed a bouded umber of messages, of bouded size, to fixed destiatios. We also assume that each message m ca have ed-to-ed deadliee m (measured from the time the message is queued to the time it arrives at the destiatio task), ad that m is assiged a static priority accordig to this deadlie (some messages may ot have edto-ed deadlies, ad a priority must be chose usig other meas). To simplify the model we add the restrictio that m must arrive at the destiatio task before the ed of the period of the sedig task (otherwise messages set from successive ivocatios of the sedig task could delay each other). Furthermore, we assume that this iformatio is kow a priori. 3. BOUNDING ACCESS DELAYS This sectio applies DMS schedulig theory to the boudig of access delays for two shared media access protocols: a 802.5-style global packet priority protocol, ad a simple TDMA protocol. Aalysis is also provided which allows the outgoig etwork buffer space requiremets to be bouded. Note that Appedix A cotais a glossary of symbols used throughout this paper. 
A task can queue a message at any point in its execution, and each message m therefore inherits a periodicity T_m from the task. A message m consists of P_m fixed size packets, each of which is given the priority of the message. The packets are placed in a priority-ordered queue in the network buffer; the network controller removes packets from the queue and transmits them according to the network access protocol. Thus the access delay is the time the message m spends waiting in the queue (equivalent to the time from being queued to the time the last packet in m begins transmission), and is denoted Q_m.

3.1. Global Priority (802.5-style) Protocol

Access to the network via the global priority protocol can be considered as access via a notional system-wide network queue: all messages, generated either locally or on different processors, are placed in this notional queue ordered by global priority. In order to find Q_m the interference from system-wide higher priority messages must be found: in the time message m is in the queue it can be preempted by higher priority messages (i.e. higher priority messages can be queued in front of m). The scheduling of deadline monotonic messages is analogous to scheduling deadline monotonic tasks: the periodicity of a message is akin to the period of a task; the number of packets in a message is akin to the worst-case execution time of a task. The interference on a message i from a higher priority message j could be found by calculating the number of times successive instances of message j could be queued in front of a single instance of i. This is given by:

⌈Q_i / T_j⌉ P_j

However, this is not strictly true: a message could be queued anywhere within the worst-case response time of the task queueing the message. This gives rise to the back to back hit problem, indicated by Rajkumar et al [10] (in reference to the problem of voluntary task suspension for a period of time). Figure 4 illustrates the problem: although the task has a period T_i, successive messages can be queued closer together than T_i.

[Figure 4: Back to Back Hits from Successive Messages: within the execution window of task i a message can be queued late in one invocation and early in the next, so a lower priority message queued in between could receive double interference.]

A task could queue a message at the last possible instant in one invocation, and then as soon as possible in the subsequent invocation. If the worst-case response time of the task was equal to the period of the task, two messages could potentially be queued arbitrarily close together, giving rise to a back to back hit. The following equation gives the interference on a message spending time Q_m in the queue, and allows the back to back hit problem to be analysed:

J_m = Σ_{j ∈ hpm(m)} ⌈(Q_m + D_{s(j)}) / T_j⌉ P_j    (1)

s(j): the task which sends message j
hpm(m): the set of messages in the system of higher priority than m
D_{s(j)}: the worst-case response time of the task queueing message j
T_j: the periodicity of j, inherited from the task queueing j
P_j: the number of packets from which message j is composed

The derivation of this equation is given in Appendix B. The Appendix also indicates how to find the worst-case response time of the sending task, D_{s(j)}. Given the interference on a message m the queueing time can be found:

Q_m = ρ_max P_m + J_m

ρ_max: the maximum time taken to transmit a packet. This includes the overhead due to the packet reservation protocol before data transmission takes place, and the time taken to physically transmit the data across the network.

Note that in the 802.5 protocol the overheads when transmitting a packet can vary, and hence the packet transmission time ρ can vary. We define ρ_min to be the smallest packet transmission time (i.e. the time taken to transmit the data of a full packet), and ρ_max to be the largest value of ρ (i.e. when the maximum overheads are incurred). In order to calculate Q_m the interference J_m must be found, and vice versa. Hence the equations are mutually dependent; a solution can be found by iteration:

Q_m^{n+1} = ρ_max P_m + Σ_{j ∈ hpm(m)} ⌈(Q_m^n + D_{s(j)}) / T_j⌉ P_j

An initial value Q_m^0 of 0 is suitable. It can be shown that Q_m^{n+1} ≥ Q_m^n, and hence the iteration is guaranteed to either converge (i.e. Q_m^{n+1} = Q_m^n) or exceed a threshold value (such as e_m).
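The recurrence above is a plain fixed-point computation, so a short sketch may help make it concrete. The Python below is a hedged rendering of the Section 3.1 iteration; the data layout (tuples of (P_j, T_j, D_s(j)) for the higher priority messages) and the example numbers are invented for illustration.

```python
from math import ceil

def access_delay_global(P_m, rho_max, hp_messages, threshold):
    """Iterate Q_m^{n+1} = rho_max*P_m + sum_j ceil((Q_m^n + D_s(j))/T_j) * P_j
    for the 802.5-style global priority protocol.

    hp_messages: iterable of (P_j, T_j, D_sj) for every higher priority
                 message j in the system (the set hpm(m)).
    threshold:   give up once Q_m exceeds this value (e.g. the end-to-end
                 deadline e_m), i.e. the message is unschedulable.
    Returns the converged Q_m, or None if the threshold is exceeded.
    """
    Q = 0                                     # Q_m^0 = 0 is a suitable start
    while True:
        J = sum(ceil((Q + D_sj) / T_j) * P_j for (P_j, T_j, D_sj) in hp_messages)
        Q_next = rho_max * P_m + J
        if Q_next > threshold:
            return None                       # exceeded e_m: unschedulable
        if Q_next == Q:                       # converged (Q_m^{n+1} == Q_m^n)
            return Q
        Q = Q_next

# Invented figures: a 3-packet message competing with two higher priority messages.
print(access_delay_global(P_m=3, rho_max=2,
                          hp_messages=[(2, 100, 10), (1, 50, 5)],
                          threshold=200))    # -> 9
```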

3.2. Local Priority (TDMA) Protocol

The simple TDMA protocol, which resolves message contention locally, and global contention via a round-robin protocol, can be analysed in a similar way to that above. Firstly, the total number of packets transmitted in a single TDMA cycle is given by:

P_TDMA = Σ_{p=1}^{N} S_p

N: the total number of processors given slots in the protocol
S_p: the slot size, in packets, of processor p

The TDMA cycle time, T_TDMA, is therefore given by ρ P_TDMA. Note that for the TDMA protocol ρ_min = ρ_max, since there is no packet reservation protocol and hence no variation in packet transmission time. Hence ρ is used to denote the packet transmission time. The interference a message m queued on processor p experiences is similar to that experienced in the global protocol:

Σ_{j ∈ lhpm(m)} ⌈(Q_m + D_{s(j)}) / T_j⌉ P_j

However, lhpm(m) denotes the set of messages of higher priority than m queued on processor p, i.e. higher priority messages queued locally. The interference from messages at other processors can be modelled as a single high priority message with periodicity equal to T_TDMA, composed from P_TDMA - S_p packets: the messages queued at processor p are only transmitted in the remaining time after the other processors have sent their packets. The interference on a message m queued at processor p is thus given by:

J_m = Σ_{j ∈ lhpm(m)} ⌈(Q_m + D_{s(j)}) / T_j⌉ P_j + ⌈Q_m / T_TDMA⌉ (P_TDMA - S_p)

As can be seen, the local priority protocol causes priority inversion: the messages at other processors are always treated as higher priority than local messages. The queueing time Q_m can be found iteratively, in the same way as for the global priority protocol:

Q_m^{n+1} = ρ P_m + Σ_{j ∈ lhpm(m)} ⌈(Q_m^n + D_{s(j)}) / T_j⌉ P_j + ⌈Q_m^n / T_TDMA⌉ (P_TDMA - S_p)

3.3. Bounding Network Buffer Space Used

We now turn to the problem of bounding the network buffer space usage for both access protocols. Recall that we have the following message deadline requirement:

D_{s(m)} + e_m ≤ T_m

Hence messages sent from successive invocations of the sending task cannot interfere, and thus for both the local and global priority access protocols the worst-case outgoing network buffer space requirement on processor p occurs at a critical instant (where all possible outgoing messages are queued together). The worst-case buffer space requirement, in packets, is therefore given by:

Σ_{m ∈ om(p)} P_m

om(p): the set of outgoing messages from processor p
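The TDMA recurrence and the outgoing buffer bound of Section 3.3 can be sketched the same way. As before, the function names, data layout, and numbers are illustrative assumptions.

```python
from math import ceil

def access_delay_tdma(P_m, rho, S_p, P_TDMA, local_hp_messages, threshold):
    """Iterate the Section 3.2 recurrence for a message queued on processor p:
    Q^{n+1} = rho*P_m + sum_j ceil((Q^n + D_s(j))/T_j)*P_j
                      + ceil(Q^n / T_TDMA) * (P_TDMA - S_p)

    local_hp_messages: (P_j, T_j, D_sj) for higher priority messages queued
                       on the same processor p (the set lhpm(m)).
    """
    T_TDMA = rho * P_TDMA                        # TDMA cycle time
    Q = 0
    while True:
        J = sum(ceil((Q + D_sj) / T_j) * P_j
                for (P_j, T_j, D_sj) in local_hp_messages)
        J += ceil(Q / T_TDMA) * (P_TDMA - S_p)   # other processors' slots
        Q_next = rho * P_m + J
        if Q_next > threshold:
            return None
        if Q_next == Q:
            return Q
        Q = Q_next

def outgoing_buffer_packets(outgoing_message_sizes):
    """Section 3.3: worst-case outgoing buffer need, in packet slots."""
    return sum(outgoing_message_sizes)           # sum of P_m over om(p)

# Illustrative use: 3 processors with 4-packet slots each (P_TDMA = 12).
print(access_delay_tdma(P_m=2, rho=1, S_p=4, P_TDMA=12,
                        local_hp_messages=[(2, 60, 8)], threshold=120))  # -> 12
print(outgoing_buffer_packets([2, 2, 3]))                                # -> 7
```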

4. BOUNDING DELIVERY TIMES

The previous section has shown how DMS can be applied to the problem of bounding the access delay. This section addresses the problem of bounding the delivery time. Section 2 has described the operation of two alternative delivery approaches: the on demand protocol, and the periodic server protocol. This section gives analysis bounding the delivery times for messages delivered under the two protocols, and indicates how the schedulability of application tasks on the destination processor can be determined.

When incoming packets arrive in the destination processor network buffer they must be delivered to the destination tasks. The delivery mechanism assembles a message and places it in a destination task message buffer. Access to the buffer is controlled by a PCP semaphore (although other protocols, such as the Four Slot Mechanism proposed by Simpson [12], could be used). The scheduler may need to be informed of the arrival of the message (so that the destination task can be released), and access to the scheduling information is also controlled by a PCP semaphore. The delivery work performed is significant, and can rarely be ignored as system overheads. Experiments with the MARS system [16] found that in an 8 ms interval DMA cycle-stealing led to the loss of 1026 µs of processing time. The amount of time taken to deliver a message is therefore non-trivial. This section provides schedulability analysis which can be used to determine the worst-case delivery times for the two alternative delivery protocols described in Section 2.

The delivery protocols used have an effect on the satisfaction of two major constraints. Firstly, the computational interference experienced by other tasks running on the destination processor needs to be bounded, to allow the schedulability of these tasks to be determined; secondly, the worst-case network buffer space requirement for the destination processor needs to be found: the overflow of the network buffer is a serious event and we wish to discount any system in which this could occur. Analysis is presented which allows the worst-case incoming network buffer usage to be found, and the worst-case interference on other tasks due to incoming packets.

4.1. "On-Demand" Delivery

The operation of the On-Demand protocol has been described in Section 2: as soon as a packet fully arrives at the destination NIU (i.e. all of the incoming packet is resident in the network buffer) a "packet arrived" interrupt is raised on the host processor. The interrupt handler releases a sporadic task which copies the data from the network buffer, unpacks the message, and performs the necessary delivery operations. Figure 5 shows how the end-to-end delay is composed when using the on-demand delivery protocol. As can be seen, the delivery delay for a message, D_H, is equal to the response time of the sporadic handler, which is given by:

D_H = C_H + I_H + K_H    (2)

C_H: worst-case execution time of the handler, including context switch costs, and the time taken to copy a packet from the network buffer (recall that data is transferred from the NIU by copying from the network buffer rather than using DMA)
K_H: the worst-case blocking time the handler can experience when attempting to lock either the destination task message buffer semaphore, or the scheduling information semaphore (if the PCP is used this is equal to the longest critical section of all lower priority tasks accessing semaphores with ceilings greater than or equal to the priority of the handler)
I_H: the worst-case interference an instance of the handler can experience.

In order to find the worst-case response time D_H the interference needs to be bounded. The interference on the handler is the time during which other activities pre-empt the handler and consume processor time, and comes from three sources:

Preemption by higher priority application tasks (in practice the handler will be assigned a high priority and so there will be few tasks of higher priority)
Packet arrival interrupts (more packets may arrive while the handler is still active)
The remaining execution of previous invocations of the handler released to handle packets which arrived earlier

Interference from Higher Priority Tasks

The interference due to higher priority tasks can be found using standard DMS theory, and is given by:

Σ_{j ∈ hpt(H)} ⌈(D_H + B_j) / T_j⌉ C_j    (3)

D_H: the worst-case response time of the handler
hpt(H): the set of tasks of higher priority than the handler
B_j: the maximum amount of time task j can voluntarily suspend itself

[Figure 5: The End-to-End Delay Composition. A: message m queued; B: final packet of m removed from the queue and begins transmission; C: final packet of m fully arrives at the destination NIU; D: final packet of m fully processed (m delivered).]

The derivation of this equation follows from the simple DMS test given by Audsley et al [2], and is shown in Appendix B.

Interference from Packet Interrupts

The sporadic handler is considered to have been released as soon as the interrupt on the host is raised, and therefore the execution of the interrupt handler which actually releases the sporadic is considered to interfere with the sporadic handler (Figure 6).

[Figure 6: The Release of the Sporadic Handler: the "packet arrived" interrupt handler runs first and releases the sporadic packet handler; both fall within the response time D_H.]

In general, the worst-case response time of the sporadic handler can exceed ρ_min, the minimum time between packet arrivals, and hence the interference from interrupts raised by subsequent packet arrivals must be considered. There are two ways of bounding the number of subsequent packet arrival interrupts. Firstly, packets can be assumed to arrive at the fastest possible rate (i.e. with an inter-arrival time of ρ_min). Secondly, use can be made of message passing information known a priori: the incoming messages are known beforehand. Given the periodicity and size (in packets) of the messages, the total number of incoming packets, and therefore interrupts, can be bounded. Both of these approaches give an upper bound on packet arrival interrupts in an interval, and hence the lower upper bound can be used.

The interference from interrupts assuming that packets arrive at the maximum rate is given by:

⌈D_H / ρ_min⌉ C_I

C_I: worst-case execution time of the "packet arrived" interrupt handler

The number of packets (and hence interrupts) arriving in an interval of duration D_H can also be bounded using the a priori message passing information. By analogy between Deadline Monotonic task scheduling and Deadline Monotonic message scheduling we can obtain a bound on the number of packets arriving: likening worst-case computation time to the number of packets in a message, the periodicity of a task to the period of a message, and the arbitrary blocking time of a task to the external blocking time of a message, we can say that the number of packets arriving in an interval of duration D_H is:

Σ_{m ∈ im(p)} ⌈(D_H + Q_m + D_{s(m)}) / T_m⌉ P_m    (4)

s(m): the task sending message m
T_m: the periodicity of message m, equal to the period of the task sending m
D_{s(m)}: the worst-case response time of the task sending message m
Q_m: the worst-case time message m spends in the outgoing packet queue
im(p): the set of messages incoming to processor p

The term Q_m + D_{s(m)} is similar to the arbitrary blocking time of a task. However, it should be noted that this is a pessimistic bound. We have said earlier in Section 2 that the worst-case response time of the sending task plus the worst-case end-to-end delay should be less than the period of the sending task (i.e. D_{s(m)} + e_m ≤ T_m). Given this we can say that

⌈(D_H + Q_m + D_{s(m)}) / T_m⌉ = 1

and therefore the number of packets arriving in the interval is bounded by:

Σ_{m ∈ im(p)} P_m

Hence the computational interference on the sporadic handler from packet arrival interrupts is bounded by:

Σ_{m ∈ im(p)} P_m C_I    (5)

Both of these give upper bounds on the packet interrupt interference, and the lower of the two can be used. The packet interrupt interference is bounded by:

min( Σ_{m ∈ im(p)} P_m , ⌈D_H / ρ_min⌉ ) C_I    (6)
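Equation 6 is simply the smaller of the two bounds; a tiny worked example (all figures invented) makes the comparison explicit.

```python
from math import ceil

def packet_interrupt_interference(D_H, rho_min, C_I, incoming_packet_counts):
    """Equation 6: min(sum of P_m over im(p), ceil(D_H/rho_min)) * C_I."""
    rate_bound = ceil(D_H / rho_min)              # packets at the fastest rate
    message_bound = sum(incoming_packet_counts)   # a priori traffic knowledge
    return min(message_bound, rate_bound) * C_I

# e.g. D_H = 7, rho_min = 2 gives a rate bound of 4, but only 3 packets can
# really arrive, so the message-based bound wins.
print(packet_interrupt_interference(D_H=7, rho_min=2, C_I=1,
                                    incoming_packet_counts=[2, 1]))   # -> 3
```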

Interference from Previous Handler Invocations

The interference between successive invocations of the sporadic handler is now considered. As has been mentioned earlier, in general the worst-case response time of the sporadic handler could exceed ρ_min. This means that another invocation of the handler could be released before the current one has finished. We assume that a later invocation of the handler is given a lower priority relative to an earlier one, so that a handler is never preempted by subsequent invocations of itself. We also assume that the handler never voluntarily suspends (i.e. it never blocks arbitrarily). Consequently, once an invocation of the handler starts to execute all previous invocations must have terminated. Hence at run-time there is no need to implement concurrent handler threads. Another important result of this approach is the effect on the blocking time an invocation can experience when attempting to lock a semaphore under the PCP. Consider the following situation (illustrated in Figure 7). The current invocation of the handler (denoted τ_1) is running when a packet arrives, releasing τ_2. The interference on τ_2 is bounded by:

min( d_τ1 - ρ_min , C_H )

d_τ1: the time τ_1 actually takes to complete

[Figure 7: Interference on a Handler from the Previous Invocation. Situation 1: the first invocation blocks (for K_H) and interferes; the second invocation does not block. Situation 2: the first invocation does not block and interferes less, but the second invocation blocks for K_H.]

The response time d_τ1 might be large if τ_1 attempted to lock a semaphore under the PCP and was blocked. If d_τ1 ≥ ρ_min (i.e. τ_1 interfered with τ_2) then no lower priority task could execute between the termination of τ_1 and the beginning of the execution of τ_2. Hence, if τ_1 locks a semaphore, τ_2 will never be blocked when locking a semaphore. This result is due to the extension of the PCP property "a higher priority task can be blocked at most once": multiple overlapping invocations of the handler behave as a single task (in fact, the delivery protocol would probably be implemented as a single task). If τ_1 does not attempt to lock a semaphore it will finish in less time than D_H, and we adopt the notation D_H* to denote this non-blocking worst-case response time (where D_H* ≤ D_H), given by:

D_H* = C_H + I_H    (7)

We can say that the worst-case interference/blocking pattern occurs when τ_1 does not attempt to lock a semaphore but still interferes with τ_2 (i.e. D_H* > ρ_min), and thus when calculating the worst-case interference on an invocation of the handler, the interference due to an earlier non-blocking invocation should be used. Now consider Figure 8. Since D_H* spans fewer than four packet inter-arrival times, the interference on τ_4 from previous invocations comes only from the three previous invocations (τ_1, τ_2, and τ_3), and the maximum number of previous invocations of a handler which can interfere is given by:

[Figure 8: Interference on the Handler from Previous Invocations. Packets arrive ρ_min apart, releasing τ_1, τ_2 and τ_3; packet 4 arrives after three inter-arrival times, and the oldest invocation τ_1 can interfere with τ_4 only over the residual interval marked x.]

⌊D_H* / ρ_min⌋

The interference from all previous invocations except the oldest is C_H. The oldest (in Figure 8 the oldest is τ_1) interferes over the interval marked x. This interval is given by:

x = D_H* - ⌊D_H* / ρ_min⌋ ρ_min

The interference from τ_1 is bounded by C_H, and if x is less than C_H then the interference is bounded by x. Hence the interference on a later invocation of the handler from all earlier invocations can be bounded by:

( ⌊D_H* / ρ_min⌋ - 1 ) C_H + min( D_H* - ⌊D_H* / ρ_min⌋ ρ_min , C_H )    (8)

As can be seen, when D_H* < ρ_min the interference is deemed to be zero (even if D_H > ρ_min). Alternatively, the worst-case interference from previous invocations can be bounded by using the a priori message passing information. Equation 5 gave a bound on the number of packets arriving in an interval of duration D_H. Thus, for a later invocation of the handler, the number of earlier invocations of a handler currently outstanding is bounded by:

Σ_{m ∈ im(p)} P_m - 1

and hence the computation interference due to earlier invocations is bounded by:

( Σ_{m ∈ im(p)} P_m - 1 ) C_H    (9)

Since Equations 8 and 9 are both upper bounds on the computation interference from earlier invocations, the least upper bound can be taken:

min( ( ⌊D_H* / ρ_min⌋ - 1 ) C_H + min( D_H* - ⌊D_H* / ρ_min⌋ ρ_min , C_H ) , ( Σ_{m ∈ im(p)} P_m - 1 ) C_H )    (10)

The total interference on an invocation of the handler is found by summing Equations 3, 6, and 10:

I_H = Σ_{j ∈ hpt(H)} ⌈(D_H + B_j) / T_j⌉ C_j
    + min( Σ_{m ∈ im(p)} P_m , ⌈D_H / ρ_min⌉ ) C_I
    + min( ( ⌊D_H* / ρ_min⌋ - 1 ) C_H + min( D_H* - ⌊D_H* / ρ_min⌋ ρ_min , C_H ) , ( Σ_{m ∈ im(p)} P_m - 1 ) C_H )    (11)

Equations 2, 7, and 11 are mutually dependent, and a solution can be found by iteration. We define the function I_H(a, b) as equal to Equation 11 with D_H replaced by a, and D_H* replaced by b. We can first find D_H*:

D_H*^{n+1} = C_H + I_H(D_H*^n, D_H*^n)    (12)

A suitable value for D_H*^0 is zero. The iteration proceeds until either it converges (i.e. D_H*^{n+1} = D_H*^n) or D_H*^n exceeds some threshold (such as e_m - Q_m). If the threshold is exceeded then the response time of the handler is too large and the system will therefore be unschedulable. Having found D_H* we can now find D_H:

D_H^{n+1} = C_H + K_H + I_H(D_H^n, D_H*)    (13)

A suitable value for D_H^0 is zero. Again, the iteration proceeds until it converges or exceeds some threshold.
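Putting Equations 11, 12, and 13 together gives a two-stage fixed-point computation: first D_H* without blocking, then D_H with K_H added. The sketch below follows that structure; note that the Equation 8/10 term uses the floor-based reading reconstructed above, and the parameter layout and figures are invented.

```python
from math import ceil, floor

def handler_response_times(C_H, K_H, C_I, rho_min,
                           hp_tasks, incoming_packet_counts, threshold):
    """Two-stage iteration of Equations 12 and 13 for the on-demand handler.

    hp_tasks: (C_j, T_j, B_j) for tasks of higher priority than the handler.
    incoming_packet_counts: P_m for every message in im(p).
    Returns (D_H_star, D_H), or None if the threshold is exceeded.
    """
    sum_Pm = sum(incoming_packet_counts)

    def interference(a, b):
        """I_H(a, b): Equation 11 with D_H replaced by a and D_H* by b."""
        # Equation 3: higher priority tasks
        tasks = sum(ceil((a + B_j) / T_j) * C_j for (C_j, T_j, B_j) in hp_tasks)
        # Equation 6: packet arrival interrupts
        interrupts = min(sum_Pm, ceil(a / rho_min)) * C_I
        # Equation 10: earlier handler invocations (Equation 8 as reconstructed
        # above; taken as zero when b < rho_min)
        if b < rho_min:
            by_rate = 0
        else:
            k = floor(b / rho_min)
            by_rate = (k - 1) * C_H + min(b - k * rho_min, C_H)
        by_messages = (sum_Pm - 1) * C_H
        return tasks + interrupts + min(by_rate, by_messages)

    def iterate(extra, fixed_b):
        D = 0
        while True:
            b = D if fixed_b is None else fixed_b
            D_next = C_H + extra + interference(D, b)
            if D_next > threshold:
                return None
            if D_next == D:
                return D
            D = D_next

    D_H_star = iterate(extra=0, fixed_b=None)      # Equation 12 (no blocking)
    if D_H_star is None:
        return None
    D_H = iterate(extra=K_H, fixed_b=D_H_star)     # Equation 13
    return None if D_H is None else (D_H_star, D_H)

# Invented figures: one higher priority task, three incoming packets.
print(handler_response_times(C_H=2, K_H=1, C_I=1, rho_min=4,
                             hp_tasks=[(1, 20, 0)],
                             incoming_packet_counts=[2, 1],
                             threshold=100))       # -> (4, 6)
```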

Bounding Interference on Other Tasks

Having bounded the delivery time for a message, we now turn to the problem of bounding the interference on other tasks on the destination processor, and bounding the incoming network buffer memory requirements. A task on the destination processor can receive interference due to communications from two sources:

Packet arrival interrupts
Handler computation

A task of higher priority than the sporadic handler receives interference only from packet arrival interrupts, whereas a lower priority task receives interference from both. We assume the worst-case response time of a task i on the destination processor p is D_i. In an interval of duration D_i the maximum number of packet arrivals is given by:

min( ⌈D_i / ρ_min⌉ , Σ_{m ∈ im(p)} P_m )

im(p): the set of messages destined for processor p

Hence for a task i resident on processor p the maximum computational interference due to incoming packets is given by:

min( ⌈D_i / ρ_min⌉ , Σ_{m ∈ im(p)} P_m ) ( C_I + h(i,H) C_H )

h(i,H): a function returning 1 if the handler H is of higher priority than i, and zero otherwise
C_I: the worst-case computation time of the "packet arrived" interrupt handler
C_H: the worst-case computation time of the sporadic handler

Hence the full interference a task i on the destination processor can experience is given by:

I_i = Σ_{j ∈ hpt(i)} ⌈(D_i + B_j) / T_j⌉ C_j + min( ⌈D_i / ρ_min⌉ , Σ_{m ∈ im(p)} P_m ) ( C_I + h(i,H) C_H )

hpt(i): the set of tasks of higher priority than i and located on the same processor as i

Appendix B shows how the worst-case response time for a deadline monotonic task can be found once an equation for the interference is developed.

Bounding Network Buffer Space Used

We now turn to the problem of bounding the network buffer space requirement. Recall that a packet takes time ρ_min from starting to fill a packet-sized slot in the network buffer to the time it fully arrives. Also recall that the sporadic handler then takes time D_H to remove and process the packet. Hence the packet requires a network buffer packet slot for an interval of duration at most ρ_min + D_H. The maximum number of packets that can arrive in this interval, and hence the maximum network buffer space requirement (measured in packet slots), is therefore bounded by:

min( ⌈(D_H + ρ_min) / ρ_min⌉ , Σ_{m ∈ im(p)} P_m )    (14)
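The extra interference term on an application task and the Equation 14 buffer bound reuse the same min() construction. A small hedged sketch, with invented names and figures:

```python
from math import ceil

def incoming_packet_bound(interval, rho_min, incoming_packet_counts):
    """Maximum packets that can arrive in `interval` (also used by Equation 14)."""
    return min(ceil(interval / rho_min), sum(incoming_packet_counts))

def delivery_interference_on_task(D_i, rho_min, C_I, C_H,
                                  handler_higher_priority,
                                  incoming_packet_counts):
    """Extra interference on task i due to incoming packets: each arrival
    costs C_I, plus C_H when the handler has higher priority than i."""
    per_packet = C_I + (C_H if handler_higher_priority else 0)
    return incoming_packet_bound(D_i, rho_min, incoming_packet_counts) * per_packet

def incoming_buffer_packets(D_H, rho_min, incoming_packet_counts):
    """Equation 14: slots needed while a packet waits at most rho_min + D_H."""
    return incoming_packet_bound(D_H + rho_min, rho_min, incoming_packet_counts)

# Illustrative figures only.
print(delivery_interference_on_task(D_i=30, rho_min=4, C_I=1, C_H=2,
                                    handler_higher_priority=True,
                                    incoming_packet_counts=[2, 1]))   # -> 9
print(incoming_buffer_packets(D_H=6, rho_min=4,
                              incoming_packet_counts=[2, 1]))         # -> 3
```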

4.2. Periodic Server Protocol

The periodic server protocol is a simple one: a periodic task examines the network buffer and processes all packets remaining in the buffer. The task finishes executing when all packets have been removed. Given the period of the server, T_S, and the worst-case response time of the server, D_S, the worst-case delivery delay for a packet once arrived in the network buffer is given by:

T_S + D_S    (15)

The worst-case response time of the server can be computed (in the same way as for other Deadline Monotonic tasks) by an iterative equation:

D_S^{n+1} = C_S + K_S + Σ_{j ∈ hpt(S)} ⌈(D_S^n + B_j) / T_j⌉ C_j    (16)

hpt(S): the set of tasks of higher priority than S and located on the same processor as S
K_S: the worst-case blocking time task S can experience when attempting to lock message buffer and scheduling semaphores
C_S: the worst-case execution time task S can require

The computation time required by an invocation of the server depends upon the number of packets that are processed. The worst-case computation time C_S can therefore be bounded by finding the worst-case number of packets that could be in the network buffer when task S is released, plus the number that could continue to arrive during the execution of S. In the worst case S could be released, find no packets in the buffer, and terminate immediately; packets could then arrive until the task is re-released (T_S later). Packets could then continue to arrive during the execution of the server. Hence the worst-case number of packets which must be dealt with by the server in a single invocation is equal to the maximum number of packets that can arrive in an interval of duration T_S + D_S. The analysis for the maximum number of packets arriving in an interval has been derived for the "on demand" protocol and given earlier (Equation 14); the equation below therefore gives the maximum number of packets the periodic server must deal with in a single invocation:

mp_S = min( ⌈(T_S + D_S) / ρ_min⌉ , Σ_{m ∈ im(p)} P_m )    (17)

The worst-case execution time of the server, C_S, is a function of the worst-case number of packets that the server needs to process. We assume that the server requires computation time C_SP per packet processed, and an additional worst-case time C_SF regardless of the number of packets processed (to include the cost of context switches, for example). The worst-case time C_S is given by:

C_S = C_SF + mp_S C_SP    (18)

The above equations are mutually dependent: the worst-case computation time cannot be found until the worst-case response time is computed, and vice versa. A solution can be found by iteration:

D_S^{n+1} = C_SF + K_S + Σ_{j ∈ hpt(S)} ⌈(D_S^n + B_j) / T_j⌉ C_j + C_SP min( ⌈(T_S + D_S^n) / ρ_min⌉ , Σ_{m ∈ im(p)} P_m )    (19)

The above equation can be shown to be monotonically increasing and hence D_S^{n+1} ≥ D_S^n. Therefore, given D_S^0 = 0, the equation will either converge to a solution (i.e. D_S^{n+1} = D_S^n), or will exceed some threshold (such as T_S).

The end-to-end delay for a message m delivered using the periodic server protocol is thus:

Q_m + ρ + T_S + D_S    (20)

Q_m: worst-case access delay for message m, using either the global or local packet priority protocol
T_S: the period of the server, chosen by an appropriate configuration technique
D_S: the worst-case response time of the server, calculated according to Equation 19.
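Equation 19 folds Equations 16 to 18 into a single recurrence for the server. The sketch below iterates it and then applies Equation 20; again the parameter layout and numbers are illustrative only.

```python
from math import ceil

def server_response_time(C_SF, C_SP, K_S, T_S, rho_min,
                         hp_tasks, incoming_packet_counts, threshold):
    """Equation 19: fixed-point iteration for the periodic server's D_S.

    hp_tasks: (C_j, T_j, B_j) for higher priority tasks on the same processor.
    """
    sum_Pm = sum(incoming_packet_counts)
    D = 0
    while True:
        tasks = sum(ceil((D + B_j) / T_j) * C_j for (C_j, T_j, B_j) in hp_tasks)
        packets = min(ceil((T_S + D) / rho_min), sum_Pm)    # Equation 17
        D_next = C_SF + K_S + tasks + C_SP * packets        # Equations 18/19
        if D_next > threshold:
            return None                                     # e.g. exceeds T_S
        if D_next == D:
            return D
        D = D_next

def end_to_end_delay(Q_m, rho, T_S, D_S):
    """Equation 20: access delay + final packet transmission + delivery."""
    return Q_m + rho + T_S + D_S

# Illustrative figures: server period 20, three incoming packets.
D_S = server_response_time(C_SF=1, C_SP=1, K_S=1, T_S=20, rho_min=4,
                           hp_tasks=[(2, 10, 0)],
                           incoming_packet_counts=[2, 1], threshold=20)
print(D_S, end_to_end_delay(Q_m=12, rho=4, T_S=20, D_S=D_S))   # -> 7 43
```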

Bounding the Interference on Other Tasks

We now address the problem of bounding the interference on (and hence determining the schedulability of) other tasks resident on the destination processor. This is straightforward for the "periodic server" protocol, since the server is an ordinary deadline monotonic task, accessing semaphores according to the PCP, and never blocking arbitrarily (i.e. B_S = 0). Hence the interference from the server on a lower priority task i is as for a normal deadline monotonic task:

⌈D_i / T_S⌉ C_S

Bounding the Network Buffer Space Used

The worst-case network buffer requirements are easily found. As mentioned above, an incoming packet still requires a packet-sized slot in the network buffer as it arrives, and hence the interval over which buffer space is required is T_S + D_S + ρ_min. Hence the worst-case buffer space requirement, in packets, is:

min( ⌈(T_S + D_S + ρ_min) / ρ_min⌉ , ⌈(T_S + D_S + ρ_min) / T_S⌉ Σ_{m ∈ im(p)} P_m )    (21)

5. DISCUSSION AND SUMMARY

The previous two sections have shown how the end-to-end communications delay for a given message in a known system can be determined. This section discusses some of the issues surrounding each of the access and delivery protocols.

5.1. Optimising TDMA with Periodic Server

The end-to-end delay is composed from the access delay Q_m, the propagation delay ρ, and the delivery delay (D_H for the "on demand" delivery protocol, and T_S + D_S for the "periodic server" delivery protocol). However, there is an optimisation in the worst-case end-to-end delay for the special case of the TDMA access protocol if it is used in conjunction with the periodic server delivery protocol. Firstly, the period of the delivery server is chosen to be equal to T_TDMA, and thus the server is always released at the same point in the TDMA cycle. This fixed-offset information can be used to determine a lower delivery time than T_S + D_S. Figure 9 illustrates this. The server at processor D is synchronised so that it is released at the end of the transmission slot of processor S. The amount of time messages sent from S to D spend waiting in the network buffer in D before the periodic server on D is released is bounded by:

ρ ( S_S - 1 )

S_S: the TDMA slot size for processor S

And hence the end-to-end delivery delay for a message m is bounded by:

Q_m + ρ S_S + D_S(D)

D_S(D): the worst-case response time of the periodic server located on processor D

[Figure 9: Synchronising the periodic server with the TDMA cycle: within each TDMA cycle the server at D is released at the end of the slot for S.]

Since ρ S_S < T_S(D), the upper bound on the end-to-end delivery time given above is better than the one suggested in Section 4. For messages sent from other processors to processor D the upper bound on the delivery delay will be larger (since the destination server will have a larger offset from the end of the transmission slot of the sending processor), but will always be less than T_TDMA + D_S(D).

In general, the slot size for a processor, S_p, the slot sequence (i.e. the order in which processors transmit in their slots), and the periodic server offset (i.e. the release time of the server relative to the end of the slot of another processor) are all free parameters of the TDMA plus periodic server communications configuration. These could be chosen to reflect the message passing pattern. For example, the task allocation could be such that the end-to-end deadlines of messages sent from tasks on processor A to tasks on processor B are short in comparison to other message deadlines. It would therefore be sensible to set the release time of the periodic server on processor B to correspond to the end of the slot on processor A.

The Configuration Problem

The above is an example of the configuration problem. There are other free parameters: the relative priority of the sporadic handler or the periodic server on each processor must be chosen. A high priority in relation to other tasks on a given processor reduces the response time of the handler/server and thus reduces the worst-case end-to-end delay for a message. However, this reduction in end-to-end delay is at the expense of the schedulability of other tasks on the processor (for example, a high priority handler/server may cause interference with high priority tasks and cause them to miss deadlines). The packet size is another parameter: if the packet size is too small then ρ_min becomes small, and the worst-case number of interrupts arriving becomes large (leading to a loss of schedulability). However, if the packet size is large then network bandwidth is wasted, since packets are mostly empty, and the access delays become too large. As can be seen, the tradeoffs when setting the communications parameters are complex: the full effect of some of the parameters can only be determined in the wider global setting, and hence the communications configuration problem is part of the wider global configuration problem. Current work in the Real Time Systems Research Group at York is applying the Simulated Annealing algorithm [5, 9] to solving these global configuration problems [15].

Priority Inversion and Soft Real Time Messages

As mentioned earlier in Section 3, the TDMA access protocol leads to priority inversion: a high priority message can be delayed by low priority messages. Both of the delivery protocols also suffer from priority inversion, since packets are delivered in FIFO order. Furthermore, with the "on demand" protocol, the arrival of a low priority packet causes a high priority packet interrupt, and thus interferes with high priority tasks: tasks resident on the destination processor therefore also suffer priority inversion*. A side effect of the priority inversion problem with the delivery protocol is the problem of soft real time messages. With most access protocols (such as the 802.5 protocol) soft real time messages are queued at low priority and hence are only sent when there are no hard real time messages queued [14]. However, this approach is insufficient when considering the effects of the delivery protocol. If too many soft real time packets arrive at a processor (i.e. in excess of the worst-case incoming load determined using the a priori message passing information) they could cause a high priority task to miss a deadline. The problem stems from the assumption that as long as soft real time traffic does not interfere with hard real time traffic it can be sent to any processor. This is clearly an unrealistic assumption when the full effects of the arrival of a message are considered. The problem can only be adequately solved if the soft real time message passing pattern is also quantified a priori.

6. CONCLUSIONS

The end-to-end deadlines of a message are the crucial deadlines which must be met. The delivery time for a message is a significant component of the end-to-end time. Message delivery also has a significant effect on a destination processor. Using the analysis presented in this paper, end-to-end message deadlines can be guaranteed (with a variety of media access and message delivery protocols). The schedulability of a destination processor and the worst-case network buffer space requirements can also be determined. The delivery protocols proposed in this paper suffer from priority inversion, which can affect the schedulability of both messages and tasks on the destination processor. A delivery protocol that does not suffer from this is more complex to implement, and the schedulability analysis more difficult. Because of this, soft real time traffic cannot be adequately handled with the delivery protocols proposed unless soft real time message passing information is known a priori.

APPENDIX A: GLOSSARY

B_j: worst-case arbitrary blocking time of task j
C_i: worst-case execution time of task i
d_i: deadline for task i
D_i: worst-case response time for task i
e_m: end-to-end message deadline for message m
E_m: worst-case end-to-end response time for message m
I_i: computational interference on task i
J_m: packet interference on message m
K_i: worst-case Priority Ceiling Protocol blocking time
N: number of processors in the system
ρ_max: maximum packet transmission time

*It is easy to envisage a priority-based "on demand" delivery protocol which invokes a new handler thread for each packet arriving, with the thread inheriting the priority of the packet. However, this approach requires run-time support for concurrent threads. The schedulability analysis for such a protocol becomes complex.

ρ_min: minimum packet transmission time
P_m: worst-case number of packets message m requires
Q_m: access delay for message m
S_p: TDMA slot size (in packets) of processor p
T_TDMA: TDMA cycle time
T_x: periodicity of task or message x
C_SF: base worst-case execution time of the periodic message delivery server
C_SP: additional per-packet worst-case execution time of the periodic message delivery server
mp_S: worst-case number of packets the periodic message delivery server handles
C_I: worst-case execution time of the "packet arrived" interrupt handler
D_H*: worst-case response time of the sporadic packet handler if the handler does not lock a semaphore
hpm(m): the set of messages in the system of higher priority than message m
lhpm(m): the set of messages of higher priority than m and sent from the same processor as m
im(p): the set of messages coming into processor p
om(p): the set of messages going out of processor p
s(m): the task which sends message m

APPENDIX B: SCHEDULING THEORY

This appendix contains detail of the background task scheduling theory used in the paper, addresses the problem of finding the worst-case response time of a task, and discusses the problem of arbitrary blocking (and extends the scheduling theory to cater for it).

Worst-case Response Time

The Deadline Monotonic Scheduling problem is usually expressed in the form "does each task meet its deadline?" Often, though, a useful alternative question is "in the worst case how long will each task take?" We therefore make the distinction between deadline and response time: the deadline of a task i is the timing requirement, denoted d_i; the response time is the timing performance, denoted D_i. A sufficient but not necessary test for deadline monotonic tasks is given by Audsley et al* [3]:

"A set of tasks τ_1, τ_2, ..., τ_N, ordered by deadline such that d_i ≤ d_{i+1}, is schedulable if:

C_i + I_i ≤ d_i    (22)

where C_i is the worst-case computation time of task τ_i, and I_i is the worst-case interference a task τ_i can experience, given by:

I_i = Σ_{j=1}^{i-1} ⌈d_i / T_j⌉ C_j    (23)

where T_j is the period of task τ_j."

The test is not necessary (i.e. failing the test does not imply the task set is unschedulable) because the pre-emption window in Equation 23 is assumed to be d_i instead of the actual computation window, i.e. D_i. The test becomes both sufficient and necessary if d_i is replaced with D_i in Equation 23:

*Note that Audsley et al also give sufficient and necessary tests that are equivalent to Equation 26.

I_i = Σ_{j=1}^{i-1} ⌈D_i / T_j⌉ C_j    (24)

The worst-case response time can then be found by observing:

D_i = C_i + I_i    (25)

Equations 24 and 25 are mutually dependent; a solution can be found by iteration:

D_i^{n+1} = C_i + Σ_{j=1}^{i-1} ⌈D_i^n / T_j⌉ C_j    (26)

The iteration starts with D_i^0 = 0. The iteration proceeds until D_i^n exceeds d_i (the task is therefore unschedulable), or until D_i^{n+1} = D_i^n. In the latter case the iteration has converged to a solution. The iteration is guaranteed to either halt or converge if it can be shown that D_i^{n+1} ≥ D_i^n, which we will now prove.

Theorem

Given:

D_i^{n+1} = C_i + Σ_{j=1}^{i-1} ⌈D_i^n / T_j⌉ C_j

where C_i, C_j, and T_j are integer constants ≥ 0, and D_i^0 = 0, then D_i^{n+1} ≥ D_i^n.

Lemma 1

D_i^n ≥ D_i^{n-1} implies D_i^{n+1} ≥ D_i^n.

Proof

If D_i^n ≥ D_i^{n-1} then for each j:

D_i^n / T_j ≥ D_i^{n-1} / T_j

and therefore:

⌈D_i^n / T_j⌉ ≥ ⌈D_i^{n-1} / T_j⌉

Hence:

C_i + Σ_{j=1}^{i-1} ⌈D_i^n / T_j⌉ C_j ≥ C_i + Σ_{j=1}^{i-1} ⌈D_i^{n-1} / T_j⌉ C_j

and so D_i^{n+1} ≥ D_i^n.

Proof of Theorem

We have D_i^0 = 0 and D_i^1 = C_i. Therefore D_i^1 ≥ D_i^0, and hence by induction (applying Lemma 1) D_i^{n+1} ≥ D_i^n for all n.

The Priority Ceiling Protocol [11] can be used with Deadline Monotonic Scheduling. Equation 26 is modified to include the time a higher priority task could be blocked by a lower priority task:

D_i^{n+1} = C_i + K_i + Σ_{j=1}^{i-1} ⌈D_i^n / T_j⌉ C_j    (27)

where K_i is the worst-case time a task of lower priority than task i can hold a semaphore with ceiling greater than or equal to the priority of task i.

The "Arbitrary Blocking" Problem

The problem of arbitrary blocking (sometimes called external blocking) is addressed here. Arbitrary blocking is the blocking of a task which occurs when a task voluntarily suspends itself for a period of time (perhaps waiting for a certain external event to occur). When a task suspends itself in this way a lower priority task may start or resume execution while the higher priority task is blocked. Arbitrary blocking can give rise to the back to back hit problem, indicated by Rajkumar et al [10], and discussed in Section 3.1. Figure 10 further illustrates the problem. The simple DMS interference test given in Equation 24 would indicate that task L receives an interference of C_H. However, as can be seen, the worst-case interference is 2C_H. This additional interference is due to the invasive effects of arbitrary blocking: a previous invocation of H can be delayed so that two invocations of H can occur closer together than would ordinarily occur.

To calculate the new worst-case interference consider Figure 10. Task H can suspend itself, per invocation, for a time not exceeding B_H. From Figure 10 it can be seen that the interference on task L is C_H if D_L ≤ T_H - B_H. The interference is 2C_H if T_H - B_H < D_L ≤ 2T_H - B_H, and 3C_H if 2T_H - B_H < D_L ≤ 3T_H - B_H, and so on. In general, the interference on task L from task H is n C_H when:

(n - 1) T_H - B_H < D_L ≤ n T_H - B_H
(n - 1) T_H < D_L + B_H ≤ n T_H
n - 1 < (D_L + B_H) / T_H ≤ n
n = ⌈(D_L + B_H) / T_H⌉

Hence the interference from higher priority tasks on a task i is given by:

I_i = Σ_{j=1}^{i-1} ⌈(D_i + B_j) / T_j⌉ C_j    (28)
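Equations 26 to 28 combine into one recurrence: response time equals computation plus PCP blocking plus interference, with the B_j term accounting for back to back hits. A minimal sketch under an assumed task tuple layout (the figures are invented):

```python
from math import ceil

def worst_case_response(C_i, K_i, hp_tasks, deadline):
    """Iterate D_i^{n+1} = C_i + K_i + sum_j ceil((D_i^n + B_j)/T_j) * C_j
    (Equations 26 to 28).  hp_tasks: (C_j, T_j, B_j) for higher priority tasks.
    Returns the response time, or None if it exceeds the deadline d_i."""
    D = 0
    while True:
        D_next = C_i + K_i + sum(ceil((D + B_j) / T_j) * C_j
                                 for (C_j, T_j, B_j) in hp_tasks)
        if D_next > deadline:
            return None          # unschedulable
        if D_next == D:
            return D             # converged (guaranteed by the theorem above)
        D = D_next

# Task H (C=2, T=10) can suspend itself for up to B=3; the back to back hit
# makes a second invocation of H land inside the window, giving a response
# of 11 rather than the 9 obtained with B=0.
print(worst_case_response(C_i=7, K_i=0, hp_tasks=[(2, 10, 3)], deadline=20))  # -> 11
```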