ANALYSIS OF HARD REAL-TIME COMMUNICATIONS


Ken Tindell, Real-Time Systems Research Group, Department of Computer Science, University of York, England

ABSTRACT

In a distributed hard real-time system, communications between tasks on different processors must occur in bounded time. The inevitable communication delay is composed of both the delay in transmitting a message on the communications media, and the delay in delivering the data to the destination task. This paper derives schedulability analysis bounding the media access delay and the delivery delay. Two access protocols are considered: a simple token passing approach, and a hypothetical real-time broadcast bus. A simple delivery approach is considered, where the arrival of a message generates an interrupt (the so-called "on demand" approach).

1. INTRODUCTION

A hard real-time system is often composed from a number of periodic and sporadic tasks which communicate their results by passing messages; in a distributed system these messages are sent between processors across a communications device. In order to guarantee that the timing requirements of all tasks are met, the communications delay between a sending task queueing a message and a receiving task being able to access that message must be bounded. This total delay is termed the end-to-end communications delay. For the purposes of this report we define the end-to-end communications delay to be made up of four major components:

1. the generation delay: the time taken for the application task to generate and queue the message
2. the queueing delay: the time taken by the message to gain access to the communications device after being queued
3. the transmission delay: the time taken by the message to be transmitted on the communications device
4. the delivery delay: the time taken to process the message at the destination processor before finally delivering it to the destination task

The generation delay is the worst-case time taken between the arrival of the sender task and the queueing of the message. This represents some element of application processing to generate the contents of the message, and the time taken to queue the message. The queueing delay is the time the message spends waiting to be removed from the queue by the communications device. With a point-to-point communication link, the message must contend with other messages sent from the same processor; with a shared communications link, the message must also contend with messages sent from other processors. The transmission delay is the time taken for the message to be sent once it has been removed from the queue. The delivery delay is the amount of time taken to process the incoming data and deliver it to destination tasks. This work includes such functions as decoding packet headers, re-assembling multi-packet messages, copying message data between buffers, and notifying the dispatcher of the arrival of a message. This latter function is important, since the destination task may be blocked awaiting the arrival of the message. In practice the delivery delay can form a significant part of the end-to-end communications delay.

Most research into hard real-time communications has concentrated on protocols bounding the access delay to shared communications media. For example, the MARS project [7] uses a simple TDMA protocol to resolve communications media contention between processors. A simple priority queue can be used to resolve contention between local messages. Strosnider et al [22], and later Pleinevaux [19], apply the rate monotonic analysis to periodic and aperiodic messages sent across a token ring. Agrawal et al [1] also apply the rate monotonic scheduling approach to the FDDI access protocol. The token ring protocol is an example of a global priority scheme: packets sent on the bus are assigned a priority, and the highest priority packet in any node is transmitted. This priority arbitration is carried out by a reservation protocol, whereby each node bids for the right to transmit the next packet; the node with the highest priority packet wins the bidding. There are some disadvantages to the protocol: the standard defines only 8 distinct priority levels, and Strosnider reports that in common implementations this is reduced to 4. An insufficient number of priority levels can lead to large reductions in schedulability [31].
Hitherto, existing fixed priority schedulability analysis (such as the Rate Monotonic family of analysis [17, 15]) has suffered from restrictions on deadlines: the rate monotonic approach requires task deadlines to be equal to their periods, and thus applying this analysis to message schedulability gives the restriction that a message must arrive at the destination processor before the message from the next period can be queued. Further, the rate monotonic approach does not lend itself easily to supporting sporadic activities, requiring periodic servers to poll for these activities. For the scheduling of messages this can be very restrictive, since the complex behaviour of the priority exchange server algorithm (for example) can be very difficult to implement in a distributed system. Deadline monotonic [16] analysis [3, 14] would seem to offer some solutions: for example, deadlines are permitted to be less than or equal to periods, and the approach can easily accommodate sporadic activities (so long as the inter-arrival times can be bounded in some manner [3]). However, in a real system it is often required that the deadline on the arrival time of a message be longer than the period of the message: certain control and multi-media applications can tolerate long lags (i.e. end-to-end communications delays) so long as the rate at which data arrives is maintained. Hence an approach permitting arbitrary deadlines [14] is required. Such analysis has been provided, whereby the worst-case response time of tasks with arbitrary deadlines can be determined [30, 26]; we adopt this analysis and apply it to the analogous problem of scheduling messages on a shared broadcast bus.

In this report we examine two different media access protocols: a simple token passing protocol, and a hypothetical real-time bus protocol. Agrawal et al adopt a similar token passing communications model based upon FDDI [1]. As has been mentioned earlier, the end-to-end communications delay for a message cannot be bounded simply by knowing the time taken to access and transmit data upon a communications link: the time taken to deliver the message at the destination processor must also be taken into account, as indeed must the time taken to assemble and queue the message at the source processor. Thus we must also describe a model whereby message data is assembled and placed in a network adapter ready for transmission, and where the data can be re-assembled into a message at the destination processor. We now describe such a model.

2. MODEL FOR COMPUTATION

The computational model is substantially the same as that assumed by previous work [26, 30]; a task is a potentially infinite sequence of requests for processing time and message transmission. Each request is separated by a minimum time termed the period, denoted T. A task is assigned a unique fixed priority and dispatched on the basis of this priority (although a higher priority task may be delayed by a lower one for some bounded time as part of the operation of the priority ceiling protocol [21]). A task may arrive at some time t, and be ready to be processed, requiring a bounded amount of computation, denoted C. The processor recognises this arrival within a bounded time and releases the task (i.e. places it in a priority-ordered queue of runnable tasks). This bound between arrival and release time is termed the release jitter, and denoted J. The period of a task is measured from the arrival time of the task, and thus release jitter can give rise to a shorter time between subsequent releases, termed back-to-back hits [20]. The release jitter of a task can be thought of as the difference between the earliest and latest release of a task for a given arrival.

We assume that an allocation of tasks to processors is static (1) and found according to some means [29]. Similarly, priorities are static and chosen by some configuration approach (Audsley gives an algorithm to optimally set priorities for tasks on a single processor; it is likely that setting priorities across a distributed system will be done by an optimisation algorithm [29]). A task can potentially queue a message at any point whilst executing. Thus a message can be queued at the earliest when the task commences execution, or at the latest at the end, immediately prior to completion. The total number of messages queued by a task is bounded, and the destination tasks of these messages are known. Further, each message is assumed to be of bounded size and assigned a unique fixed priority. Messages may be broken down into a number of packets, of fixed size, and placed in priority order in a packet queue shared between the host processor and the communications adapter responsible for transmitting the packets. Packets of the same priority are queued in FIFO order with respect to each other (2). Access to the queue is controlled by a protected object (3).

(1) Dhall and Liu show how a static allocation of tasks to processors leads to better schedulability than dynamically allocating the n highest priority tasks to a pool of n processors [9].
(2) Recall that messages and tasks are assigned unique priorities; packets are queued in FIFO order so that previous queuings of the same message are sent first.
(3) A protected object is a Hoare-style monitor guarded by a priority ceiling [20] semaphore.
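The ordering behaviour of this outgoing packet queue (strict priority order between messages, FIFO among packets of equal priority, as footnote 2 requires) can be sketched in Python. This is purely illustrative: in the model the queue is a shared-memory structure manipulated by atomic pointer operations under a protected object, not a heap, and the names below are invented for the sketch.

```python
import heapq
import itertools

class PacketQueue:
    """Illustrative model of the shared outgoing packet queue:
    highest priority first, FIFO among packets of equal priority."""
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker: preserves FIFO order per priority

    def enqueue(self, priority, packet):
        # Lower numeric value = higher priority (a convention of this sketch).
        heapq.heappush(self._heap, (priority, next(self._seq), packet))

    def dequeue(self):
        # The communications adapter removes the packet at the head of the queue.
        if not self._heap:
            return None
        return heapq.heappop(self._heap)[2]

q = PacketQueue()
q.enqueue(2, "m2/pkt1")
q.enqueue(1, "m1/pkt1")
q.enqueue(1, "m1/pkt2")
print([q.dequeue() for _ in range(3)])  # → ['m1/pkt1', 'm1/pkt2', 'm2/pkt1']
```

Note how the sequence counter guarantees that two packets of the same message (which share the message's unique priority) leave the queue in the order they were queued.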

The following diagram shows the assumed hardware arrangement for the transmission and reception of packets:

Figure 1: assumed hardware arrangement (a host processor running application tasks, a priority-ordered outgoing packet queue held in shared memory, and a communications adapter attached to the shared broadcast bus)

Shared memory can be accessed by normal read and write instructions in application software, except that extra wait states are incurred. The packet queue is stored in shared memory. There is a single reader of packets (the communications adapter). Writes to the queue are atomic: a packet can be written to a spare slot in the buffer; when complete it can be atomically inserted into the queue by simple pointer manipulation. We assume that there is always sufficient space to store these packets; later we will show how this assumption can be removed. A protected object (a Hoare-style monitor guarded by a priority ceiling semaphore) ensures concurrency control between application tasks queueing packets. The communications adapter removes the packet at the head of the queue (by simple pointer manipulation) and transmits it on to the bus. In the worst case there is time ρ between subsequent packet transmissions. For each packet removal the communications adapter may be blocked in the worst case for a few processor cycles while the host processor completes pointer manipulation (and vice versa); this time is included in ρ.

Incoming packets are stored in a FIFO queue. An interrupt is raised for each arrival of a packet, requiring computation time C_packet to process (the costs of copying the packet from the shared network buffer, stripping headers, etc.). The interrupt handler on the host processor removes the packet from the buffer; when all the packets of a message have arrived the interrupt handler releases the task blocked awaiting the message. Note that this approach avoids the need for direct memory access (DMA) to transfer the data (4). When all C_m packets of a message m have arrived, the destination task is released to process them. This processing may involve simply assembling and copying the message to a shared data area, or may involve application-level processing of the message (for example, discarding of duplicated messages as part of some fault tolerance scheme, or perhaps fusion of the data between messages [9]).

The primary goal of the analysis in this report is to bound the time between the arrival of a sender task and the time at which the destination task is released. The first step towards this is bounding the time between the arrival of the sender task and the time at which the last packet of the message reaches the destination processor, which we term the worst-case response time of the message. Figure 2 illustrates these times.

Figure 2: worst-case response time r_m of a message m. The events marked along the time line are: the arrival of the sender task; the latest queueing time of message m; the last packet of message m removed from the packet queue; transmission of the last packet begins (ρ is the worst-case time taken to transmit a packet, and τ_prop is the electrical propagation delay); the last packet of the message reaches the destination communications adapter and a "packet arrived" interrupt is raised (τ_notif is the worst-case delay between the packet arriving at the communications adapter and the adapter notifying the processor of the packet arrival); and finally the notional release of the destination task.

As a secondary goal we wish to bound the processing overheads due to sending and receiving messages, including the costs of the packet interrupts described earlier, permitting the worst-case response time of the destination task to be determined.
This then permits us to compute the worst-case response time of the destination task, and hence bound the end-to-end delay between the computation of a result on one processor and the processing of that result on another processor.

(4) DMA access is notoriously unpredictable, potentially deferring interrupts for long periods of time.

3. TWO REAL-TIME COMMUNICATIONS MODELS

In this section we describe two communications models for real-time message passing on a shared packet broadcast bus. We present a simple token passing protocol and a hypothetical priority-based communications protocol. The token passing protocol is a commonly proposed one for real-time communications [1] due to its simplicity of implementation. The priority-based protocol has not, as yet, been implemented in available communications hardware. However, the token ring approximates such a protocol.

THE TOKEN-PASSING PROTOCOL

A number of processors are connected to a shared broadcast bus, each via a communications adapter; access to the bus is arbitrated by using a "right to transmit" token: each adapter p is permitted to transmit at most S_p packets upon receipt of the token. This is termed the token holding slot. If there are fewer than S_p packets waiting to be sent when the token arrives, these are transmitted and the token is then passed on to the next processor. The packets must be transmitted continuously (i.e. the communications adapter is not permitted to become idle between packet transmissions). Thus the total time a processor p holds the token is bounded, and therefore the worst-case token rotation time can be found. The communications adapter shares a priority-ordered packet queue with the host processor: packets are placed in the queue by the host processor, and removed by the network adapter. Immediately after the token arrives, the communications adapter initiates the removal of the first packet (if any) from the packet queue on processor p, and then proceeds to transmit it on the bus. As soon as this packet has been sent, the next packet is removed from the queue and transmitted. This continues until either there are no more packets to send, or until S_p packets have been sent.
Once a packet reaches the destination network adapter it is placed in a buffer, and a "packet arrived" interrupt is raised on the host processor. The packet can then be removed, processed, and delivered.

THE HYPOTHETICAL PRIORITY BUS

The hardware associated with the hypothetical real-time bus protocol is substantially the same as for the token protocol: a number of processors are connected to a shared broadcast bus, again via a communications adapter. Again, packets are placed in a priority-ordered queue shared between the network adapter and host processor. However, transmission on the network is arbitrated by the priority of the packet. This can be implemented straightforwardly as follows: when silence is detected on the bus, each network adapter with packets to transmit attempts to start transmitting its highest priority packet. Using the approach described by Zhao [33] and Znati and Ni [34], collisions are resolved in priority order such that only the highest priority packet can commence transmission. Once the packet has started transmission it is removed from the priority-ordered packet queue, and continues transmission until finished. The priority resolution protocol then continues as described above. This mechanism ensures that packets are transmitted in priority order. Note that the packet transmission time using this approach will typically be larger than that of the token passing approach.

4. SIMPLE ANALYSIS OF THE PROTOCOLS

Much work has been done developing analysis for task sets scheduled by fixed priorities [17, 13, 15, 3, 4, 30]. It is clear that the results of this work could be applied to more general real-time scheduling problems than just processor scheduling: some work has been done applying the results of early task schedulability analysis [3] to the scheduling of messages on a priority-based communications bus [25]. However, until recently, much of the analysis for task scheduling was too restrictive to be applied to the scheduling of messages: the transmission and reception of messages has complex timing characteristics, such as large variability in arrival times. These characteristics meant that previous analysis could not be successfully applied to messages. For example, until recent analysis [4], the concept of release jitter was not fully explored, and hence messages queued at variable but bounded times after periodic events could not be accurately modelled.

The previous analysis for arbitrary deadlines [26] defines the worst-case response time of a task i as the largest time between the arrival of an invocation of the task and the latest completion time of the task. The worst-case response time thus includes any delay between arrival and release. Tindell et al [30] give the following equation for the worst-case response time of a task i:

    r_i = max over q = 0, 1, 2, 3, ... of ( J_i + w_i,q - q T_i )        (1)

where T_i is the worst-case re-arrival time of task i, and w_i,q is the width of the level-i busy period [14] starting q T_i before the current release of task i. A level-i busy period is the time when a defined invocation of a task i is continuously in the notional run queue. This period extends until all computation of priority greater than or equal to that of task i completes. It is a key concept in fixed priority scheduling theory, and we will extend this concept to message scheduling. We therefore define a processing busy period as one concerned with task scheduling, and a communications busy period as one concerned with message scheduling. The processing busy period, of width w_i,q, defined by equation 1 is given by:

    w_i,q = (q + 1) C_i + B_i + Σ_{j ∈ hp(i)} I_j        (2)

The set hp(i) is the set of all tasks of higher priority than task i, and I_j is the worst-case time the processor spends processing task j during the busy period of width w_i,q, given by:

    I_j = ⌈ (w_i,q + J_j) / T_j ⌉ C_j        (3)

where C_j is the worst-case computation time of task j, and T_j is the period of task j (i.e. the shortest time between subsequent arrivals of the task). The term B_i is the blocking factor for task i, and represents the worst-case time that a lower priority task can delay the execution of task i. The value of this can be calculated from the operation of the priority ceiling protocol, for example [21].

THE TOKEN PROTOCOL

To apply the arbitrary deadline analysis to communications we must establish the analogy between scheduling messages and scheduling tasks. Firstly, the time the bus is not available to transmit packets from processor p can be considered as a high priority activity, with period T_token, equal to the worst-case token rotation time [25]. The worst-case time that processor p cannot transmit data is equal to:

    T_token - H_p

where p is the processor we are considering, and H_p is the worst-case time that processor p spends transmitting data while holding the token. Thus a given level-m communications busy period of width w_m,q on processor p experiences interference (i.e. total pre-emption) from all other processors equal to:

    ⌈ w_m,q / T_token ⌉ (T_token - H_p)        (4)

Additionally, messages of higher priority than m and sent from processor p can interfere over the busy period by pre-empting message m. This time is given (from equation 2) by:

    Σ_{h ∈ hp(m) ∩ out(p)} I_h ρ        (5)

where I_h is given (from equation 3) by:

    I_h = ⌈ (w_m,q + J_h) / T_h ⌉ C_h        (6)

and where ρ is the worst-case time taken to transmit a packet, hp(m) is the set of messages of higher priority than m, out(p) is the set of messages sent by tasks on processor p and destined for tasks on processors other than p (and therefore hp(m) ∩ out(p) is the set of all messages of higher priority than m that are transmitted from processor p), C_h is the worst-case number of packets in message h, T_h is the period of message h, and J_h is the release jitter of message h.
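Equations 1 to 3 can be evaluated numerically by fixed-point iteration over w_i,q, extending q until the busy period ends before the next arrival. The following is a minimal Python sketch, not the paper's own program: the task-set encoding is invented for illustration, and the termination test w_i,q ≤ (q+1)T_i - J_i is the standard one for this family of analysis.

```python
from math import ceil

def response_time(i, tasks):
    """Worst-case response time of task i (arbitrary-deadline analysis,
    equations 1-3). tasks maps a name to (C, T, J, B, prio); a larger
    prio value means higher priority. Illustrative sketch only."""
    C_i, T_i, J_i, B_i, P_i = tasks[i]
    hp = [t for n, t in tasks.items() if n != i and t[4] > P_i]
    r, q = 0, 0
    while True:
        # Solve equations 2 and 3 for w_{i,q} by fixed-point iteration.
        w = (q + 1) * C_i + B_i
        while True:
            w_new = (q + 1) * C_i + B_i + sum(
                ceil((w + J_j) / T_j) * C_j for (C_j, T_j, J_j, _, _) in hp)
            if w_new == w:
                break
            w = w_new
        r = max(r, J_i + w - q * T_i)      # equation 1
        if w <= (q + 1) * T_i - J_i:       # busy period ended before next arrival
            return r
        q += 1
```

For example, with a high priority task (C=2, T=10) and a low priority task (C=4, T=100, B=1), the low priority task suffers one pre-emption and one unit of blocking, giving a response time of 7.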

There is clearly a notation clash between (for example) C_m denoting the worst-case number of packets to be transmitted in a given message m, and C_i denoting the worst-case computation time of a task i. This clash is deliberate, since we wish to preserve the analogy between the analysis of task scheduling and message scheduling. To find J_m and T_m we use the concept of attribute inheritance [24, 27]. The task sending the message m, denoted s(m), has a period (5) T_s(m). Therefore a message m queued once per invocation of task s(m) inherits a period of T_s(m). If m is queued every other invocation of s(m) then m inherits a period 2 T_s(m), and so on. In general, therefore, the period of a message m is equal to:

    T_m = e_m T_s(m)        (7)

where e_m is the "every" attribute of the message (i.e. message m is queued at most once every e_m invocations of the sender task s(m)). The release jitter J_m is also inherited from the sending task. As mentioned earlier, release jitter can be thought of as the difference between the earliest and latest releases; at the earliest, a message m could potentially be queued as soon as the sending task arrives. At the latest, the message can be queued just before the sender task completes. This time difference is bounded by r_s(m), the worst-case response time of the sender task (equation 1). Thus the release jitter of a message m is:

    J_m = r_s(m)        (8)

where s(m) is the task sending message m.

We now turn to the blocking behaviour of the communications system. Because a packet cannot be pre-empted once it starts transmitting (since it is removed from the queue prior to transmission), a higher priority message can be blocked by the time taken to transmit a single packet, and thus the blocking factor B_m is equal to ρ. The transmission of the C_m-th packet of a given message m implies the transmission of the message (in the same way that the execution of the final instruction of a task implies the completion of the task).
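Equations 7 and 8 reduce to a one-line inheritance rule; a trivial sketch (the function and parameter names are invented for illustration):

```python
def inherit_message_attributes(T_sender, r_sender, every=1):
    """Attribute inheritance (equations 7 and 8): a message queued at most
    once every `every` invocations of its sender inherits period
    T_m = e_m * T_s(m) and release jitter J_m = r_s(m)."""
    T_m = every * T_sender   # equation 7
    J_m = r_sender           # equation 8
    return T_m, J_m

# e.g. a sender with period 50, worst-case response time 12,
# queueing the message on every second invocation:
print(inherit_message_attributes(50, 12, every=2))  # → (100, 12)
```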
For the moment, however, we seek to bound the worst-case response time of the k-th packet of a given message m (where 1 ≤ k ≤ C_m). Obviously, the worst-case response time of the message can be trivially found by substituting C_m for k. Burns et al showed how the worst-case response time of a task can be reduced if the task executes a final non-pre-emptable phase [5]. Tindell and Burns [23] later extended this analysis to the simple arbitrary deadline analysis [26] to analyse the processing of disk drive requests. We can use the same approach here, since the transmission of any packet of a message cannot be pre-empted once it has begun. To find the worst-case response time of the k-th packet we find the queueing time of the previous k - 1 packets, and then add the time taken to transmit the k-th packet (6); this latter time is given by ρ. The worst-case response time of the k-th packet is given by:

    r_m,k = max over q = 0, 1, 2, 3, ... of ( J_m + w_m,q,k - q T_m + ρ + τ_prop + τ_notif )        (9)

where w_m,q,k is given by:

    w_m,q,k = q C_m ρ + (k - 1) ρ + B_m + Σ_{h ∈ hp(m) ∩ out(p)} I_h ρ + ⌈ w_m,q,k / T_token ⌉ (T_token - H_p)        (10)

Note that equation 9 also includes τ_prop (the electrical propagation delay (7)) and τ_notif. This latter term represents the worst-case time taken for the communications adapter to recognise the arrival of the packet and raise an interrupt on the host processor. Equation 10 can be simplified, since B_m is equal to ρ:

    w_m,q,k = ( q C_m + k + Σ_{h ∈ hp(m) ∩ out(p)} I_h ) ρ + ⌈ w_m,q,k / T_token ⌉ (T_token - H_p)        (11)

The value of w_m,q,k can be found by forming a recurrence relationship (as with previous work):

    w^(n+1)_m,q,k = ( q C_m + k + Σ_{h ∈ hp(m) ∩ out(p)} I_h ) ρ + ⌈ w^(n)_m,q,k / T_token ⌉ (T_token - H_p)

where w^(0)_m,q,k = 0 is a suitable starting value; it can be shown that the sequence of values is guaranteed to converge for a utilisation of less than or equal to 100% [4, 25]. The worst-case response time of a given message m can be found by substituting C_m for k in equation 9, i.e.:

    r_m = r_m,C_m        (12)

(5) Recall that period is the minimum inter-arrival time of the task; the sender task can be either sporadic or periodic.
(6) This approach assumes that k + B_m > 0; if this assumption is not met then a value of w equal to zero results, which is clearly invalid.
(7) The electrical propagation delay may be small for a local area network, or may be very large if, say, a satellite link forms part of the network.

The value of the token rotation time, T_token, can be found by summing the token holding times for all processors:

    T_token = Σ_{p ∈ processors} ( H_p + τ_token )        (13)

where τ_token is the time taken to pass on the token (including both the transmission of the token, and the time between an adapter receiving the token and then starting to transmit packets). Note that it would be possible to have a different τ_token for each processor, in order to take account of the difference in time taken by different processors to process the token; for the purposes of this report we assume a single figure. The time a given processor p spends transmitting data in a token holding slot can be found by determining the maximum number of packets that can be transmitted whilst holding the token:

    H_p = ρ S_p        (14)

where S_p is the maximum number of packets that can be transmitted each time the token arrives at processor p. Thus we have derived analysis bounding the worst-case response time of a message m for the token protocol.

THE PRIORITY BUS

The single underlying concept here is that of the global packet queue: because the protocol operates global priority pre-emptive packet transmission, we can consider there to be just a single priority-ordered packet queue. We can model this global packet queue by taking the earlier token protocol analysis and setting T_token to H_p: this has the effect of setting the token passing time τ_token to zero, and arbitrating the transmission of all messages by the priority of the messages in the packet queue on a given processor p. If we then assume that all messages to be transmitted by all processors on the bus are queued in the packet queue on p, then the packet queue on p behaves as the notional single global packet queue. In effect, processor p models the behaviour of the priority bus protocol. Thus, setting T_token = H_p, we have (from equation 11):

    w_m,q,k = ( q C_m + k + Σ_{h ∈ hp(m) ∩ outgoing} I_h ) ρ        (15)

where outgoing is the set of all messages that will be transmitted on the bus (8).
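The recurrence for w_m,q,k (equation 11), combined with equation 9, can be solved by the usual fixed-point iteration; setting T_token equal to H_p makes the token-interference term vanish and reproduces the priority-bus case of equation 15. The following sketch illustrates this, assuming a utilisation at or below 100% (so the iteration converges) and the same busy-period termination rule used for tasks; the encoding of messages as tuples is invented for the sketch.

```python
from math import ceil

def packet_response_time(k, msg, hp_msgs, rho, T_token, H_p, tau_prop, tau_notif):
    """Worst-case response time of the k-th packet of a message (equations
    9 and 11). msg = (C_m, T_m, J_m); hp_msgs lists the higher-priority
    messages competing for the same packet queue, each as (C_h, T_h, J_h).
    Setting T_token == H_p gives the priority-bus case (equation 15)."""
    C_m, T_m, J_m = msg
    r, q = 0, 0
    while True:
        w = 0  # equation 11 solved by fixed-point iteration, w^(0) = 0
        while True:
            interference = sum(ceil((w + J_h) / T_h) * C_h
                               for (C_h, T_h, J_h) in hp_msgs)
            w_new = ((q * C_m + k + interference) * rho
                     + ceil(w / T_token) * (T_token - H_p))
            if w_new == w:
                break
            w = w_new
        # equation 9: add the k-th packet's transmission, propagation, notification
        r = max(r, J_m + w - q * T_m + rho + tau_prop + tau_notif)
        if w <= (q + 1) * T_m - J_m:   # busy period ended before next arrival
            return r
        q += 1
```

For a two-packet message with no higher-priority traffic, the priority-bus case (T_token = H_p) simply charges the queueing of one packet plus the transmission of the last, while the token case adds one slice of token unavailability per rotation spanned by the busy period.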
Note that this replaces the set out(p) in equation 11, since we now consider that the packet queue for processor p is the notional global packet queue. As we would expect, this equation is independent of p. Note that the value of ρ for the priority bus (equation 15) will typically be larger than the value of ρ for the token bus (equation 11), because of the additional time taken to arbitrate priorities. The worst-case response time of a message m is defined by equation 9. As with the token protocol analysis, we can determine the worst-case response time of a message m, r_m, by substituting C_m for k in equation 9.

(8) Some messages may be sent between tasks on the same processor; we do not wish to consider these here.

A SMALL EXAMPLE

We now present a small example to illustrate the analysis developed. A number of tasks are allocated across three processors. The tasks have given periods and worst-case response times as follows:

    task   processor   T   r
    t1     cpu
    t2     cpu
    t3     cpu
    t4     cpu2        *   *
    t5     cpu2        *   *
    t6     cpu2        *   *
    t7     cpu
    t8     cpu
    t9     cpu
    t10    cpu3        *   *
    t11    cpu3        *   *
    t12    cpu3        *   *

where the symbol * indicates "not specified"; the tasks where this symbol applies are the destination tasks for messages in the system, and will have timing attributes set as a result of the communications analysis (for example, by inheriting a period). The following table details the messages sent from the tasks described above. The messages are listed in priority order.

    message   sender   destination   e   C
    m1        t1       t4            1   2
    m2        t2       t5            2   5
    m3        t3       t6            1   7
    m4        t1       t
    m5        t7       t
    m6        t8       t

Note that e is the "every" parameter, whereby a message m is queued at most once every e_m invocations of the sender task. For the token bus protocol, processors cpu1 and cpu2 are assumed to have a token holding time equivalent to 8 packets, and cpu3 has a holding time equivalent to one packet (no hard real-time messages are sent by this processor, but in order to transmit other soft real-time traffic it needs a slot). The following times (in µs) are assumed:

    ρ          75
    τ_notif     2
    τ_token     5
    τ_prop      1

When running the above problem through a simple program embodying the analysis we obtain the following results, measured in µs:

    message   J   T   r_priority   r_token
    m1
    m2
    m3
    m4
    m5
    m6

where J is the release jitter, inherited from the sender task; T is the period of the message, inherited from the sender task and the "every" attribute; r_priority is the worst-case response time of the message when the system uses the hypothetical real-time priority bus protocol; and r_token is the worst-case response time when the system is connected with the token bus. The token rotation time T_token is equal to 1290 µs. As can be seen, the hypothetical real-time bus has a much better real-time performance than the token bus. This is to be expected, because a token bus fundamentally leads to priority inversion, which reduces real-time performance. This is illustrated by the response times for message m4 in the table above: with the priority bus, message m4 is deferred only by higher priority messages; with the token bus, all messages of all priorities from other processors cause deferral of message m4. The worst-case response times for the packets themselves can also be found (since r_m is defined as the worst-case response time of the last packet of message m). The table below illustrates the worst-case response times of the 8 packets of message m6 for the hypothetical real-time bus protocol and the token protocol. All times are given in µs.

    k   r_priority,k   r_token,k
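The quoted token rotation time can be reproduced directly from equations 13 and 14; a short sketch applying them to the figures given above:

```python
# Equations 13 and 14 applied to the example: token holding slots of
# 8, 8 and 1 packets, rho = 75 us per packet, tau_token = 5 us per token pass.
rho, tau_token = 75, 5
slots = {"cpu1": 8, "cpu2": 8, "cpu3": 1}   # S_p for each processor

H = {p: rho * S_p for p, S_p in slots.items()}          # equation 14: H_p = rho * S_p
T_token = sum(H_p + tau_token for H_p in H.values())    # equation 13
print(T_token)  # → 1290, matching the figure quoted in the text
```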

Notice how the worst-case response times for the eighth packet are the worst-case response times for the message. Notice also the large difference (over 1 ms) between the first and last packets reaching the destination processor when the token protocol is used.

5. COMMUNICATIONS SCHEDULING AND PROCESSOR SCHEDULING

The previous section has described how to obtain the time a message reaches the destination processor, relative to the arrival time of the sending task. However, this is not the end of the problem: the message must be processed by software at the receiving end before it can be handed on to the destination task. Further, the overheads due to this processing must be bounded, to ensure that the analysis bounding worst-case response times for tasks remains sufficient. To this end we develop analysis of the computational model for message delivery, and obtain both a measure of the time taken to process message packets, and the impact of the costs of these overheads on tasks on the destination processor. The results of the analysis lead to a cyclic dependency between message and task scheduling. We term this dependency holistic, and show how it does not prevent the analysis being used to obtain quantitative results.

BOUNDING OVERHEADS ON THE DESTINATION PROCESSOR

Recall that the response time r_m,k is the worst-case time for the k-th packet of message m (where 1 ≤ k ≤ C_m) to reach the destination processor and raise an interrupt, given by equation 9. The task released by the arrival of a message m is denoted d(m); this task is precedence constrained by the completion of the packet interrupt handling the last packet of message m reaching the processor upon which d(m) is located. From analysis of tasks with phasing relationships [2, 28] we can consider task d(m) to be released simultaneously with the raising of the interrupt for the last packet of m (as long as the interrupt has a higher priority than the destination task).
The time r_d(m) can be incorporated in the worst-case end-to-end response time of the message, obtaining an end-to-end measure of the time taken to complete and communicate the results of the computation associated with s(m) to d(m).

The concept of attribute inheritance was described earlier, whereby a message m inherits certain timing properties from the sending task s(m), namely release jitter and period. This concept extends to the destination task, since the destination task d(m) is released by the last packet of message m arriving at the destination processor. The destination task d(m) inherits a release jitter from the message m equal to the difference between the earliest and latest times at which the message m could reach the task. The earliest time is deemed to be ρ_best (where ρ_best is the best-case packet transmission time). This is because at run-time a given message may require only a single packet to carry the required data; the shortest time taken for the message to reach the destination task is therefore ρ_best (assuming that the propagation delay is constant). The latest time a message reaches the destination task is the worst-case response time of the message m, denoted r_m. The difference between the earliest and latest release of the destination task is inherited from the message thus:

J_d(m) = r_m − ρ_best    (16)

Similarly, the period of the destination task is:

T_d(m) = T_m

The end-to-end response time for the generation and consumption of a given message is thus the worst-case response time of the destination task (as illustrated in figure 2). Hence the timing characteristics of task d(m) can be determined, and the computational overheads due to this task bounded using existing schedulability analysis (equation 1). However, we must also address the overheads due to the interrupts raised when packets arrive at a processor. An upper bound on the number of packets that can arrive at a processor in a computation busy period of width w is:

⌈w / ρ_best⌉    (17)

where ρ_best is the best-case packet transmission time⁹. This assumes that interrupts are recognised with no release jitter. If, for example, packet arrivals are polled for using a regular timing event (so-called tick scheduling) then the packet handler could inherit a release jitter of T_tick, the tick scheduler period. This would give the following bound instead:

⌈(w + T_tick) / ρ_best⌉

Another bound on the number of packets arriving in a computation busy period of width w can be found by realising that packet interrupts are the result of incoming messages, and that the number of these reaching processor p in a computation busy period of width w can be bounded. Packet interrupts generated from the arrival of a message are bursty, with timing behaviour inherited from the incoming message. We can model the bursty behaviour of packet interrupts by creating a pseudo task for each incoming message m, with computation time C_packet. The pseudo task behaves as a sporadically periodic task [4, 26]. Sporadically periodic tasks are bursty tasks: there are n invocations of a sporadically periodic task in the burst, with time t between arrivals within a burst. This time is termed the inner period. There is also an outer period, equal to the periodicity of the bursts.
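The destination-task attribute inheritance and the simple packet-count bound above can be sketched in Python (an illustrative sketch; the function and field names are not from the paper):

```python
import math

def dest_task_attrs(r_m, T_m, rho_best):
    """Attributes inherited by the destination task d(m): release jitter
    is the spread between the earliest arrival (one best-case packet
    transmission, rho_best) and the latest (r_m), per equation 16; the
    period is the message period T_m."""
    return {"J": r_m - rho_best, "T": T_m}

def packet_bound_simple(w, rho_best, t_tick=0.0):
    """Upper bound on packet interrupts in a busy period of width w:
    at most one packet can arrive per rho_best (equation 17). A non-zero
    t_tick models polled (tick-scheduled) recognition of arrivals."""
    return math.ceil((w + t_tick) / rho_best)
```

For example, with ρ_best = 75µs, at most ⌈1000/75⌉ = 14 packets can arrive in a 1ms window.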
The timing attributes of the pseudo task created for message m inherit certain timing properties from m. For each pseudo task pt(m), corresponding to message m, we have a release jitter equal to the largest jitter of any of the packets. This jitter is equal to the

⁹ With fixed size packets, the best- and worst-case packet transmission times will be almost identical.

worst-case response time of the last packet of m less the best-case response time. The best-case response time is given by:

ρ_best C_m    (18)

Hence the release jitter is equal to:

J_pt(m) = r_m − ρ_best C_m    (19)

The outer period of the pseudo task pt(m) is inherited similarly from message m:

T_pt(m) = T_m    (20)

The inner period of the pseudo task is equal to the lower bound on the time between subsequent arrivals of the pseudo task within a burst. Since the interrupts are caused by packet arrivals, the inner period is equal to the shortest time taken to transmit a packet:

t_pt(m) = ρ_best    (21)

Finally, the number of invocations of the pseudo task in the burst is equal to the number of packets in the message:

n_pt(m) = C_m    (22)

The sporadically periodic task analysis [26] assumes that n t ≤ T for a given task. For the pseudo task pt(m) this is clearly true.

Theorem 1: An upper bound on the number of packets that can arrive at a processor in a window of width w is:

Σ_{m ∈ in(p), τ = pt(m)} [ min( n_τ, ⌈(J_τ + w − F_τ T_τ) / t_τ⌉ ) + F_τ n_τ ]    (23)

where in(p) is the set of messages destined for processor p, and F_τ is given by:

F_τ = ⌊(J_τ + w) / T_τ⌋    (24)

Proof: Follows directly from theorem 5.1 of previous work on arbitrary deadlines [26].

Equations 17 and 23 both give upper bounds on the number of interrupts, and thus it is sufficient to take the least upper bound:

min( ⌈w / ρ_best⌉, Σ_{m ∈ in(p), τ = pt(m)} [ min( n_τ, ⌈(J_τ + w − F_τ T_τ) / t_τ⌉ ) + F_τ n_τ ] )    (25)

The worst-case computational overhead on a processor due to packet interrupts in a lower priority¹⁰ computation busy period of width w is (from equation 25) equal to:

min( ⌈w / ρ_best⌉, Σ_{m ∈ in(p), τ = pt(m)} [ min( n_τ, ⌈(J_τ + w − F_τ T_τ) / t_τ⌉ ) + F_τ n_τ ] ) C_packet    (26)

where C_packet is the cost of processing a packet interrupt, n_τ is the number of packets in message m, in(p) is the set of messages destined for tasks on processor p, and ρ_best is the best-case packet transmission time.

Note that it would be useful for the communications adapter to be able to distinguish between the first C_m − 1 packets of a message m and the last packet of the message: the adapter could then interrupt the host only when a message arrives, reducing the number of interrupts and some of the overheads due to incoming messages. In this case the pseudo task pt(m) would be a simple task with C_pt(m) = C_packet and T_pt(m) = T_m (as before), but with release jitter J_pt(m) = r_m, since the interrupt for message arrival would be raised on the arrival of the last packet of message m.

BOUNDING THE OVERHEADS AT THE SENDING END

There now only remains the problem of bounding the computational overheads due to outgoing messages. This can be solved easily by noting that the queueing of packets can be performed by the application task, and hence the sending overheads can be bounded simply by subsuming the overheads into the worst-case execution times of the sender task. As was described earlier, the computational model assumes that the messages are decomposed into packets by the sender task, and then copied into memory shared between the communications adapter and the main processor.
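Equations 25 and 26 can be sketched together (an illustrative sketch; function and field names are assumptions, not from the paper):

```python
import math

def packet_interrupt_overhead(w, rho_best, pseudo_tasks, C_packet):
    """Equation 26: worst-case CPU time consumed by packet interrupts in
    a busy period of width w. Takes the lesser of the rate bound
    (equation 17) and the per-message burst bound (equation 23), then
    multiplies by the cost of one interrupt."""
    rate_bound = math.ceil(w / rho_best)
    burst_bound = 0
    for pt in pseudo_tasks:
        J, T, t, n = pt["J"], pt["T"], pt["t"], pt["n"]
        F = math.floor((J + w) / T)            # equation 24
        burst_bound += min(n, math.ceil((J + w - F * T) / t)) + F * n
    return min(rate_bound, burst_bound) * C_packet
```

With C_packet = 30µs, ρ_best = 75µs and a single incoming 8-packet message of period 10ms, the overhead in a 1ms window is 8 × 30 = 240µs.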
The computation time taken to decompose a message can be found by determining the worst-case computation time of the packetising procedure, parameterised by the size of the message. The costs of copying a packet into the queue stored in shared memory can be found by using straightforward worst-case execution time bounding techniques [18], assuming that writes to the shared memory incur a bounded number of wait-states. The costs of pointer manipulation can be determined similarly.

THE EXAMPLE REVISITED

We return to the simple example given earlier, and bound the computational overheads due to incoming messages. The following table shows bounds on the computational overheads for three different windows for two processors. Processor cpu1 is not shown since there are no messages destined for tasks located on cpu1. The assumed

¹⁰ In practice all tasks will be of lower priority than the packet arrival interrupt handler.

value of C_packet is 30, and ρ_best is assumed to be equal to ρ (i.e. 75). All times given are in µs.

[Table: for each of three windows w, the overhead bounds for processors cpu2 and cpu3 under the real-time and token bus protocols; the numeric entries were lost in transcription.]

Notice how the bounds are smaller for the real-time bus. This is because the worst-case response times of messages are smaller, entailing a smaller release jitter for the pseudo tasks (equation 19), and hence a potentially lower number of packet arrivals (equation 23). Notice also how the overheads are significant: from 42% over a small window of 1ms, to 19% over a window of 100ms. The MARS project also found by measurement that overheads due to incoming packets are significant: in one example it was found that 1ms of computation time was lost due to DMA cycle stealing by the communications adapter in an 8ms period [32].

6. HOLISTIC SCHEDULING

We have shown how a message inherits some timing attributes from the sending task. For example, the release jitter and period of a message are partially inherited from the sender task. We have also shown that there are processing overheads incurred when sending and receiving messages. For example, an incoming message generates packet interrupts which invoke the handler. When a message fully arrives the destination task is released; this destination task inherits a release jitter equal to the worst-case response time of the message. The worst-case response time of the destination task represents the end-to-end delay associated with generating and queueing a message on one processor and receiving and processing it at the other processor. We use the analysis bounding the response time of a message to find the release jitter of the destination task. By then applying the existing task analysis we can find the worst-case response time of the destination task, and thus the end-to-end delay.

There is a problem when computing the worst-case response time of a message or a task: the equations for task and message schedulability are mutually dependent, and thus cannot be solved in isolation.
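This cyclic dependency, and the outer fixed-point iteration used below to resolve it, can be sketched as follows (an illustrative sketch: task_response and message_response are placeholders standing in for the task analysis of equation 1, including message overheads, and the message analysis of equation 9; both are assumed monotonic):

```python
def holistic_analysis(tasks, messages, task_response, message_response,
                      max_iters=100):
    """Outer fixed-point iteration over the mutually dependent task and
    message response-time equations. Monotonicity means that starting
    from zero response times and iterating converges to the least fixed
    point (or the iteration threshold is exceeded)."""
    task_r = {t: 0 for t in tasks}
    msg_r = {m: 0 for m in messages}
    for _ in range(max_iters):
        # message response times depend on sender task response times (jitter)
        new_msg_r = {m: message_response(m, task_r) for m in messages}
        # task response times include overheads from incoming messages
        new_task_r = {t: task_response(t, new_msg_r) for t in tasks}
        if new_msg_r == msg_r and new_task_r == task_r:
            return new_task_r, new_msg_r   # converged to a fixed point
        task_r, msg_r = new_task_r, new_msg_r
    raise RuntimeError("analysis did not converge within the threshold")
```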
For example, the release jitter of a message depends on the worst-case response time of the sender task (equation 8). The worst-case response time of a task depends, in part, on the overheads due to incoming messages (equation 26). These depend on the response times of messages (equation 9). A message response time depends, in turn, on its release jitter. We term this dependency holistic scheduling: the timing attributes of one aspect of a system can have a critical effect on the timing attributes of another. The schedulability of a given system can therefore only be determined by addressing the schedulability of the system as a whole.

We overcome this dependency and solve the equations by forming a recurrence relation. The equations form a monotonic function, since an increase in message release jitter does not lead to a decrease in task worst-case response times. Therefore, by assuming task worst-case response times of zero, a suitable starting point can be found. We can then compute a message response time, bound the overheads, and then

compute the task worst-case response times. We can continue the iteration until either the equations converge to a fixed set of response times, or until some threshold is reached. We can illustrate this by returning to the example task set given earlier: the table below gives the worst-case response time of each task and message in the system for each of the outer iterations (the token bus protocol is assumed).

[Table 1: Worst-case response times of tasks and messages (m1..m6, t1..t12) after each outer iteration; all times are measured in µs. The numeric entries were lost in transcription.]

As can be seen, six outer iterations are required before the analysis converges to values for all messages and tasks. The above response times are for the example task set with a token bus protocol (the token rotation time T_token is equal to 1290µs). The following table gives the worst-case response times for tasks and messages when the priority bus protocol is used instead. For comparison, the final values from the above table for the token bus are also presented:

[Table 2: The results of applying the holistic analysis to the priority and token bus protocols, giving r (priority) and r (token) for each message m1..m6 and task t1..t12; all times are in µs. The numeric entries were lost in transcription.]

Task t8 sends message m6 to task t12; task t12 inherits a release jitter from the response time of message m6. The worst-case response time of task t12 (6.8ms for the priority bus, and 11.3ms for the token bus) represents the end-to-end delay from computing results in task t8 to the results being communicated and processed in task t12.

7. EXTENDING THE MODEL: INSUFFICIENT SHARED BUFFER SPACE

We have assumed so far that the shared buffer space between the host processor and the communications adapter is sufficiently large never to overflow. One of the problems with such a shared packet queue is that in many architectures the total shared buffer space may be too small to hold the worst-case number of packets that could be queued (i.e. the buffer space could overflow). A solution to this problem is to have a two-level queueing system, where packets are queued in the host processor, and then transferred to the shared buffer when space is freed. In this section, for each communications protocol, we will first show how a bound on the buffer space required can be found, and then describe an approach to implementing two-level buffering and extend the analysis to handle limited buffer space.

BUFFERING AND THE TOKEN-PASSING PROTOCOL

When the token arrives at the communications adapter for processor p, at most S_p packets are transmitted. In the case where the packet queue is stored in the main memory of the host processor, the packets cannot be removed without affecting the main processor (as mentioned earlier, DMA access to main memory causes CPU cycles to be stolen and thus affects the processor). The approach taken here to transfer each

packet to the buffer in the communications adapter is for an interrupt to be raised upon the arrival of the token, with the main processor removing and copying the first packet¹¹. The adapter then signals the host to copy the next packet by means of a "packet sent" interrupt.

Upon receipt of the token, the communications adapter raises an interrupt in the main processor. The interrupt handler in the main processor commences to copy the first and second packets in the packet queue into the double buffer slots. As soon as the first packet is copied, the communications adapter can commence transmission. The worst-case response time of the interrupt handler to copy the first packet is denoted r_copy. When the first packet has been transmitted, the communications adapter raises a "packet sent" interrupt and starts transmission of the packet in the next double-buffer slot. The interrupt handler must fill the freed double-buffer slot within time ρ to ensure that the communications adapter will be able to commence transmission of the next packet as soon as it is ready. Therefore we assume (and require) that r_copy < ρ (figure 3).

[Figure 3: timeline showing token arrival, then r_copy (copy of the 1st packet), then transmission of the 1st, 2nd and 3rd packets, each of duration ρ, with the copy of each subsequent packet overlapped with the transmission of the current one.]

In order to adopt this approach we must update the current analysis, both for message schedulability and for bounding the overheads incurred when transmitting packets. We first address the message delays introduced by the double buffering approach. Firstly, the blocking factor of a given message m increases from ρ to 2ρ, since there are at most two lower priority packets outstanding in the double buffer (one being transmitted, and one pending transmission). Secondly, the delay between receiving the token and commencing the transmission of the first packet in the queue must be bounded. This is accomplished by including the delay r_copy in the term τ_token (equation 13).
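The timing requirement of the double-buffering scheme, and its effect on blocking, can be captured in a small sketch (illustrative names; not from the paper):

```python
def double_buffer_feasible(r_copy, rho):
    """The copy interrupt handler must refill a freed double-buffer slot
    before the adapter finishes transmitting the current packet, so we
    require r_copy < rho for back-to-back packet transmission."""
    return r_copy < rho

def blocking_factor(rho, double_buffered):
    """With double buffering, up to two lower priority packets may be
    outstanding (one transmitting, one pending), so the blocking term
    seen by a message grows from rho to 2 * rho."""
    return 2 * rho if double_buffered else rho
```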
The overheads incurred from sending packets with the double buffering approach come from two major sources: firstly, the token arrival interrupt handler is invoked once for every token; and secondly, the copying of packets to the communications adapter buffer space. We denote the best-case token rotation time by t_token, and say

¹¹ It might be thought that transferring the packet data via DMA would be faster, but this is not the case: most modern processors can copy data at least as fast as DMA transfers.


More information

Computer Network Fundamentals Spring Week 3 MAC Layer Andreas Terzis

Computer Network Fundamentals Spring Week 3 MAC Layer Andreas Terzis Computer Network Fundamentals Spring 2008 Week 3 MAC Layer Andreas Terzis Outline MAC Protocols MAC Protocol Examples Channel Partitioning TDMA/FDMA Token Ring Random Access Protocols Aloha and Slotted

More information

Configuring QoS CHAPTER

Configuring QoS CHAPTER CHAPTER 34 This chapter describes how to use different methods to configure quality of service (QoS) on the Catalyst 3750 Metro switch. With QoS, you can provide preferential treatment to certain types

More information

Multitasking / Multithreading system Supports multiple tasks

Multitasking / Multithreading system Supports multiple tasks Tasks and Intertask Communication Introduction Multitasking / Multithreading system Supports multiple tasks As we ve noted Important job in multitasking system Exchanging data between tasks Synchronizing

More information

Operating System Concepts Ch. 5: Scheduling

Operating System Concepts Ch. 5: Scheduling Operating System Concepts Ch. 5: Scheduling Silberschatz, Galvin & Gagne Scheduling In a multi-programmed system, multiple processes may be loaded into memory at the same time. We need a procedure, or

More information

Latency on a Switched Ethernet Network

Latency on a Switched Ethernet Network Page 1 of 6 1 Introduction This document serves to explain the sources of latency on a switched Ethernet network and describe how to calculate cumulative latency as well as provide some real world examples.

More information

Scheduling Algorithm for Hard Real-Time Communication in Demand Priority Network

Scheduling Algorithm for Hard Real-Time Communication in Demand Priority Network Scheduling Algorithm for Hard Real-Time Communication in Demand Priority Network Taewoong Kim, Heonshik Shin, and Naehyuck Chang Department of Computer Engineering Seoul National University, Seoul 151-742,

More information

Efficient Implementation of IPCP and DFP

Efficient Implementation of IPCP and DFP Efficient Implementation of IPCP and DFP N.C. Audsley and A. Burns Department of Computer Science, University of York, York, UK. email: {neil.audsley, alan.burns}@york.ac.uk Abstract Most resource control

More information

Input Output (IO) Management

Input Output (IO) Management Input Output (IO) Management Prof. P.C.P. Bhatt P.C.P Bhatt OS/M5/V1/2004 1 Introduction Humans interact with machines by providing information through IO devices. Manyon-line services are availed through

More information

Schedulability-Driven Communication Synthesis for Time Triggered Embedded Systems

Schedulability-Driven Communication Synthesis for Time Triggered Embedded Systems Real-Time Systems, 26, 297±325, 2004 # 2004 Kluwer Academic Publishers. Manufactured in The Netherlands. Schedulability-Driven Communication Synthesis for Time Triggered Embedded Systems PAUL POP PETRU

More information

OVERVIEW. Last Week: But if frequency of high priority task increases temporarily, system may encounter overload: Today: Slide 1. Slide 3.

OVERVIEW. Last Week: But if frequency of high priority task increases temporarily, system may encounter overload: Today: Slide 1. Slide 3. OVERVIEW Last Week: Scheduling Algorithms Real-time systems Today: But if frequency of high priority task increases temporarily, system may encounter overload: Yet another real-time scheduling algorithm

More information

Real-Time Mixed-Criticality Wormhole Networks

Real-Time Mixed-Criticality Wormhole Networks eal-time Mixed-Criticality Wormhole Networks Leandro Soares Indrusiak eal-time Systems Group Department of Computer Science University of York United Kingdom eal-time Systems Group 1 Outline Wormhole Networks

More information

Design Patterns for Real-Time Computer Music Systems

Design Patterns for Real-Time Computer Music Systems Design Patterns for Real-Time Computer Music Systems Roger B. Dannenberg and Ross Bencina 4 September 2005 This document contains a set of design patterns for real time systems, particularly for computer

More information

MODELS OF DISTRIBUTED SYSTEMS

MODELS OF DISTRIBUTED SYSTEMS Distributed Systems Fö 2/3-1 Distributed Systems Fö 2/3-2 MODELS OF DISTRIBUTED SYSTEMS Basic Elements 1. Architectural Models 2. Interaction Models Resources in a distributed system are shared between

More information

Chapter 8 & Chapter 9 Main Memory & Virtual Memory

Chapter 8 & Chapter 9 Main Memory & Virtual Memory Chapter 8 & Chapter 9 Main Memory & Virtual Memory 1. Various ways of organizing memory hardware. 2. Memory-management techniques: 1. Paging 2. Segmentation. Introduction Memory consists of a large array

More information

Configuration Guideline for CANopen Networks

Configuration Guideline for CANopen Networks Configuration Guideline for CANopen Networks Martin Rostan, Beckhoff Unlike most other fieldbus systems, CANopen provides many degrees of freedom to configure the communication behaviour of the network.

More information

Scheduling Periodic and Aperiodic. John P. Lehoczky and Sandra R. Thuel. and both hard and soft deadline aperiodic tasks using xed-priority methods.

Scheduling Periodic and Aperiodic. John P. Lehoczky and Sandra R. Thuel. and both hard and soft deadline aperiodic tasks using xed-priority methods. Chapter 8 Scheduling Periodic and Aperiodic Tasks Using the Slack Stealing Algorithm John P. Lehoczky and Sandra R. Thuel This chapter discusses the problem of jointly scheduling hard deadline periodic

More information

Lixia Zhang M. I. T. Laboratory for Computer Science December 1985

Lixia Zhang M. I. T. Laboratory for Computer Science December 1985 Network Working Group Request for Comments: 969 David D. Clark Mark L. Lambert Lixia Zhang M. I. T. Laboratory for Computer Science December 1985 1. STATUS OF THIS MEMO This RFC suggests a proposed protocol

More information

Chapter 3: Industrial Ethernet

Chapter 3: Industrial Ethernet 3.1 Introduction Previous versions of this handbook have dealt extensively with Ethernet so it is not our intention to revisit all the basics. However, because Smart Grid protocols are increasingly reliant

More information

Scheduling Sporadic and Aperiodic Events in a Hard Real-Time System

Scheduling Sporadic and Aperiodic Events in a Hard Real-Time System Technical Report CMU/SEI-89-TR-11 ESD-TR-89-19 Scheduling Sporadic and Aperiodic Events in a Hard Real-Time System Brinkley Sprunt Lui Sha John Lehoczky April 1989 Technical Report CMU/SEI-89-TR-11 ESD-TR-89-19

More information

Modelling a Video-on-Demand Service over an Interconnected LAN and ATM Networks

Modelling a Video-on-Demand Service over an Interconnected LAN and ATM Networks Modelling a Video-on-Demand Service over an Interconnected LAN and ATM Networks Kok Soon Thia and Chen Khong Tham Dept of Electrical Engineering National University of Singapore Tel: (65) 874-5095 Fax:

More information

Real-time operating systems and scheduling

Real-time operating systems and scheduling Real-time operating systems and scheduling Problem 21 Consider a real-time operating system (OS) that has a built-in preemptive scheduler. Each task has a unique priority and the lower the priority id,

More information

Model answer of AS-4159 Operating System B.tech fifth Semester Information technology

Model answer of AS-4159 Operating System B.tech fifth Semester Information technology Q.no I Ii Iii Iv V Vi Vii viii ix x Model answer of AS-4159 Operating System B.tech fifth Semester Information technology Q.1 Objective type Answer d(321) C(Execute more jobs in the same time) Three/three

More information

Introduction. The fundamental purpose of data communications is to exchange information between user's computers, terminals and applications programs.

Introduction. The fundamental purpose of data communications is to exchange information between user's computers, terminals and applications programs. Introduction The fundamental purpose of data communications is to exchange information between user's computers, terminals and applications programs. Simplified Communications System Block Diagram Intro-1

More information

Barrelfish Project ETH Zurich. Message Notifications

Barrelfish Project ETH Zurich. Message Notifications Barrelfish Project ETH Zurich Message Notifications Barrelfish Technical Note 9 Barrelfish project 16.06.2010 Systems Group Department of Computer Science ETH Zurich CAB F.79, Universitätstrasse 6, Zurich

More information

THE TRANSPORT LAYER UNIT IV

THE TRANSPORT LAYER UNIT IV THE TRANSPORT LAYER UNIT IV The Transport Layer: The Transport Service, Elements of Transport Protocols, Congestion Control,The internet transport protocols: UDP, TCP, Performance problems in computer

More information

ANALYSIS OF THE CORRELATION BETWEEN PACKET LOSS AND NETWORK DELAY AND THEIR IMPACT IN THE PERFORMANCE OF SURGICAL TRAINING APPLICATIONS

ANALYSIS OF THE CORRELATION BETWEEN PACKET LOSS AND NETWORK DELAY AND THEIR IMPACT IN THE PERFORMANCE OF SURGICAL TRAINING APPLICATIONS ANALYSIS OF THE CORRELATION BETWEEN PACKET LOSS AND NETWORK DELAY AND THEIR IMPACT IN THE PERFORMANCE OF SURGICAL TRAINING APPLICATIONS JUAN CARLOS ARAGON SUMMIT STANFORD UNIVERSITY TABLE OF CONTENTS 1.

More information

Exam Review TexPoint fonts used in EMF.

Exam Review TexPoint fonts used in EMF. Exam Review Generics Definitions: hard & soft real-time Task/message classification based on criticality and invocation behavior Why special performance measures for RTES? What s deadline and where is

More information

1995 Paper 10 Question 7

1995 Paper 10 Question 7 995 Paper 0 Question 7 Why are multiple buffers often used between producing and consuming processes? Describe the operation of a semaphore. What is the difference between a counting semaphore and a binary

More information

Final Exam Preparation Questions

Final Exam Preparation Questions EECS 678 Spring 2013 Final Exam Preparation Questions 1 Chapter 6 1. What is a critical section? What are the three conditions to be ensured by any solution to the critical section problem? 2. The following

More information

Suggestions for Stream Based Parallel Systems in Ada

Suggestions for Stream Based Parallel Systems in Ada Suggestions for Stream Based Parallel Systems in Ada M. Ward * and N. C. Audsley Real Time Systems Group University of York York, England (mward,neil)@cs.york.ac.uk Abstract Ada provides good support for

More information

Priority Traffic CSCD 433/533. Advanced Networks Spring Lecture 21 Congestion Control and Queuing Strategies

Priority Traffic CSCD 433/533. Advanced Networks Spring Lecture 21 Congestion Control and Queuing Strategies CSCD 433/533 Priority Traffic Advanced Networks Spring 2016 Lecture 21 Congestion Control and Queuing Strategies 1 Topics Congestion Control and Resource Allocation Flows Types of Mechanisms Evaluation

More information

Interprocess Communication By: Kaushik Vaghani

Interprocess Communication By: Kaushik Vaghani Interprocess Communication By: Kaushik Vaghani Background Race Condition: A situation where several processes access and manipulate the same data concurrently and the outcome of execution depends on the

More information

Commercial Real-time Operating Systems An Introduction. Swaminathan Sivasubramanian Dependable Computing & Networking Laboratory

Commercial Real-time Operating Systems An Introduction. Swaminathan Sivasubramanian Dependable Computing & Networking Laboratory Commercial Real-time Operating Systems An Introduction Swaminathan Sivasubramanian Dependable Computing & Networking Laboratory swamis@iastate.edu Outline Introduction RTOS Issues and functionalities LynxOS

More information

Extending RTAI Linux with Fixed-Priority Scheduling with Deferred Preemption

Extending RTAI Linux with Fixed-Priority Scheduling with Deferred Preemption Extending RTAI Linux with Fixed-Priority Scheduling with Deferred Preemption Mark Bergsma, Mike Holenderski, Reinder J. Bril, Johan J. Lukkien System Architecture and Networking Department of Mathematics

More information

AST: scalable synchronization Supervisors guide 2002

AST: scalable synchronization Supervisors guide 2002 AST: scalable synchronization Supervisors guide 00 tim.harris@cl.cam.ac.uk These are some notes about the topics that I intended the questions to draw on. Do let me know if you find the questions unclear

More information

4/6/2011. Informally, scheduling is. Informally, scheduling is. More precisely, Periodic and Aperiodic. Periodic Task. Periodic Task (Contd.

4/6/2011. Informally, scheduling is. Informally, scheduling is. More precisely, Periodic and Aperiodic. Periodic Task. Periodic Task (Contd. So far in CS4271 Functionality analysis Modeling, Model Checking Timing Analysis Software level WCET analysis System level Scheduling methods Today! erformance Validation Systems CS 4271 Lecture 10 Abhik

More information

Real-time Support in Operating Systems

Real-time Support in Operating Systems Real-time Support in Operating Systems Colin Perkins teaching/2003-2004/rtes4/lecture11.pdf Lecture Outline Overview of the rest of the module Real-time support in operating systems Overview of concepts

More information

Coordination and Agreement

Coordination and Agreement Coordination and Agreement Nicola Dragoni Embedded Systems Engineering DTU Informatics 1. Introduction 2. Distributed Mutual Exclusion 3. Elections 4. Multicast Communication 5. Consensus and related problems

More information

What s an Operating System? Real-Time Operating Systems. Cyclic Executive. Do I Need One? Handling an Interrupt. Interrupts

What s an Operating System? Real-Time Operating Systems. Cyclic Executive. Do I Need One? Handling an Interrupt. Interrupts What s an Operating System? Real-Time Operating Systems Provides environment for executing programs Prof. Stephen A. Edwards Process abstraction for multitasking/concurrency Scheduling Hardware abstraction

More information

15: OS Scheduling and Buffering

15: OS Scheduling and Buffering 15: OS Scheduling and ing Mark Handley Typical Audio Pipeline (sender) Sending Host Audio Device Application A->D Device Kernel App Compress Encode for net RTP ed pending DMA to host (~10ms according to

More information

Addresses in the source program are generally symbolic. A compiler will typically bind these symbolic addresses to re-locatable addresses.

Addresses in the source program are generally symbolic. A compiler will typically bind these symbolic addresses to re-locatable addresses. 1 Memory Management Address Binding The normal procedures is to select one of the processes in the input queue and to load that process into memory. As the process executed, it accesses instructions and

More information

Computer Systems Assignment 4: Scheduling and I/O

Computer Systems Assignment 4: Scheduling and I/O Autumn Term 018 Distributed Computing Computer Systems Assignment : Scheduling and I/O Assigned on: October 19, 018 1 Scheduling The following table describes tasks to be scheduled. The table contains

More information

Operating Systems Comprehensive Exam. Spring Student ID # 3/20/2013

Operating Systems Comprehensive Exam. Spring Student ID # 3/20/2013 Operating Systems Comprehensive Exam Spring 2013 Student ID # 3/20/2013 You must complete all of Section I You must complete two of the problems in Section II If you need more space to answer a question,

More information

Episode 5. Scheduling and Traffic Management

Episode 5. Scheduling and Traffic Management Episode 5. Scheduling and Traffic Management Part 3 Baochun Li Department of Electrical and Computer Engineering University of Toronto Outline What is scheduling? Why do we need it? Requirements of a scheduling

More information

UNIT -3 PROCESS AND OPERATING SYSTEMS 2marks 1. Define Process? Process is a computational unit that processes on a CPU under the control of a scheduling kernel of an OS. It has a process structure, called

More information

NEW STABILITY RESULTS FOR ADVERSARIAL QUEUING

NEW STABILITY RESULTS FOR ADVERSARIAL QUEUING NEW STABILITY RESULTS FOR ADVERSARIAL QUEUING ZVI LOTKER, BOAZ PATT-SHAMIR, AND ADI ROSÉN Abstract. We consider the model of adversarial queuing theory for packet networks introduced by Borodin et al.

More information

Operating Systems Comprehensive Exam. Spring Student ID # 3/16/2006

Operating Systems Comprehensive Exam. Spring Student ID # 3/16/2006 Operating Systems Comprehensive Exam Spring 2006 Student ID # 3/16/2006 You must complete all of part I (60%) You must complete two of the three sections in part II (20% each) In Part I, circle or select

More information

CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007

CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007 CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007 Question 344 Points 444 Points Score 1 10 10 2 10 10 3 20 20 4 20 10 5 20 20 6 20 10 7-20 Total: 100 100 Instructions: 1. Question

More information

7. Multimedia Operating System. Contents. 7.3 Resource Management. 7.4 Process Management. 7.2 Real Time Systems. 7.5 Prototype Systems. 7.

7. Multimedia Operating System. Contents. 7.3 Resource Management. 7.4 Process Management. 7.2 Real Time Systems. 7.5 Prototype Systems. 7. Contents 7. Overview 7.2 Real Time Systems 7.3 Resource Management Dimensions in Resource Design Reservation Strategies 7.4 Process Management Classification of Real-Time Scheduling Strategies Schedulability

More information

13 Sensor networks Gathering in an adversarial environment

13 Sensor networks Gathering in an adversarial environment 13 Sensor networks Wireless sensor systems have a broad range of civil and military applications such as controlling inventory in a warehouse or office complex, monitoring and disseminating traffic conditions,

More information