Combining Statistical and Online Model Checking based on UPPAAL and PRISM


Master Thesis

Sascha Lehmann

Combining Statistical and Online Model Checking based on UPPAAL and PRISM

October 1, 2016

supervised by: Prof. Dr. Sibylle Schupp, Prof. Dr. Alexander Schlaefer

Hamburg University of Technology (TUHH), Technische Universität Hamburg-Harburg, Institute for Software Systems, 21073 Hamburg


Declaration of Authorship (Eidesstattliche Erklärung)

I hereby declare in lieu of oath that I have written this Master thesis independently and have used no sources or aids other than those stated. This thesis has not been submitted in this or a similar form to any examination board before.

Hamburg, October 1, 2016

Sascha Lehmann


Contents

1. Introduction
2. Conceptual Foundations
   2.1. Model Verification and Model Checking
      2.1.1. Boolean Satisfiability Problem (SAT)
      2.1.2. Computation Tree Logic (CTL)
      2.1.3. Linear-Time Temporal Logic (LTL)
   2.2. Model Checking Techniques
      2.2.1. Explicit-State Model Checking
      2.2.2. Symbolic Model Checking
      2.2.3. Statistical Model Checking (SMC)
      2.2.4. Online Model Checking (OMC)
   2.3. Statistical Methods
3. UPPAAL and PRISM
   3.1. UPPAAL
      3.1.1. UPPAAL-SMC
   3.2. PRISM
4. Approaching Verification Timing
   4.1. Timing Problem
   4.2. General Routine
   4.3. Query Preprocessor and Scheduler
   4.4. Scheduler Evaluation
5. Framework Adaptability Extensions
   5.1. Modular Structure
   5.2. Intermediate Model Layer
   5.3. Simulator and Verifier Integration Extensions
   5.4. Summary and Evaluation
6. GUI Solutions
   6.1. GUI Introduction
   6.2. Graphical Experiment View
   6.3. Code-Based Experiment View
   6.4. Graphical Model View
7. Reconstruction Routine Extension
   7.1. Starting Point
   7.2. Necessary Changes and Conditions
   7.3. State Determination, Tracing and Verification
   7.4. Routine Evaluation

8. Head Motion Case Study
   8.1. Concept of the Case Study
   8.2. Motion Experiment Protocol
   8.3. Model Structure and Attributes
   8.4. Experiment Process
9. Case Study and Framework Evaluation
   9.1. Experiment Results
   9.2. Case Study Evaluation
   9.3. Framework Evaluation
10. Conclusion
A. Complete Verification Data Traces
   A.1. Data Traces of the Freedom-Degree Constrained Model (Uppaal)
   A.2. Data Traces of the Value-Range Constrained Model (Uppaal)
   A.3. Data Traces of the Freedom-Degree Constrained Model (Prism)
   A.4. Data Traces of the Value-Range Constrained Model (Prism)
B. User Manual

List of Figures

2.1. A basic example of a Discrete-Time Markov Chain
2.2. A basic example of a Network of Stochastic Timed Automata
2.3. A comparison of the static and online verification approaches
3.1. The model view of the Uppaal GUI
3.2. Additional components and concepts with the Uppaal-SMC add-on
3.3. The code-based model view of the Prism GUI
4.1. The query pre-processing steps (right) within the verification cycle
4.2. The structure of the pre-processing strategy pattern
5.1. The complete modular framework structure with component dependencies
5.2. The original model checker integration solution with direct object access
5.3. The new model checker integration with an abstract and concrete intermediate layer based on interfaces
5.4. Comparison of the Load-method implemented for Uppaal and Prism
5.5. Comparison of the Query-method implemented for Uppaal and Prism
6.1. The GUI distributed with the original OMC framework
6.2. The graphical experiment view of the GUI
6.3. The code-based experiment view of the GUI
6.4. The graphical model view of the GUI
7.1. The assumed and real paths of a potential simulation scenario
7.2. The model simulation trace considering one path at a time
7.3. The model simulation trace considering all possible paths
7.4. The model simulation trace considering paths above a probability threshold
7.5. An example for the determination of average variable values spanning multiple traces
8.1. The translation and rotation sequences of the three patterns steady position (left), fast refocus (center), and falling asleep (right)
8.2. The translation and rotation traces of 4 different test persons
8.3. The components of the first basic Uppaal head motion model
8.4. The main component of the state-based Uppaal head motion model
8.5. Exemplary components of the Prism head motion model
9.1. A sample probability trace for the freedom-degree constrained pattern type based on a Uppaal model
9.2. The first experiment run for the value-range constrained pattern concept based on a Uppaal model
9.3. The second experiment run for the value-range constrained pattern concept based on a Uppaal model

9.4. A sample probability trace for the freedom-degree constrained pattern type based on a Prism model
9.5. A sample probability trace for the value-range constrained pattern type based on a Prism model
A.1. Freedom-Degree Constrained Model - Uppaal - Test Person 1
A.2. Freedom-Degree Constrained Model - Uppaal - Test Person 2
A.3. Freedom-Degree Constrained Model - Uppaal - Test Person 3
A.4. Freedom-Degree Constrained Model - Uppaal - Test Person 4
A.5. Value-Range Constrained Model - Uppaal - Test Person 1 - Run 1
A.6. Value-Range Constrained Model - Uppaal - Test Person 1 - Run 2
A.7. Value-Range Constrained Model - Uppaal - Test Person 2 - Run 1
A.8. Value-Range Constrained Model - Uppaal - Test Person 2 - Run 2
A.9. Value-Range Constrained Model - Uppaal - Test Person 3 - Run 1
A.10. Value-Range Constrained Model - Uppaal - Test Person 3 - Run 2
A.11. Value-Range Constrained Model - Uppaal - Test Person 4 - Run 1
A.12. Value-Range Constrained Model - Uppaal - Test Person 4 - Run 2
A.13. Freedom-Degree Constrained Model - Prism - Test Person 1
A.14. Freedom-Degree Constrained Model - Prism - Test Person 2
A.15. Freedom-Degree Constrained Model - Prism - Test Person 3
A.16. Freedom-Degree Constrained Model - Prism - Test Person 4
A.17. Value-Range Constrained Model - Prism - Test Person 1
A.18. Value-Range Constrained Model - Prism - Test Person 2
A.19. Value-Range Constrained Model - Prism - Test Person 3
A.20. Value-Range Constrained Model - Prism - Test Person 4

List of Tables

5.1. Methods declared by the ISimulator-interface
5.2. Methods declared by the IVerifier-interface


Abstract

Building on the insights obtained in the preceding project work, in which we analysed the advantages and limitations of the online model checking framework with statistical capability extensions, we focus on the timing and real-time simulation aspects that are most critical for combining the online and the statistical model checking concepts. To this end, we extend and restructure the Java implementation of the SOMC framework, and use the improved modularity to handle different model checkers such as Uppaal and Prism. Furthermore, we describe a model simulation and trace reconstruction concept that is adapted to deal with the non-deterministic aspects of stochastic systems, and again point out the benefits and possible problems of applying this concept. Finally, we gather all SOMC-related routines in a more user-friendly integrated development and application environment. The overall results show that even though the possible state space explosion of the verification process is eliminated, a possibly unbounded state space can now arise during the simulation and reconstruction process when stochastic elements are included and parallel traces cannot be reduced. First experiments with the SOMC approach for pattern decisions in a head motion tracking system indicate its applicability as an alternative to explicit-state and symbolic online techniques.

1. Introduction

For several decades, the approach of system verification and its sub-topic of model checking have been investigated and improved to deal with the continuously evolving field of software and hardware systems. Propositions are derived from system specifications and checked against the formal system description to determine whether these properties are satisfied. At first, the development of verification techniques was mainly aimed at static systems, which were designed for a specific, completely defined purpose, and were therefore utilised in a separate environment that was not intended to react to changes occurring in its surroundings. Due to this limited flexibility, valid behaviour could already be guaranteed with a one-time analysis before the actual deployment of the system. This approach had to change drastically as cyber-physical systems emerged, whose continuously increasing complexity and interconnection required a dynamic approach capable of considering changing system behaviour during run-time. The classical techniques fail at this point, as they assume the given constraints to hold for infinite paths or even globally, which does not allow for model parameters that are continuously updated with non-deterministic environment observations. For that reason, new techniques were required to deal with properties that are subject to only temporary validity. A recent solution that is specifically designed to handle real-time environments and adaptable models is the Online Model Checking (OMC) approach. This concept considers overlapping time intervals of the model execution for its verification process, and provides enough information to react to incorrect system behaviour within a defined time buffer for which valid behaviour is still guaranteed. Nevertheless, this model checking approach is based on techniques of symbolic and explicit-state model checking, which can result in an unbounded state space, leading to excessive time consumption. To overcome this limitation, it is necessary to choose an alternative underlying model checking technique that allows the computation times of the verification process to be restricted.

In this work, we extend the given OMC framework with the routines needed to integrate the existing concept of Statistical Model Checking (SMC), a sample-based verification approach that uses the partial results of simulation runs to estimate the probability that a proposition holds. Furthermore, we aim for an easier creation routine for experiment models, a more detailed overview of real-time verification data, and support for different model checking engines apart from the originally used Uppaal model checker. Finally, we want to test the framework extension in a head motion experiment to show its basic applicability.

Following this introduction, we first give an overview of the foundations and concepts of model checking and statistics in Chapter 2, which form the basis of both the timing- and simulation-related SOMC solutions developed in this thesis. Next, we take a closer look at the two utilised model verification systems Uppaal and Prism in Chapter 3. Afterwards, we describe the concept, rationale, and implementation of each individual time-, extensibility- and simulation-relevant part of the framework, starting with a proper

verification query timing in Chapter 4. The following Chapter 5 deals with the different framework adaptability extensions that we implemented to allow the given framework to support not only Uppaal - which it was previously tailored to - but any other model checker that features a suitable API. For that purpose, we provide general interfaces that require as few code adaptions as possible. In Chapter 6, we explain the changes we applied to the GUI delivered with the framework to allow for a more fluent and dynamic experiment creation, simulation and verification process loop. After that, we move on to the conceptual part of the trace reconstruction and stochastic simulation routine in Chapter 7, in which we explain the required additions and the limitations emerging from stochastic elements inside a model. The adapted framework is then applied in a case study in Chapter 8, dealing with the tracking and identification of head motion patterns for a possible medical application. The case study results are evaluated and interpreted in Chapter 9, and a conclusion regarding the overall SOMC concept and especially its real-time simulation aspect is provided in the final Chapter 10.

2. Conceptual Foundations

To understand the different processes and model types we deal with in the course of this thesis, it is necessary to clarify the underlying concepts first. For that reason, we use this chapter to present the general idea of model verification and model checking as well as selected variations, and explain the model conceptions that are compatible with each of these techniques. We start with an overview of model verification and checking in Section 2.1, which also introduces the fundamental SAT problem, and furthermore explains the two logics CTL and LTL, which are most widely used to express and analyse properties in terms of their satisfiability. Afterwards, we explain different model checking techniques in Section 2.2, starting with Explicit-State and Symbolic Model Checking, followed by a closer look at Statistical Model Checking (SMC) and Online Model Checking (OMC) as the two fundamental concepts our framework implementation is built upon. Finally, we introduce the basic statistical concepts needed for the application of SOMC in Section 2.3.

2.1. Model Verification and Model Checking

As an umbrella term for a large range of techniques used for the mathematical derivation of system correctness, (model) verification can be subdivided into different classes, all of which are based on attributes or axiomatic assumptions taken for granted during the verification process. In the case of formal verification, these attributes can be part of the proof construct, or they can be derived from the system specifications, as in the process of software or model verification. The verification process can be divided into static and dynamic approaches. The already mentioned formal verification technique is classified as a static process, as it uses formal logics and mathematical methods to analyse a system and derive statements about its correctness, whereas specific types of system verification that utilise a testing environment to check properties can be classified as dynamic. One specific branch dealing with abstract system models is the Model Checking technique. In most cases, certain attributes are derived from observations of a real-world system, and the model checking technique is then applied to show that these attributes also hold for the abstract model. Formally, two components are relevant for model checking: the formal description - often denoted as M - which represents the model itself, and the logical properties denoted as φ, which represent the attributes derived from either system specifications or observations. The logical properties are checked against the formal description, which leads to one of the following results: if the specified properties hold and thus the formal description turns out to be a valid model for the set of propositions, the result is denoted as M ⊨ φ, and otherwise as M ⊭ φ. As the model and its properties are based on formal requirements, the model checking process can be automated, which provides a great advantage compared to a manual deductive technique. In the common implementation of a model checker, the system reports the validity of the specified properties for the formal model description, and in case of a violation of

one or more propositions, it additionally returns a counterexample containing the path and final state of the violation. For the representation of a formal model description, finite state machines can be used, as well as transition systems or programmatic procedures, depending on the component requirements and conceptual specification of the system. The propositions can be formulated in a symbolic logic, including the widespread Computation Tree Logic (CTL) and the Linear-Time Temporal Logic (LTL), which are all based on the underlying Boolean Satisfiability Problem (SAT).

2.1.1. Boolean Satisfiability Problem (SAT)

The Boolean Satisfiability Problem (SAT) arises with the use of atomic or composed logical formulae, for which an assignment of the set of involved Boolean variables is sought that evaluates the expression to true. If such an assignment exists, the formula is called satisfiable. As the problem is proven to be NP-complete, it is desirable to find techniques that optimise the search process and at least limit the exponential growth of the state space to a certain extent. In the specific case of model checking, for which the composed formulae are derived from the system specifications, two resulting statements in terms of the SAT problem are aimed for: firstly, we want to find out whether the formula can be satisfied at all, which is the case when there exists at least one valid assignment of variables for which the expression evaluates to true; and secondly, we want to see whether the actual environment allows for such an assignment, based on the conditions derived for that environment from system observations.

2.1.2. Computation Tree Logic (CTL)

One of the most commonly used logics for the purpose of model verification is the Computation Tree Logic (CTL), whose primary and derived syntactical components are defined below:

Definition 2.1 The following syntax is inductively defined for the Computation Tree Logic (CTL):

φ ::= true | a | φ₁ ∧ φ₂ | ¬φ | Xφ | Fφ | Gφ | φ₁ U φ₂    (2.1)

We can derive the following adequate set of expressions by a combination of the basic connectives and addition of path quantifiers:

φ ::= false | φ₁ ∨ φ₂ | φ₁ → φ₂ | φ₁ ↔ φ₂ | AXφ | EXφ | AFφ | EFφ | AGφ | EGφ | A[φ₁ U φ₂] | E[φ₁ U φ₂]    (2.2)

For this definition, a represents an arbitrary atomic proposition, and φ represents an arbitrary CTL formula which can again be recursively inserted into another formula. Formally, CTL is categorised as a branching-time logic, meaning that all possible paths at all times are covered by an expression. For that reason, the path-spanning quantifiers A (all) and E (exists) are introduced to express on which paths the propositions are supposed to hold.

2.1.3. Linear-Time Temporal Logic (LTL)

The Linear-Time Temporal Logic (LTL) is another widely-used temporal logic, and similar to CTL, it is derived as a sub-logic from the superset CTL*. The primary and derived syntactical elements are defined below:

Definition 2.2 The following syntax is inductively defined for the Linear-Time Temporal Logic (LTL):

φ ::= true | a | φ₁ ∧ φ₂ | ¬φ | Xφ | Fφ | Gφ | φ₁ U φ₂    (2.3)

We can derive the following adequate set of expressions by a combination of the basic connectives:

φ ::= false | φ₁ ∨ φ₂ | φ₁ → φ₂ | φ₁ ↔ φ₂    (2.4)

Unlike the previously covered CTL, the design of LTL does not aim at the consideration of all possible paths, but rather applies to single paths of individual program executions, which excludes additional path quantifiers such as A (all) or E (exists).

2.2. Model Checking Techniques

For the model checking process, several techniques exist which follow quite dissimilar concepts to determine whether a given proposition holds or not. This section gives an overview of the currently most widespread approaches of Explicit-State Model Checking and Symbolic Model Checking, explains the sample-based Statistical Model Checking (SMC) technique, and introduces the only recently developed concept of Online Model Checking (OMC).

2.2.1. Explicit-State Model Checking

The Explicit-State Model Checking approach was one of the first and most basic techniques applied for the formal verification of a system model, and it has been in operation for multiple decades by now. The core of explicit-state model checking consists of a depth-first exhaustive search routine with propositions composed of temporal operators; it analyses the complete state space spanned by the model locations and variables to derive a statement about the satisfiability of each operator.

Due to the consideration of the complete state space without simplification or grouping, the approach comes with a severely limiting factor: as all combinations of assignments need to be analysed in the worst case, the potential state space of the model grows exponentially, causing high computation times even for comparatively small models with few variables.

2.2.2. Symbolic Model Checking

As a first approach to deal with the described state space explosion of explicit-state model checking, the concept of Symbolic Model Checking was introduced, which is built upon a fixed-point method and - in contrast to the former enumeration of explicit states - groups the model states into sets in an iterative manner. As soon as this approach was backed by a correspondingly efficient data structure - Binary Decision Diagrams (BDDs) were chosen for this purpose in many cases - it was possible to reduce the amount of consumed time significantly. Nevertheless, the general performance problem arising from a continuously growing state space was still not solved by this approach, so that completely new techniques were needed for models with increasing complexity and dynamic behaviour.

2.2.3. Statistical Model Checking (SMC)

One concept that solves the problem of growing state spaces is the Statistical Model Checking (SMC) technique. Instead of structuring and formally analysing the state space, the sample-based statistical model checking approach simulates the system model a selected number of times, and derives a Boolean statement on the validity of the proposition in each case; these statements can finally be combined into one single statement weighted with a certain probability. In this way, the model checker only considers one single path through the model at a time, and is thus independent of the actual size of the complete state space and the significant time consumption involved in its traversal. The limitations of the approach are mostly two factors: firstly, the absence of a completely reliable statement about the property satisfiability can lead to a situation in which the behaviour of the model differs from the verification results we obtained before; it is therefore necessary to consider suitable countermeasures in case of a violation at an early point of time. Secondly, as statistical model checking in its basic form is a static approach similar to explicit and symbolic model checking, it needs to be combined with a runtime-based technique to apply it in a dynamically changing environment.
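To make the sampling idea concrete, the following minimal Java sketch shows how an SMC verdict can be aggregated from independent simulation runs. The Simulation interface and its run method are hypothetical placeholders, not part of Uppaal, Prism, or the SOMC framework; a real model checker would additionally derive the confidence information described in Section 2.3.

```java
import java.util.Random;

// Hypothetical interface: one bounded simulation run of the model,
// returning true iff the checked property held on the sampled path.
interface Simulation {
    boolean run(Random rng);
}

final class SmcEstimator {
    // Estimate P(property) as the ratio of satisfying sample runs.
    static double estimate(Simulation sim, int samples, long seed) {
        Random rng = new Random(seed);
        int satisfied = 0;
        for (int i = 0; i < samples; i++) {
            if (sim.run(rng)) {
                satisfied++;
            }
        }
        return (double) satisfied / samples;
    }
}
```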

To get a better understanding of the additional model types and components which can be analysed via statistical model checking, we will have a look at the structures of Discrete-Time Markov Chains (DTMCs) and Networks of Stochastic Timed Automata (NSTAs), together with the concepts of the respectively applicable verification logics Probabilistic Computation Tree Logic (PCTL) and Metric Interval Temporal Logic (MITL).

Discrete-Time Markov Chains (DTMC) and PCTL

Figure 2.1.: A basic example of a Discrete-Time Markov Chain

The technique of statistical model checking allows for a probabilistic weighting of model edges, which are then triggered according to their individual probability. One type of model directly built upon this concept are the Discrete-Time Markov Chains (DTMCs), which are among others used by the Prism model checker [1]. A basic example of that kind of model is illustrated in Figure 2.1. We see that a DTMC shows similarities to a finite state machine composed of locations and edges, but instead of controlling the transitions between locations with invariants and guards, one of the outgoing edges is randomly triggered, based on the distribution of probabilities among these edges. The probability values are given in a relative manner, which means that they sum up to 1.0 in total (in practice, model checkers like Prism indicate an error in case the sum of probabilities results in a different value). In the displayed example, s₁ → s₂ is triggered in 50% of the cases, while s₁ → s₃ and s₁ → s₄ are both activated with a chance of 25%.
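To illustrate how such a probabilistic branch is resolved during a simulation run, the following Java sketch draws the successor of s₁ from the edge distribution of Figure 2.1 by inverse transform sampling. It is a simplified stand-in, not code taken from Prism or the framework.

```java
import java.util.Random;

final class DtmcStep {
    // Successor states of s1 and their probabilities from Figure 2.1:
    // s2 with 0.5, s3 with 0.25, s4 with 0.25 (they must sum to 1.0).
    static final int[] SUCCESSORS = {2, 3, 4};
    static final double[] PROBS = {0.5, 0.25, 0.25};

    // Draw one successor state by inverse transform sampling.
    static int next(Random rng) {
        double u = rng.nextDouble();
        double acc = 0.0;
        for (int i = 0; i < SUCCESSORS.length; i++) {
            acc += PROBS[i];
            if (u < acc) {
                return SUCCESSORS[i];
            }
        }
        return SUCCESSORS[SUCCESSORS.length - 1]; // guard against rounding
    }
}
```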

As the common logics CTL and LTL are designed to express properties for deterministic models without statistical components, it is necessary to use another logic for the description of probabilistic attributes. One logic that extends the CTL language with the ability to formulate query expressions for state probabilities is the Probabilistic Computation Tree Logic (PCTL).

Definition 2.3 The following syntax is inductively defined for the Probabilistic Computation Tree Logic (PCTL):

φ ::= true | a | φ₁ ∧ φ₂ | ¬φ | P⋈p [ψ]    (2.5)
ψ ::= Xφ | φ₁ U≤k φ₂ | φ₁ U φ₂ | Fφ | Gφ    (2.6)

The expression a represents an arbitrary atomic proposition, p ∈ [0, 1] is a path probability value, ⋈ ∈ {<, ≤, ≥, >} can be one of the four possible inequalities, and k ∈ ℕ is an arbitrary natural number. The first row of expressions (2.5) is applicable for the description of state properties, and the second row of expressions (2.6) describes properties that are expected to be valid for complete paths. We can notice two differences to the underlying CTL logic: firstly, the new probabilistic operator P⋈p [ψ] allows for an evaluation of the probability of an expression ψ in comparison to the fixed probability value p. Secondly, the bounded-until operator φ₁ U≤k φ₂ adds the ability to express that a property φ₁ holds at first, until a second property φ₂ finally becomes true within a time frame of k units. Further information on the definition and application of PCTL can be found in Principles of Model Checking [2]. Basic examples for the formulation of semantic properties using the covered formulae are shown below:

Composition: s ⊨ φ (Example: s ⊨ head ∨ tail)    (2.7)
Bounded-Until: ω ⊨ φ₁ U≤k φ₂ (Example: ω ⊨ hot U≤2 cold)    (2.8)
Evaluation: s ⊨ P⋈p [ψ] (Example: s ⊨ P≥0.9 [F head])    (2.9)

Networks of Stochastic Timed Automata (NSTA) and MITL

Figure 2.2.: A basic example of a Network of Stochastic Timed Automata

A different type of model for the representation of probabilistic systems are the Networks of Stochastic Timed Automata (NSTAs), which are the core model variant for the Uppaal model checker with its SMC extension. Such a network consists of one or multiple Stochastic Timed Automata (STAs) - a model extension of standard timed automata paired with stochastic elements - which are combined into an automata network with possible communication via synchronisation channels. An example of such a network is shown in Figure 2.2. For the application inside an NSTA, the synchronisation channels are supposed to behave as broadcast types (in Uppaal-SMC, this is a strict requirement), meaning that they are triggered by the caller (sync_x!) without blocking it until a listener reaches the corresponding command (sync_x?).

By that, it is not possible to keep the caller in an infinite blocking state. The concept of Stochastic Timed Automata allows for a more flexible handling of clock variables, i.e. different rates can be applied to each clock, which define the scaled number of time units the clock should proceed for each time step of a global clock at standard rate 1. Figure 2.2 illustrates how these rates can be used. One possible logic that can describe properties on NSTAs is the Metric Interval Temporal Logic (MITL), which was chosen for the proposition formulation in Uppaal:

Definition 2.4 The following syntax is inductively defined for the Metric Interval Temporal Logic (MITL):

φ ::= a | φ₁ ∧ φ₂ | ¬φ | Oφ | φ₁ U^x_{≤d} φ₂    (2.10)

Once again, a represents an arbitrary atomic proposition, d ∈ ℕ is an arbitrary natural number, and additionally, a clock variable x is introduced. As a new functionality in comparison to the previously covered PCTL, MITL supports an even further extended bounded-until operator φ₁ U^x_{≤d} φ₂, which allows for an additional selection of the clock variable that is used for the evaluation of that proposition in a multi-clock environment. For the standard MITL, it is not generally possible to decide whether the probability of a property exceeds a certain threshold (P_M(ψ) ≥ p). For its usage in Uppaal-SMC, the logic is therefore reduced to a cost-bounded type, the Weighted Metric Temporal Logic (WMTL), expressed as P_M(◇_{x≤C} φ), containing a clock variable x and a bounding arbitrary natural number C ∈ ℕ. This logic type can finally be used to express the set of queries that are needed for the evaluation of property probabilities, which is achieved by Uppaal-SMC in a sample-based manner:

Probability Evaluation: P_M(◇_{x≤C} φ) = ?    (2.11)
Hypothesis Testing: P_M(◇_{x≤C} φ) ≥ p, p ∈ [0, 1] ?    (2.12)
Probability Comparison: P_M(◇_{x≤C} φ₁) > P_M(◇_{x≤C} φ₂) ?    (2.13)

2.2.4. Online Model Checking (OMC)

All of the previously covered model checking concepts are designed to be used statically, with an application in a fixed environment featuring a known set of variable assignments and initial states of the model against which the propositions are checked. In some fields of application - especially for cyber-physical systems, which are often embedded into an environment that is subject to dynamic changes and thus cannot be fully predicted - these types of model checkers do not deliver sufficient information on the satisfiability of propositions, which partly depends on the current environment state. For this purpose, the Online Model Checking (OMC) technique was introduced at the beginning of the twenty-first century. The underlying concept of real-time model checking was first described in the early 90s by R. Alur as part of the work Model-checking for real-time systems [3]. It already allowed

for a rather shallow analysis of real-time systems, which was achieved by introducing dynamic time delays for the modelled processes. Nevertheless, this approach was still meant to be static, as application during run-time was not supported. This changed with the concept of Runtime Verification a few years later, in the early 2000s. As part of a monitoring routine, this type of verification was introduced by Geilen [4] (amongst others) during an annual workshop of the same name. It was the first attempt that concentrated on a run-time situation instead of the former application before deployment. While this concept can be considered dynamic in terms of its field of operation, it was still limited due to its focus on the post-checking detection of violations and the prevention of failure recurrences. A real failure prevention before its first occurrence was not possible at this point. This limitation was finally dealt with when the first attempts at online model checking came up. Among the first, Rammig et al. [5] implemented this technique for an operating system service, which made it possible to track the system execution and the involved tasks, and to initiate validated adjustments of the system in response to changing configurations. Compared to the previously explained real-time approaches, this technique can be categorised as a pre-checking type, as the verification process is now directed towards the near future, by which failures can be predicted and prevented before their first occurrence. With this specific application of a real online model checking concept in mind, the next step was then to develop a complete framework which would allow the technique to be applied to any kind of supported state machine. The creation of a corresponding framework with online capabilities was recently tackled by Rinast in his PhD thesis An Online Model-Checking Framework for Timed Automata [6]. The general process - as pointed out in his work - is depicted in Figure 2.3. We see that the normal model checking concept (2.3a) needs to analyse the complete state space to derive a statement about the validity of a property; if already a single state causes a violation of a safety condition, the property needs to be declared false as a whole, and a potential execution process would need to be stopped until the model is adapted accordingly.

Figure 2.3.: A comparison of the static and online verification approaches: (a) static verification, (b) online verification

At this point, the online model checking technique helps to keep the effects of a property violation local. It only considers a fixed time interval and the corresponding limited state space as shown in Figure 2.3b, which is illustrated by the overlapping triangles. During the first two simulation steps, this concept would not encounter a violation due to the limited scope, and as soon as the violation is spotted, the verification of overlapping time intervals creates a certain time frame during which the violation can be fixed while keeping the system running. A non-violating execution of the system in this time frame can be expected, as this time interval was part of previous verification steps during which the system properties were already determined to be satisfied. Even though the dynamic property of this concept is given, and the verification process only spans a limited time frame, it can still suffer from computation times that exceed the bounds of real-time constraints. This can occur for two reasons: the verification process may still deal with an extensive state space despite its temporal limitation, and the continuous model update, which is needed to adapt the model to the dynamic environment, requires a State Space Reconstruction routine, which can be unbounded in the worst case and which is briefly explained in the following.

State Space Reconstruction

The state space reconstruction routine is needed by the model simulator to return to the state which the model was previously in, while including the parameter assignments that are required to comply with the most recent system observations. The main focus during the development of this routine was put on the reduction of states traversed in the taken reconstruction path. The assignment of parameters on certain edges can lead to a state which is independent of the influence of variable manipulations between previous states. In that case, the said sequence of states can be excluded from the reconstruction path. For the routine to be computationally efficient during a progressing experiment, these reconstruction paths need to be bounded, which is only possible with a restricted interval of parameter values and resets of all involved clock variables on each possible model path. A more detailed explanation of this concept and the implied model requirements can be found in the work by Rinast [6].

2.3. Statistical Methods

In this final section of the conceptual foundations, we want to take a brief look at the statistical methods which are normally applied internally by a corresponding model checker. This step is required as we need to perform the calculation of some statistical parameters manually for the query pre-processing and scheduling routine which we describe and implement later on in Chapter 4. The central parameters for the query evaluation of any SMC approach are the number of samples n, the ratio of samples p which satisfy the given attribute, the error margin em, and the confidence level z. The error margin is a value em ∈ [0, 1] which expresses the width of the confidence interval ci = [x̄ − em, x̄ + em] around the mean value x̄, in which a sample result lies with the probability defined via the confidence level.

The confidence level itself is either expressed as a percentage value (e.g., a confidence level of 95% is commonly used) or as a z-value, i.e. the factor of the standard deviation σ, which in turn together represent the error margin. The two variables of error margin and z-valued confidence level are therefore connected through the following equation: em = z · σ. Combining this equation with the formula for the standard deviation that applies for binary samples [7], σ = √(p(1−p)/n), we can derive the following equation for the resulting error margin em_res:

em_res(n, p, z) = z · √(p(1−p)/n)    (2.14)

Furthermore, we can transform this equation to calculate any of the four parameters, provided that the remaining three values are given:

z_res(n, p, em) = em · √(n/(p(1−p)))    (2.15)
p_res(n, em, z) = 0.5 ± √(0.25 − (n · em²)/z²)    (2.16)
n_res(p, em, z) = (z² · p(1−p))/em²    (2.17)

Using these equations, we can then calculate the approximate number of samples needed to achieve a desired error margin and confidence level. As this basic approach returns increasingly deviating parameter values for very small sample numbers, several adapted solutions exist, one of them being the Adjusted Wald method [8], which introduces the following changes:

n_aw = n + z²    (2.18)
p_aw = (n · p + z²/2) / n_aw    (2.19)
em_res = z · √(p_aw(1−p_aw)/n_aw)    (2.20)

For the course of this thesis, we will use the former basic approach, especially as we try to get at least a few hundred simulation samples for each verification query to derive reliable statements on the satisfiability. Nevertheless, the adapted methods can be considered for the framework in the future, combined with a switch routine that selects the most suitable calculation method based on the magnitude of the given sample count.
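As a quick plausibility check of these formulae: for the commonly used 95% confidence level (z ≈ 1.96), a desired error margin of em = 0.05, and the worst-case ratio p = 0.5, equation (2.17) yields n_res = 1.96² · 0.25 / 0.05² ≈ 384 samples. A minimal Java helper for equations (2.14) and (2.17) could look as follows; this is an illustrative sketch of the stated formulae, not the framework's actual implementation:

```java
final class SmcStatistics {
    // Resulting error margin for n binary samples with ratio p
    // and z-valued confidence level, see equation (2.14).
    static double errorMargin(int n, double p, double z) {
        return z * Math.sqrt(p * (1.0 - p) / n);
    }

    // Required sample count for a desired error margin and
    // confidence level, see equation (2.17); rounded up.
    static int requiredSamples(double p, double em, double z) {
        return (int) Math.ceil(z * z * p * (1.0 - p) / (em * em));
    }
}
```

For example, requiredSamples(0.5, 0.05, 1.96) returns 385 (the exact value 384.16 rounded up), matching the hand calculation above.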

3. UPPAAL and PRISM

One adaption of the existing online model checking framework that we cover in our work serves the purpose of allowing an easier and more straightforward integration of different model checkers that take over parts of the simulation and verification process. The original framework utilises Uppaal for these tasks, and we choose the Prism model checker as an alternative to be used by the framework for a comparison of their performance. As a basis, this chapter presents the features of both model checkers in more detail, explains their individual components, and highlights the capabilities that are suitable for the SOMC framework.

3.1. UPPAAL

The model checker Uppaal emanated from a cooperation of the Uppsala and Aalborg universities via a project initiated by Larsen, Yi et al. It is widely used in the academic environment, and has also been applied in multiple industrial use cases, proving it to be a solid and efficient tool for the purpose of static verification. The model of a system is split up into a set of templates, each containing locations and edges that together form a distinct component of the model. The system which the verification properties are checked against, and which is also used for the simulation runs, is then composed of instances of these templates, the so-called processes. The locations are provided with invariants, which describe the conditions under which the location is allowed to be active (e.g., t <= 5). The edges can be provided with optional guards, i.e. conditions that need to hold for the system to allow the triggering of the edge (e.g., t >= 4), and updates, which describe changes to system variables that are applied as soon as the edge is activated (e.g., x = 4).

Figure 3.1.: The model view of the Uppaal GUI

Each of these system variables can be defined either in a local context as part of the template declaration, or in a global context within the system declarations. The timing data inside the model is represented by specific clock variables, which can take a concrete value in the update section of an edge, and apart from that incrementally count the abstract time units consumed during the simulation of the model. As an important feature to control the timing behaviour of processes that depend on each other, a synchronisation component called channels is included, which consists of a calling (channel!) and a listening (channel?) part. It can be defined as either binary, meaning that it is blocking for the caller and synchronises two different transitions, or broadcast, which results in a synchronisation that is non-blocking for the caller, as it continues the transition after calling channel!. These synchronisation attributes are defined in the sync section of an edge. An overview of the graphical interface of Uppaal is presented in Figure 3.1.

3.1.1. UPPAAL-SMC

Figure 3.2.: Additional components and concepts with the Uppaal-SMC add-on: (a) Probabilistic Branch, (b) Differential Equation, (c) Exponential Rate, (d) Custom Delay, (e) Dynamic Creation

The extension of the Uppaal model checker with special importance for our work is the Uppaal-SMC add-on. It adds model components of a non-deterministic and probabilistic nature to the environment, and extends the verification process with a routine that utilises sample-based concepts such as Monte-Carlo simulation and Sequential Hypothesis Testing. This sort of sample-based simulation forms the basis of the real-time verification effort that our work is aimed towards. A brief overview of the added components is provided in Figure 3.2, and a more detailed description of each part can be found in the paper of the preparative project work [9].

Based on the extensions of the verification query language used by Uppaal due to the SMC add-on, we can formulate a set of different queries that are evaluated via simulation runs rather than explicit or symbolic derivation, and which are then directly callable by the SOMC framework:

Simulation: simulate samples [<= scope] {vars}    (3.1)
Probability Evaluation: Pr[<= scope] (property)    (3.2)
Hypothesis Testing: Pr[<= scope] (property) >= threshold    (3.3)
Probability Comparison: Pr[<= scope] (property₁) >= Pr[<= scope] (property₂)    (3.4)
Bound Determination: E[<= scope; samples] ([max/min] : var)    (3.5)

The major difference between these queries and the ones that Uppaal uses without its SMC extension is the definition of a scope value, which indicates how far into the future the simulation samples should reach. Similar to the other clock-related parameters in Uppaal, the scope can only take integer values, and additionally, it needs to be greater than zero. The simulation query initiates a certain number of simulation runs, set at the position of the placeholder samples; optionally, multiple variable names can be inserted as vars, whose traces throughout the simulation can be plotted inside Uppaal afterwards. In contrast to that query, actual probability data can be obtained with the next three queries containing the Pr flag: Probability Evaluation, Hypothesis Testing and Probability Comparison. The Probability Evaluation query determines the probability with which the provided atomic or composed property may hold, returning a pair of values in the range [0.0, 1.0] that represent the lower and upper bounds of the determined probability, which are influenced by several SMC parameters covered later on in this section. The Hypothesis Testing query compares the probability of a property with a threshold value, and returns a Boolean result that indicates whether the property probability is greater or smaller than the threshold, or whether the result is undefined due to a probability that is too close to the given threshold. The third basic probability-related query - the Probability Comparison - is provided with two different properties, and again returns a Boolean result indicating whether the specified inequality of both properties holds or not. With the last of the given queries - the Bound Determination - Uppaal also provides a way to determine the expected value bounds of a set of simulation samples for a single variable by using the E flag. Similar to the basic simulation query, this bound query allows setting both the time scope and the number of samples, and depending on the desired bound (max or min), it returns the determined bound value as well as the average value deviation. For a more in-depth explanation of these queries, the mathematical concept behind them, and exemplary use cases, the official documentation of the Uppaal SMC add-on [10] provides additional information.

The last aspect to consider for our framework is the set of SMC-specific parameters mentioned before, which influence the interval bounds and widths of the resulting probability as well as the region in which a definite result cannot be obtained. In particular, the following parameters are of importance for our application:

Probabilistic Deviation: lower deviation (−δ), upper deviation (+δ)    (3.6)
Result Probability: false negatives (α), false positives (β)    (3.7)
Probability Uncertainty: ε    (3.8)
Ratio Bounds: lower bound (u₀), upper bound (u₁)    (3.9)

For hypothesis testing, the first two parameters, i.e. the Probabilistic Deviation and the Result Probability of false negatives and false positives, have the greatest impact, as they define both the region of indifference that does not allow for a clear decision, and the false acceptance probability of the alternative hypothesis. For the probability evaluation query, the Probability Uncertainty directly affects the deviation bounds of the probability interval, as it represents the width of that interval in which the real probability lies with a confidence level that is again given by the specified probability of false negatives α. At last, the probability comparison step is directly influenced by the ratio bounds u₀ and u₁, as a definite statement on the validity of the inequality can only be obtained when the ratio of both involved property probabilities lies above / below the specified bounds. More detailed explanations of the presented parameters and additional settings regarding the simulation trace, value discretisation and graphical representation of the simulation data can again be found in the official documentation [10] mentioned before.

3.2. PRISM

The second piece of verification software that we apply in our experiments is the probabilistic and symbolic model checker Prism, developed by Kwiatkowska et al. [11]. It is specialised in the field of stochastic systems, and thus supports a variety of different model types for that purpose. These types include the Discrete-Time Markov Chains (DTMCs) described in the previous chapter, as well as Continuous-Time Markov Chains (CTMCs), which are built upon the concept of rates at which the single edges are periodically triggered. Furthermore, Probabilistic Timed Automata (PTAs) are supported, which are the closest to the models that can be composed in Uppaal, even though they carry several modelling restrictions depending on the selected verification engine, and the simulation of PTAs is not supported at the current point of time. Finally, Prism also allows modelling Markov Decision Processes (MDPs), in which a following state only depends on the current state and a probability distribution of possible next states, which is bound to the specific action the agent chooses to perform. As verification engines, the model checker allows choosing between MTBDD, Sparse, Hybrid and Explicit for the symbolic part, and it can also make use of statistical model checking for certain model types, which goes along with a sample-based verification approach and which we want to utilise in our framework.

The Prism model checker supports various model components that have a corresponding part in Uppaal, which facilitates the introduction of an abstraction layer later on in this work. A model in Prism is composed of one or multiple modules, which correspond to the template concept in Uppaal. Each of these modules is made up of constructs that represent locations, edges, and the rules for transitions. The edges are modelled as so-called commands with the following structure:

Command (structure): [action] guards -> (update₁) & (update₂) & ...;    (3.10)
Command (example): [addone] s = 0 -> (s' = 1) & (x' = x + 1);    (3.11)

The guards and updates of one or multiple edges can be gathered in such a command. Without the introduction of probabilities, the commands correspond to single edges, with the guards given on the left-hand side and the updates defined on the right-hand side. A probabilistic branch - and thus multiple edges with probability weights - can be expressed by a command in the following form:

Prob (structure): [action] guards -> prob₁ : (upd₁) + prob₂ : (upd₂) + ...;    (3.12)
Prob (example): [next] s = 0 -> 0.5 : (s' = 1) + 0.5 : (s' = 2);    (3.13)

The synchronisation of edges in different modules is achieved via the selected action names. All commands with the same name are supposed to be executed simultaneously, which means that the synchronisation process in Prism follows a blocking concept and does not distinguish between callers and listeners. The locations and states are represented by one or even multiple variables that can take integer values in a restricted interval. Specifically with the use of PTAs, the model can also be extended with invariants that need to hold for certain states, and clocks can be introduced in a similar way as described for Uppaal.

Figure 3.3.: The code-based model view of the Prism GUI

The declaration of variables can again take place in a local or global context, for which it is either included inside a concrete module, or in the first section of the system declaration. It is important to note that variables defined in a global context cannot be accessed by any command which is part of a synchronisation process, so such access can only be performed via commands without an action name. While the variables defined inside the modules as well as the modifiable variables in the system declaration can represent integer and Boolean types, it is also possible to include double values in the global context as long as they are declared as const. Finally, we should remark that in contrast to Uppaal, Prism does not provide a graphical representation of its models, which are completely defined in a textual way. The graphical user interface used by Prism, containing an example of a text-based model definition, is shown in Figure 3.3.

4. Approaching Verification Timing

In this chapter, we investigate the critical timing aspect of the verification component of statistical online model checking, which arises from the fact that multiple verification queries and their maximum simulation sample counts need to be scheduled and reduced to fit into the limited time intervals between verifier calls - especially in combination with the underlying and previously covered model checkers Uppaal and Prism. For that purpose, we initially describe the timing problem in Section 4.1. After that, we explain the general routine applied to overcome the partly exceeded time constraints in Section 4.2, and describe the actual implementation of the query pre-processor and scheduler in Section 4.3. Finally, Section 4.4 gives a summary of the solved timing aspects and points out the restrictions of the chosen approach.

4.1. Timing Problem

The limited time frame of a verification and simulation cycle in combination with the real-time constraints of the SOMC framework requires the individual process timing to be as controllable as possible. Besides the time-related problems going along with potential state space explosions during model simulation and trace reconstruction, one critical timing aspect emerges from the sample-based Monte-Carlo simulation which replaces the explicit and symbolic model verification process in the statistical model checking approach. The basic and commonly used verification query calls of Uppaal only allow for a rough time adaption, either by altering the model-internal time frame, by changing the query itself (e.g., comparing probability evaluations instead of hypothesis testing), or by adapting statistical parameters such as the probabilistic deviation or probability uncertainty. Unfortunately, these adjustments alone do not provide sufficient timing accuracy for our application. In the previous project work [9], we pointed out that even with constant parameters, the consumed time of each individual query may vary by integer factors, making it impossible to predict the time for our real-time constraint. For that reason, we need to resort to a solution that is more directly connected to the resulting verification time. As both Uppaal and Prism do not provide the ability to set direct temporal bounds for the verification process, we focus on the number of executed simulation samples instead. While Prism officially supports the sample count as one of its arguments besides the confidence level and probability interval width (two of these parameters need to be given, while the missing one is then automatically calculated), the desired sample count can be set in Uppaal as well, using the following unofficial query template for the basic probability evaluation:

Probability Evaluation: simulate 500 [<= 100] {dummy} : 500 : {expression}    (4.1)

In the shown case, exactly 500 samples are simulated for 100 time units each, and used for the final probability calculation. The dummy is supposed to refer to a static, unused variable, to keep the memory overhead of this query low.
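Since such queries are ultimately passed to the verifier engine as plain strings, a small helper can instantiate template (4.1) for a given sample count, time scope, and property expression. The class and method names below are hypothetical and only illustrate the idea:

```java
final class UppaalQueryBuilder {
    // Instantiate the sample-bounded probability evaluation template (4.1):
    // simulate <samples> [<= <scope>] {dummy} : <samples> : {<expression>}
    static String probabilityEvaluation(int samples, int scope, String expression) {
        return String.format("simulate %d [<= %d] {dummy} : %d : {%s}",
                samples, scope, samples, expression);
    }
}
```

A call such as probabilityEvaluation(500, 100, "x > 4") then reproduces the shape of query (4.1) with the expression filled in.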

The other common queries (e.g., hypothesis testing and probability comparison) can be built upon the probability evaluation query, which allows us to fully utilise the sample count adjustment for our purpose and develop a routine which pre-processes each query to conform with the limited real-time frame.

4.2. General Routine

The routine which we need to execute during each time interval before the actual verification process consists of five steps, and it requires us to move some of the statistical calculation steps from the underlying model checker to the level of our framework:

1. Manually calculate the sample number from the error margin and confidence level
2. Preprocess and schedule the queries
3. Run the queries with reduced sample counts
4. Derive the new error margin / confidence level from the adapted sample number
5. Take countermeasures if the parameters lie below a predefined threshold

The first step is necessary in those cases where we demand a specific error margin and confidence level or confidence interval, and would normally rely on the integrated model checker to calculate the corresponding sample number which it expects to require for a sufficient result. In our adjustment routine, we need this value even before starting the simulation runs, so we calculate it using one of the statistical functions explained in Chapter 2. After we have determined the unadapted sample amounts for all queries, we move on to the second step and apply the pre-processing routine to the set of sample counts, which we cover in more detail in the next section. The third step consists of the actual query evaluation performed by the underlying model checker, which - compared to the original and unaltered queries - now meets the timing constraint given by the chosen verification interval. To allow for a final interpretation of the new parameters, the error margin and/or confidence level need to be recalculated in the fourth step, as at least one of them has changed due to the sample count adaption. At last, we may want to initiate custom countermeasures in the fifth step in case the new parameters lie outside bounds that we need to set in advance, which would mean that the query no longer provides a sufficient level of certainty regarding its result. Such a case may need to be treated similarly to a completely certain but negative query result, especially if we decide to focus on the worst case of possible model states at any time.
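As a sketch of how steps 4 and 5 fit together, the following Java fragment recomputes the error margin for a reduced sample count via equation (2.14) and checks it against a threshold. The names and the threshold parameter are illustrative assumptions, not the framework's actual API.

```java
final class QueryBudgetCheck {
    // Steps 4 and 5 of the routine: recompute the error margin for the
    // reduced sample count (equation (2.14)) and decide whether the
    // query result is still trustworthy; 'emThreshold' is illustrative.
    static boolean resultUsable(int reducedSamples, double p, double z,
                                double emThreshold) {
        double achievedEm = z * Math.sqrt(p * (1.0 - p) / reducedSamples);
        return achievedEm <= emThreshold;
    }
}
```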

4.3. Query Preprocessor and Scheduler

The query pre-processing and optional scheduling routine forms the core part of our timing solution for the verification procedure. The steps of this routine are depicted in Figure 4.1, starting with the gathering of all involved query objects to extract their data and settings.

Figure 4.1.: The query pre-processing steps (right) within the verification cycle

After the verifier engine of the selected underlying model checker is initialised - which depends on the implementation of the particular engine and may need to be adapted for a new model checker - we perform a fixed but small number of sample runs for each query. As we previously pointed out, the utilised model checkers do not support the execution of sample runs for a fixed time interval, so we need to tackle the problem from the other direction: by measuring the consumed time of a small set of samples and approximating the number of samples that would fit into the time interval. Based on the duration of a single sample run of each query as well as the desired sample number provided with every query object, we can then estimate the total execution time needed for all queries in their unaltered forms. The estimate may still deviate from the actual execution time because of influencing factors such as the varying process execution intervals caused by the scheduler of the operating system, and the possibility that computationally intensive sample paths are taken relatively more often than in the limited set of test samples, due to the stochastic and hence unpredictable nature of the model in view. For that reason, it is necessary to introduce a security factor for the calculation, providing room for deviation without exceeding the actual real-time scope. Based on multiple query tests, a security factor between 2 and 4 turned out to be extensive enough to prevent conflicts with the real-time constraint in our case. The comparison of the expected execution time to the time interval we are allowed to use may indicate that one or multiple queries need to be adapted to meet the timing constraints. For that case, we implemented multiple adaption strategies, as shown in Figure 4.2. One type of strategy, to which EqualDivide and EqualScale belong, affects the sample counts of all queries, while another type, including SimpleSequential among others, may only affect a part of the queries. If the query objects are assigned priority weights beforehand, it is also possible to include this information when the time interval is divided among the queries. Furthermore, we may want to execute a query with a higher priority before ones with lower priority values when we choose SimpleSequential as the strategy, since all remaining queries beyond the time scope will be aborted or not even started.

The comparison of the expected execution time to the time interval we are allowed to use may indicate that one or multiple queries need to be adapted to meet the timing constraints. For that case, we implemented multiple adaption strategies, as shown in Figure 4.2. One type of strategy, to which EqualDivide and EqualScale belong, affects the sample counts of all queries, while another type, including SimpleSequential among others, may affect only a part of the queries. If the query objects are assigned priority weights beforehand, it is also possible to include this information when the time interval is divided among the queries. Furthermore, we may want to execute a query with a higher priority before ones with lower priority values when we choose SimpleSequential as the strategy, as all remaining queries beyond the time scope will be aborted or not even started. After all query objects have been updated with their particular sample count adaptions, the set of query data is forwarded to the verifier engine, leading to the third step of the general routine described in the previous section.

Figure 4.2.: The structure of the pre-processing strategy pattern
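To make the pattern concrete, the following sketch shows one possible shape of these strategies, assuming the available time has already been converted into a total sample budget; the class and field names are illustrative, not the framework's actual ones. EqualDivide and SimpleSequential would implement the same interface with their own division schemes:

    import java.util.List;

    interface AdaptionStrategy {
        /** Fit the desired sample counts into the total sample budget. */
        void adapt(List<QueryData> queries, long totalSampleBudget);
    }

    class QueryData {
        int desiredSamples;
        int adaptedSamples;
        double priority = 1.0;   // optional weighting
    }

    class EqualScale implements AdaptionStrategy {
        @Override
        public void adapt(List<QueryData> queries, long totalSampleBudget) {
            long desiredTotal = queries.stream().mapToLong(q -> q.desiredSamples).sum();
            if (desiredTotal == 0) return;
            double scale = Math.min(1.0, totalSampleBudget / (double) desiredTotal);
            for (QueryData q : queries)
                q.adaptedSamples = (int) (q.desiredSamples * scale); // same ratio for all
        }
    }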

4.4. Scheduler Evaluation

At this point, we have shown that it is generally possible to use the sample- and simulation-based verification approach to meet the real-time constraints during the query evaluation of the SOMC cycle. By reducing the number of sample runs for each query based on a suitable strategy, we are able to distribute the provided time frame among the queries, and are given a certain degree of flexibility and control by including additional data such as priority values and security factors. While the general problem - in the scope of the verification routine and the capabilities of Uppaal and Prism - is solved with that approach, there are several open problems that need to be addressed in future work as long as the model checking tools themselves do not provide the feature to abort the verification process after a set amount of time, returning their intermediate simulation results. In any case, there is always the need to reduce the desired query parameters to fit into a certain time frame, raising the question of the most reasonable sample count reduction strategy for each individual experiment. As the creator of an experiment, we have to consider whether we really demand at least some results for each of the defined queries (e.g., in case all queries have the same priority value), even though it may lead to a situation in which the probabilistic uncertainty and probability interval widths of all queries lie beyond the desired thresholds because the sample count of each query is too low. For that scenario, the worst case of all propositions would have to be assumed. As an alternative, we could decide to pick only a few of the queries for a more extensive and therefore significant sample count, and verify the remaining ones only if additional time is left. This option in turn becomes critical if all queries are security relevant and need to be evaluated. Only the particular experiment and its frame conditions can tell which strategy is more suitable, and what an appropriate measure for query priorities is.

As long as we have to estimate the query duration by a number of test samples, we also have to investigate further how to choose that sample count. A number which is too low leads to an increasing deviation during extrapolation, while a high number may take up too great a part of the already limited time frame. The latter would be preferable if we decide to use the result data from that test simulation as well, which would require us to manually gather and combine the data from multiple evaluations of the same query during one verification step. Building on this additional routine, it would also be possible to split the query into multiple 'sub-queries', each covering a part of the desired sample count, providing intermediate results even if the overall evaluation of that particular query is eventually aborted due to timing reasons (see the sketch at the end of this section). The downside of this solution is that each query evaluation call to the underlying model checker takes up an additional amount of time, which again may add up. All these timing considerations finally raise the question of how to properly choose the security factor, so that even in case of any of these deviations, the execution will still comply with the timing constraints. A satisfying solution for these problems is still to be found.

Taking all that into consideration, we get an understanding of the problems arising from the restrictions of the model checkers, executed in a non-RTOS environment. Preferably, we would like to have access to the intermediate simulation results at any time, but this capability needs to be directly integrated into the model checker, and therefore cannot be used for our current work. Within the scope of this thesis, the presented solution provides a sufficient tolerance to test the developed concepts.
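A sketch of the result combination that such a sub-query routine would need - pooling per-batch probability estimates weighted by their sample counts, so that partial results survive an abort; all names and numbers are illustrative - is:

    public final class SubQueryCombiner {

        /** Pool per-batch estimates weighted by their sample counts. */
        public static double combine(double[] estimates, int[] sampleCounts) {
            double successes = 0;
            long total = 0;
            for (int i = 0; i < estimates.length; i++) {
                successes += estimates[i] * sampleCounts[i]; // recovered success counts
                total += sampleCounts[i];
            }
            return successes / total;                        // pooled estimate
        }

        public static void main(String[] args) {
            // Three batches of 100 samples each, the last one aborted early at 40:
            System.out.println(combine(new double[]{0.52, 0.49, 0.45},
                                       new int[]{100, 100, 40}));
        }
    }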


5. Framework Adaptability Extensions

From a structural point of view, the original OMC framework was partly limited in terms of its extensibility and adaptability, as it was tailored to the model checking solution Uppaal, directly utilising its API-specific methods. For the SOMC approach, we not only want to rely on the recently integrated SMC add-on functionality of Uppaal, but also want to be able to resort to different model checkers that are built on the statistical concept ab initio, such as Prism. To enable that integration with as few code additions as possible, we need to change the framework implementation at multiple points, which are the subject of this chapter. At first, we describe the changes to the framework that are needed to complement the modular structure of its components in Section 5.1. Afterwards, we introduce an intermediate model layer in Section 5.2 that provides a form of model abstraction, allowing for generalised interface access to recurring elements like locations, transitions, modules and states, which is necessary for the implementation of an overall state space reconstruction routine for an arbitrary model checker. Section 5.3 will then cover the implementation of the verifier and simulator interfaces, permitting us to access the underlying verifier engine with standardised query representations, and to access the simulator routines with a defined set of methods, leading to the final goal of being able to integrate the verification, simulation and state space reconstruction parts in a modular and generalised manner. We finish the chapter with a short summary and evaluation of the described extensions in Section 5.4.

5.1. Modular Structure

For the generalised interface structure to be implemented, it is necessary that the framework components follow a modular concept, so that we can leave as much of the initial code as possible unaltered and reuse it with multiple model checker implementations. Especially the mathematical tools (e.g., DBMs, manual clock and transition evaluation methods, and statistical functions) should be completely independent from their application scenario, while the state reconstruction routine, the verification and simulation routines, as well as any kind of model manipulation procedures are supposed to be independent from each other, and only communicate with other framework components and the model checker via common interfaces. The new complete modular structure that fulfills our requirements is depicted in Figure 5.1. The core-package, providing all necessary interfaces for the model components and checker integration as well as the implementation of basic data objects (e.g., queries and verification results), and the components-package, with its implementations of consumer, processor and provider objects together with their adapter classes as well as the general TA-System and OMC-Application classes, form the basis of the framework. The solutions of the custom math-package, containing all graph-, transition-, reduction- and DBM-related functions, can be utilised as needed. A fundamental change of the framework structure is given with the separate checker-packages, which implement the core interfaces referring to a particular model checker, whose API functions are imported as

Figure 5.1.: The complete modular framework structure with component dependencies

external library modules. The checker classes can then be used by the experiment editor and model viewer of the guiplugin-package to perform a verification query using the underlying model checker. The experiments that are needed to describe the behaviour of the real systems to be analysed can either be defined through a graphical experiment editor (see Chapter 6) or as individual app-package modules via a code-based experiment editor. In both cases, the verification part of the experiment can be pre-processed and scheduled via the corresponding classes of the backend-package. Finally, the additional help functions regarding XML-, string-, script- and file-handling are located in the supportfunctions-package. Building on this framework structure, it is now possible to integrate a new model checker as needed by only altering the actual interface implementations of the corresponding checker-package, which we will discuss in the next section.

5.2. Intermediate Model Layer

Having clarified the modular structure of the framework in general, the next step consists of showing how a new model checker can be integrated into the workflow, as opposed to the original framework design based on a fixed model checker. The way in which the original OMC framework handled the external Uppaal library is depicted in Figure 5.2. As there was no need to consider alternative model checking engines when the framework was developed, all manipulation and access actions performed by the reconstructor, simulator and verifier were applied directly to the objects (e.g., locations, edges, and templates) of the Uppaal model representation. While this approach would work for our purpose as well, it would require us to copy the complete set of checker classes and to change every single call to any of the Uppaal API functions, resulting in an unnecessary amount of duplicated code with error-prone modifications. For that reason, we decided to add an intermediate model layer between the framework core methods and the model checker API, consisting of an abstract and a concrete part, as shown in Figure 5.3. The Abstract Intermediate Layer is formed by abstract class implementations of the previously mentioned core component interfaces of Section 5.1, and it already implements the methods that all model representations have in common, independent of the particular model checker (e.g., regarding the name, connected components, custom attributes, and all getter methods). The GeneralModelSystem plays a special role in that construct, not only because it represents the topmost model system object, but also because of its mapping

Figure 5.2.: The original model checker integration solution with direct object access

functionality between the objects of the model checker and the generalised objects on which the framework performs its actions. Each time a model component instance of the underlying model checker is accessed, the general model system is supposed to check whether a link between that object and some corresponding framework-internal object exists, and - if that is not the case - to create that link by an entry in one of its maps. This way, it is ensured that there will never exist multiple internal representation objects for the same model component, and that these representations can always be located. The objects of the Concrete Intermediate Layer finally extend the former abstract model components to implement the remaining interface functions with model-checker-specific routines and API calls. Additionally, these objects implement the IModelObjectContainer-interface to provide a standardised way to get and set the underlying model checker object, allowing us to locate that underlying object via its framework-internal representation as well. Optionally, the objects of this layer can also implement the IGraphicalComponentData-interface, which is used to handle the GUI data that is either bound to the checker-internal object or separately defined. With that layer structure, the framework classes no longer need to directly access the external checker libraries via their API, but can rely on the set of predefined interface methods that all model checkers have in common by their concept. By that, only the classes of the concrete intermediate layer need to be duplicated and adapted to a particular checker library, minimising the number of necessary modifications and allowing the framework to switch between different model checkers more dynamically.

Figure 5.3.: The new model checker integration with an abstract and concrete intermediate layer based on interfaces
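The described mapping behaviour amounts to an identity map per component type. A minimal sketch - with illustrative names, and a plain Object standing in for the checker-internal type - could look like this:

    import java.util.IdentityHashMap;
    import java.util.Map;

    // One framework-internal wrapper per checker object, created on first access.
    public final class GeneralModelSystemSketch {

        public interface ILocation { Object getModelObject(); }

        // Checker-internal location objects -> framework-internal wrappers
        private final Map<Object, ILocation> locationMap = new IdentityHashMap<>();

        /** Return the existing wrapper, or create and register a new one. */
        public ILocation internalLocationFor(Object checkerLocation) {
            return locationMap.computeIfAbsent(checkerLocation,
                    obj -> new ConcreteLocation(obj)); // checker-specific wrapper
        }

        private static final class ConcreteLocation implements ILocation {
            private final Object modelObject;
            ConcreteLocation(Object modelObject) { this.modelObject = modelObject; }
            @Override public Object getModelObject() { return modelObject; }
        }
    }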

5.3. Simulator and Verifier Integration

As pointed out before, the SOMC framework consists of four basic parts: firstly, the state space reconstruction routine, which initialises the model in accordance with its previous state after environment changes have occurred; secondly, the model adaption procedures that are used to modify the model structure during the reconstruction phase; thirdly, a simulator which is coupled with the simulation routine of the underlying model checker and thereby handles the traversal of a specific path through the model; and fourthly, a verifier part that calls the verification methods of the model checker and organises the incoming queries and resulting data. Based on the system concept of the previous section, we can already use the standardised internal model representation and its interface structure as a basis for model adaptions and hence for any model changes necessary to reach a formerly determined and later reconstructed state space. For the final part of our framework adaption considerations, we now take a look at the latter two parts - the concrete integration of the simulator and verifier components for a particular model checker.

The simulator component handles the calls to the system model of the underlying model checker that aim at either its automatic simulation or the execution of defined or random transitions, as well as the scheduling of internal data variable changes and the initiation of the reconstruction routine. The necessary methods that we demand from such a checker system to be compatible with our framework are defined by the ISimulator-interface, and its concrete implementation is handled by the corresponding simulator class of the model checker integration module. An overview of the included methods is given in Table 5.1.

    void load(String file) / void save(String file)       - load a model for simulation / save a snapshot
    void setRealtime(long realtime) / long getRealtime()  - set / get the realtime ratio setting
    void start() / void stop() / void pause()             - start / stop / pause the simulation
    boolean isRunning() / boolean isPaused()              - check whether the simulation is running and/or paused
    void execute(String name)                             - force the execution of a transition
    void reconstruct()                                    - initiate the state space reconstruction
    void addVerifierHook / removeVerifierHook(IVerifierHook hook) - handle registered hook objects
    void scheduleDataChange(String varName, Double varValue) - schedule variable data for change
    IVerifier getVerifier()                               - return the connected verifier
    Exception getException()                              - return the most recently occurred exception
    ITASystem getTASystem()                               - return the TA system

Table 5.1.: Methods declared by the ISimulator-interface

As an example, the concrete implementation of the load-method for both Uppaal and Prism is shown in Figure 5.4.

Figure 5.4.: Comparison of the load-method implemented for Uppaal and Prism ((a) snippet from UppaalSimulator, (b) snippet from PrismSimulator)
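Rendered as Java, the contract of Table 5.1 reads as follows; the stub declarations for IVerifierHook, IVerifier and ITASystem are placeholders so that the sketch is self-contained, and do not reproduce the framework's actual definitions:

    // Stub types standing in for the framework's own interfaces:
    interface IVerifierHook {}
    interface IVerifier {}
    interface ITASystem {}

    // The ISimulator contract of Table 5.1 as a Java interface.
    public interface ISimulator {
        void load(String file);                   // load a model for simulation
        void save(String file);                   // save a snapshot
        void setRealtime(long realtime);          // set the realtime ratio setting
        long getRealtime();
        void start();                             // start the simulation
        void stop();                              // stop the simulation
        void pause();                             // pause the simulation
        boolean isRunning();
        boolean isPaused();
        void execute(String name);                // force the execution of a transition
        void reconstruct();                       // initiate the state space reconstruction
        void addVerifierHook(IVerifierHook hook);
        void removeVerifierHook(IVerifierHook hook);
        void scheduleDataChange(String varName, Double varValue);
        IVerifier getVerifier();                  // return the connected verifier
        Exception getException();                 // most recently occurred exception
        ITASystem getTASystem();                  // return the TA system
    }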

We can see that while the Uppaal integration only needs the PrototypeDocument of the loaded model, the Prism integration needs to be provided with further type parameters in addition to the parsed model file, and the model representation needs to be loaded by the Prism system manually afterwards. The peculiarities of the individual systems are thus all handled in the concrete implementations of the common simulator interface.

In a similar manner, we deal with the verifier part of the SOMC system. Two interfaces provide the necessary methods for the communication with the framework: the IVerifier-interface, which contains the methods for setting up checker-specific and verification-interval-relevant options and also initiates the building and delegation of query strings based on the data provided by the corresponding QueryData-object, and the IStandardQueries-interface, which is implemented to concretise the building process of query strings suitable for the model checker. Again, an overview of the interface methods is provided in Table 5.2, and the code in Figure 5.5 shows the differences in their implementation that can occur for different model checkers without affecting the called method itself, using the example of the query-method.

    void setOption(String option, String value)      - set checker-specific options
    void setScope(long scope)                        - set the time scope
    void setNumberOfRuns(int runs)                   - set the desired sample count
    void setPeriodStartTime(long periodStartTime)    - set the start time for time-bound verification
    void setPeriod(long period)                      - set the time period length
    void addVerifierHook / removeVerifierHook(IVerifierHook hook) - handle registered hook objects
    String[] query(QueryData queryData, boolean timebound) - execute the query
    String buildQuery(QueryData queryData)           - build a string from a query object
    void stopQuery()                                 - interrupt the running query

Table 5.2.: Methods declared by the IVerifier-interface
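The following sketch shows how a caller could drive a subset of this contract for one time-bounded verification step; the stubs mirror Table 5.2, while the call sequence itself is our illustration rather than the framework source:

    public final class VerificationStep {

        // Subset of the IVerifier contract of Table 5.2, stubbed locally:
        interface IVerifier {
            void setNumberOfRuns(int runs);
            void setPeriodStartTime(long periodStartTime);
            void setPeriod(long period);
            String[] query(QueryData queryData, boolean timebound);
            void stopQuery();
        }
        static class QueryData { int adaptedSamples; }

        /** Run one pre-processed query inside the current verification interval. */
        static String[] runInterval(IVerifier verifier, QueryData q,
                                    long startMillis, long periodMillis) {
            verifier.setPeriodStartTime(startMillis);   // align with the SOMC cycle
            verifier.setPeriod(periodMillis);
            verifier.setNumberOfRuns(q.adaptedSamples); // reduced sample count
            return verifier.query(q, true);             // time-bound evaluation
        }
    }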

As the Uppaal verifier engine automatically distinguishes between queries that need a mathematical symbolic deduction of the result and those requiring a sample-based simulation approach, both kinds of verification are initiated by the same API call. In contrast to that, the Prism model checker requires some further steps during the process, such as parsing the properties of the modules file, and defining which of the three statistical main parameters is not given and needs to be calculated by the engine (via the CIiterations-, CIconfidence- or CIwidth-method). Additionally, we have to manually call either the modelCheck- or the modelCheckSimulator-method to distinguish between a symbolic and a simulation-based verification approach.

Figure 5.5.: Comparison of the query-method implemented for Uppaal and Prism ((a) snippet from UppaalVerifier, (b) snippet from PrismVerifier)

5.4. Extensions Summary and Evaluation

Starting from a framework that was strictly bound to a single model checking solution, we can now consider and compare the results of multiple model checkers in the scope of this work, which can be incorporated with the new modular and interface-based structure without the need to further adapt any of the core routines of the framework. The conditions that a model checker needs to fulfill for the integration are limited to an API providing the basic functionality of initiating and executing the simulation and verification routines (cf. Section 5.3), and an internal model structure that either corresponds to the framework-internal concept of locations, edges, modules and documents, or can at least be transformed into that structure. Both the API calls and potential model transformation steps are handled in the concrete implementations of the interfaces for each model checker. While this solution already contributed to the flexibility of the framework, it can be further enhanced in the future by adding the feature to transform the framework-internal model representation into representations that can be handled by an individual checker. Combined with the reverse process, which is defined in the implementation of our interfaces, it would be possible to use different model checkers without the need to manually adapt the model for that particular system. At the moment, we still need to provide a

new, adapted model file for each of the checker engines that we want to use, which results in an additional and - due to the internal representation, which is standardised anyway - also unnecessary effort.

6. GUI Solutions

Before we move on to the theoretical core of this work regarding the adapted reconstruction routine for models including stochastic components, we use this chapter to give an overview of the changes to the GUI which is deployed with the SOMC framework. After a brief introduction to the GUI in Section 6.1, we present the different components that form the complete cycle from experiment creation to the simulation and verification of its system model, comprising the graphical experiment view in Section 6.2, the code-based experiment view in Section 6.3, and finally the graphical model view in Section 6.4.

6.1. GUI Introduction

Based on the GUI solution displayed in Figure 6.1, which is distributed with the original OMC framework, we aimed for an extended user interface that bundles the different steps of the framework application cycle without the need to leave the GUI environment or to extend the package structure of the framework itself (the original version we worked with required the concrete experiment classes to be located at a fixed path inside the framework package structure). To provide the functionality for both the new graphical and the former code-based experiment creation in a single environment, combined with an OSGi-specification-based, extensible module structure of the framework, we decided to port the existing pieces of code from the Abstract Window Toolkit (AWT) to the Standard Widget Toolkit (SWT), and to enable that code - paired with our own additions - to be inserted into the Eclipse IDE, which is based on SWT as well.

Figure 6.1.: The GUI distributed with the original OMC framework

In this way, new

modules containing either the interface implementation of a new model checker or the description of an experiment can be developed right within the environment or obtained from external sources, and can then be installed into the complete system dynamically. The GUI views that enable the experiment creation and verification features are described in more detail in the following.

6.2. Graphical Experiment View

The graphical experiment view was created as an alternative to the already existing code-based solution, which utilised self-contained classes to describe the experiment and its concrete system objects. The graphical view consists of a node-based system, which uses nodes to represent instances of a script-function object, and edges to connect their inputs and outputs to preceding and subsequent nodes of other functions. In this manner, we can create complex function chains that can simulate the behaviour of a virtual system, control the variable data of the verified system, and analyse the verification data to complement the cycle by adapting the verification parameters of different queries. As an example, some components of a sample experiment are shown in Figure 6.2.

Figure 6.2.: The graphical experiment view of the GUI

The Trigger-nodes frequently output a signal for a single simulation step, which is used to activate the query- and model-related nodes. Following the flow of the components, the QueryScheduler-node is activated first, followed by the assignment of specific data to some probability variables inside the model by the ModelProbability-node. Finally, a query verification is triggered by the QueryStill-node, which then outputs the query results for further use in the experiment system. Independent from that, the nodes at the bottom are used to stream data samples from a file (CSVDataStream), buffer that data (DataBuffer) and finally split the samples into column groups (MotionDataRowSplitter).

6.3. Code-Based Experiment View

The code-based experiment view offers the experiment creation functionality of the original OMC framework, which allows the user to extend the OMCApplication-class and implement the functions of the virtual system, the handling of sensor data, and the desired verification queries. The view consists of the code input area, which is directly connected to the Java editor of the Eclipse IDE, and an overview section of the existing experiments, as shown in Figure 6.3.

Figure 6.3.: The code-based experiment view of the GUI

After the creation of a new experiment, the corresponding module can be installed into the system, and can then be manually loaded for execution within the graphical model view, which disables the data output of the graphical experiment view until reactivation.

6.4. Graphical Model View

The last of our three GUI main parts is the graphical model view shown in Figure 6.4, which combines the GUI capabilities of the OMC framework with the newly added query definition tool. The sensor and variable sub-views display all incoming and model-internal data, and the model sub-view illustrates the modules of a model in a graphical representation (in case graphical data is provided by the model) and its current activation status in real-time. These components are covered further in the original work by Rinast [6]. Based on that view, we contributed an additional section for a more dynamic query definition, which is located below the model sub-view. While it was formerly necessary to define the query strings and initiate their execution within the concrete experiment application class, we can use this view to create and monitor queries without the need to leave the environment and to restart it after the experiment code was adapted. Besides that, we are now given an overview of the desired and actual statistical query parameters before and after the application of the pre-processor and scheduler, which can additionally help to supervise the system during simulation. For each of these queries, it

is finally possible to create a node-object in the graphical experiment view, which allows us to use the verification results for further parameter adaptions, and which closes the cycle of model updating, simulation and verification.

Figure 6.4.: The graphical model view of the GUI

7. Reconstruction Routine Extension

Having finished the handling of the verification timing behaviour, the framework restructuring and the GUI extension for enhanced usability, we now want to concentrate on the core feature that allows the framework to set the model variables to up-to-date system observations and to reconstruct the original state space of the model simulation together with these variable changes. This reconstruction routine was originally designed for deterministic models by Rinast [6], and needs to be adapted to handle the non-deterministic components of stochastic models. We start with the current situation of the reconstruction process in Section 7.1, where we describe the capabilities and the general problem of the routine. Afterwards, we elaborate the necessary changes and conditions for a statistical application in Section 7.2. The concrete concept that covers the stages of state determination, tracing and verification in a stochastic environment is presented in Section 7.3. We finally evaluate the capabilities of the new reconstruction approach in Section 7.4, explain its advantages and limitations, and describe its planned structure for a future case study application.

7.1. Starting Point

Our considerations build on the state space reconstruction routine of the original framework for deterministic, non-stochastic models. In that concept, the following state is always uniquely defined, provided that a valid one exists. Based on the current state of the system, two different cases can occur during the verification of the model: the following state, which is determined by newly measured variable data of the real-world environment, is either reachable within the measured time span, indicating that the model is still valid, or the model became invalid for the new set of measured environment data, which then requires the adaption of model parameters or a complete exchange of the system model. As a result, it is possible to run a real-time simulation of the model in parallel to the observed real system without further adaptions, as the model path is clearly determined.

Figure 7.1.: The assumed and real paths of a potential simulation scenario ((a) assumed path, (b) real path)

This approach reaches its limits as soon as stochastic components are introduced into the model. With the use of probabilistic branches or exponential state activity rates, a situation can occur in which the simulator is in a valid current state, and a following state derived from new system observations may be technically reachable, but the verification regarding the reachability of that next state can still evaluate to false. Figure 7.1 illustrates the problem using the example of a probabilistic branch: the different outgoing edges are weighted with specific probabilities, and one of the edges is selected randomly, based on this probability distribution. Due to that random decision, it is possible that the system model selects one certain path, while the real-world system might have changed according to one of the other edges. The following verification result does therefore not identify the model as invalid, but rather indicates that either the model or the current path decision does not conform to the observed system. For that reason, it is necessary to keep track of multiple, possibly correct model states of the simulation as long as a clear decision on one of the paths is not possible based on the current system evidence. During this process, it can be possible to frequently discard property-violating traces until only a single one is finally left.

We can see that for a reliable decision on the probability of each trace, it is necessary to determine that a new potential state is actually reachable. If this reachability of a measured real-world state can be ruled out for a possible path through the model, we can conclude that either the involved stochastic decisions within that path did not match the real system, or that the system parameters were incorrectly chosen. In both cases, this allows us to discard that specific trace and concentrate on the remaining ones. Within the framework, we can generally use two different approaches to determine reachability, which indicates that the next state lies inside the feasible region defined by the current system constraints. Symbolic model checking can be the right choice if we are supposed to identify the validity of a path with complete certainty. As we already pointed out in the Conceptual Foundations chapter, the great problem with this approach is the timing aspect due to a possible state space explosion during verification. The alternative solution would be to rely on statistical model checking with the online extension for that purpose as well, deriving a temporally bounded solution. The problem here is the possibility of missing all valid paths to the new state with the limited amount of sample runs, which would return a wrong conclusion. It is then possible to get into a situation in which all remaining traces turn out to lead to an invalid state in the future, while the correct path was already discarded in the past. For that reason, we take a closer look at the feasibility of reversions and simulation repetitions, as well as the case of a highly improbable, but still possible path, in Section 7.3.

7.2. Necessary Changes and Conditions

For the realisation of any of the concepts developed in this chapter, certain structural changes need to be applied to the reconstruction routine, and a defined set of conditions

needs to hold for a model to allow for its effective analysis. Firstly, we need to adapt the reconstruction routine to support the automatic detection of stochastic components. This can be achieved by analysing both the potential destination locations of the involved transitions - in terms of either statistical parameters such as exponential rates or their concrete classes (e.g., the BranchPoint-class in Uppaal) - and the probability values of the edges covered by each transition. The probability values at the times when a branching point is taken also need to be stored internally, to allow for a merging of paths as described in the next section. In terms of generated model traces, the reconstructor needs to back up all simultaneously simulated state sequences as well as the propositions that needed to hold during these sequences. The latter information is needed if a state determination concept including state reversion is used, as the series of propositions derived from the corresponding system observations have to be checked against the alternative path as well. Finally, the reconstruction routine needs to automatically initiate the execution and progression of all parallel simulations, starting from the set of possible current states.

For the actually simulated model, the requirements are basically identical to the ones formulated for the original reconstruction process. The model is supposed to be non-zeno, meaning that it is not possible to execute an infinite number of actions in a finite amount of time. This attribute is especially important to limit the state space for the considered time scope. The value range of clock variables needs to be limited as well. For that reason, we demand that all involved clocks are reset at some point on each possible path. Otherwise, the state space would grow continuously due to the fact that the clock variables always increment their values, which would create a new state on each step even when the remaining parameters are equal to the ones found in a previous state.

7.3. State Determination, Tracing and Verification

In general, we can distinguish between three different cases that can occur when we check for the validity of future states based on the paths we take into consideration, featuring the current state s_c:

1. A single valid next state s_n exists
   a) A single valid path from s_c to s_n exists
   b) Multiple valid paths from s_c to s_n exist
2. Multiple valid next states s_n,i exist
3. No valid next state exists
   a) All potentially valid states s_n,i differ only in one clock variable value
   b) All potentially valid states s_n,i differ in one parameter value
   c) All potentially valid states s_n,i differ in multiple clock and/or parameter values

In the first case, with only one valid following state, we can further distinguish between two scenarios: in the optimal case, we identify only a single path between the current and the next state. The process is then equal to the deterministic case, in which the path through the model is clear. A different case is given when we find multiple paths which all connect the current and the next state, which can occur if the state in question is not entirely defined regarding all system variables. Under that circumstance, we have to determine to which degree these paths differ, and whether these differences might have an impact on critical parameter assignments of future states or not. As soon as the affected variables of all determined paths are finally set to a common value during further progress, these paths can either be merged, or a random path among them can be selected as the valid one.

The second case of multiple valid next states requires a more in-depth analysis. The most fundamental part of a reconstruction routine supporting the inclusion of stochastic components is the determination of the set of model states which correspond to the current real system state and which are possible to reach via the chain of previously determined states. Various search strategies can be applied for that purpose, and we want to focus on three of them: a Single Path concept which considers one path at a time, a Multiple Paths strategy which takes all possible paths into consideration, and a Probable Paths strategy which considers paths based on their probability.

Single Path Strategy

The Single Path strategy is shaped by the idea that the most likely path in the model is also the one most probably taken by the observed system, and thus it is solely considered as long as no property violations occur, in which case a reversion step to the most recent probabilistic decision is executed. The essential steps of this strategy are shown in Figure 7.2.

Figure 7.2.: The model simulation trace considering one path at a time

In the first step, the system model is initialised with a well-defined and valid

state A. As long as the states follow a deterministic path - as given between state A and the branching point - the state space reconstruction routine remains unaltered. With the occurrence of the branching point in the second step, the system now chooses the most probable transition based on the currently assigned probabilities, leading to the state E in the third step, which is at this point still considered the true one, provided that its parameters conform to the observed state. In the fourth step, the system encounters a state that does not satisfy the propositions defined by the system observations. For that reason, the state space reconstruction routine performs a reversion action to the last probabilistic component in the fifth step, which is the branching point in this case. From that restored position, the system then needs to choose the second-most probable transition, and advances until it reaches a state that temporally corresponds to the real system - which is state I in the last step - while checking that each intermediate state satisfies the backed-up propositions at their specific points of time. In the case that all possible traces starting from a stochastic component are finally considered invalid, the system reverts to the second-most previous component. The benefit of this solution is the fact that as long as a path can be considered valid, the system only needs to handle the workload of one single path during simulation, which can be advantageous for our real-time constraints. The revert action, though, can cause a critical timing situation, as the formerly unconsidered path needs to be fully simulated until it catches up with the real system. This can become even more severe when multiple reverting steps are necessary until a valid and temporally equivalent state is found.
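A minimal sketch of this bookkeeping - a stack of decision points with the alternatives ordered by descending probability; states are represented as plain strings, and all names are illustrative stand-ins for the framework's model objects - could look as follows:

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.List;

    public final class SinglePathTracer {

        /** A probabilistic decision point: alternatives sorted by probability. */
        public static final class Decision {
            final List<String> alternatives;   // target states, most probable first
            int taken = 0;                     // index of the alternative in use
            public Decision(List<String> alternatives) { this.alternatives = alternatives; }
        }

        private final Deque<Decision> decisions = new ArrayDeque<>();

        /** Entering a branching point: follow the most probable alternative. */
        public String choose(Decision d) {
            decisions.push(d);
            return d.alternatives.get(0);
        }

        /** A backed-up proposition was violated: revert to the most recent
         *  decision and take its next-most probable alternative; if all of its
         *  alternatives failed, revert to the second-most previous decision. */
        public String revert() {
            while (!decisions.isEmpty()) {
                Decision d = decisions.peek();
                if (++d.taken < d.alternatives.size())
                    return d.alternatives.get(d.taken);
                decisions.pop();   // exhausted: step back one more decision point
            }
            return null;           // no valid path left: model adaption required
        }
    }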

Multiple Paths Strategy

An alternative way to deal with several valid traces can be a Multiple Paths strategy, which is illustrated in Figure 7.3. In contrast to the single path concept, this approach tries to consider all possible paths when the simulator encounters a stochastic element. We see that after a probabilistic branch is reached in the second step, the reconstruction routine would then handle all paths as possibly valid, which is shown in the third step, and would initialise the system model individually for every possible state during each simulation step. Similar to the last step of the single path strategy, the fourth step indicates the occurrence of an invalid state on the sub-path D → E → F, which is thus discarded, while the second, still valid path is followed further. This strategy allows for the most flexible reaction to different state situations, as it considers all possible scenarios at each point of time. Nevertheless, it may not be recommended for a real-time application, as the number of parallel traces is potentially unbounded, resulting in a likewise unbounded time consumption during simulation.

Figure 7.3.: The model simulation trace considering all possible paths

Probable Paths Strategy

A third approach to our reconstruction problem is a Probable Paths strategy, which can be viewed as a combination of aspects from the two previous strategies, choosing the paths to consider based on the individual path probability in comparison to a certain threshold value. The basic steps of this concept are shown in Figure 7.4. After the initialisation of the system model with the state A in the first step, the simulation encounters the first stochastic component, which is again a probabilistic branching point, in the second step. The probabilities of the possible following paths are 0.7 and 0.3, which both lie above the selected threshold of 0.1. For that reason, both paths are considered possibly valid, and simulated in parallel from that point on. The third step illustrates the encounter with another branching point for each individual path. We can see that the paths after E are taken with probabilities of 0.8 and 0.2, resulting in the complete path probabilities of 0.7 · 0.8 = 0.56 and 0.7 · 0.2 = 0.14, respectively, which are both above the threshold of 0.1. Contrary to that, the probabilities of the complete paths containing the second path are 0.3 · 0.8 = 0.24 and 0.3 · 0.2 = 0.06, of which the second probability lies below the threshold value. The first three paths are therefore executed in parallel now, while the last one is excluded until evidence requires discarding all other parallel paths starting from that individual branching point. This case is shown in the last step, where the state M was identified as invalid, so that its alternative sub-path K → N → O is now activated as a replacement.

Figure 7.4.: The model simulation trace considering paths above a probability threshold
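The bookkeeping of this strategy can be sketched as follows; the probabilities reproduce the example above, while all class and method names are illustrative:

    import java.util.ArrayList;
    import java.util.List;

    public final class ProbablePaths {

        public static final class Path {
            double probability = 1.0;     // product of branch probabilities so far
            boolean suspended = false;
        }

        /** Branch a path: each alternative scales the cumulative probability,
         *  and paths below the threshold are suspended rather than discarded. */
        public static List<Path> branch(Path parent, double[] branchProbs,
                                        double threshold) {
            List<Path> children = new ArrayList<>();
            for (double p : branchProbs) {
                Path child = new Path();
                child.probability = parent.probability * p;
                child.suspended = child.probability < threshold; // e.g. 0.1
                children.add(child);
            }
            return children;
        }

        public static void main(String[] args) {
            Path root = new Path();
            for (Path first : branch(root, new double[]{0.7, 0.3}, 0.1))
                for (Path second : branch(first, new double[]{0.8, 0.2}, 0.1))
                    System.out.printf("p = %.2f, suspended = %b%n",
                                      second.probability, second.suspended);
            // Prints 0.56/false, 0.14/false, 0.24/false, 0.06/true as in Figure 7.4
        }
    }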

As a growing number of potential paths leads to an increasingly smaller relative probability for each path, using a fixed probability threshold can result in the temporary suspension of all paths, which then under-run the threshold value. Therefore, it may be reasonable to either scale the probability bound according to the number of possible traces, or to define a minimum number of paths up to which new paths are added even if their probability values lie below the threshold. With this approach, we can try to find a balance between actively considered and temporarily ignored paths. Nevertheless, we need to keep in mind that even with this compromise, it is not guaranteed that a valid path can always be determined within the limited time frame of a run-time environment, and it is questionable whether such a state-space-limited solution for probabilistic systems can actually be found at all. The number of simultaneously simulated paths or necessary reversions heavily depends on the number of stochastic components, their traversal frequency, the extent of deterministic sections between them, and the distinguishability of the resulting paths.

As a last case, it is possible that no conforming next state exists. If a valid model state and its corresponding path cannot be identified for any of the considered traces, the standard routines which were already used in the original framework can be applied, which consist of an adaption of the model variables, including clocks and frame parameters. All possible traces then need to be simulated again for each new assignment of parameters, until a suitable state is finally found. A manageable case is given when only one clock variable or parameter is affected, while the remaining ones correspond to the desired state. An adaption of the temporal constraints such as delays might then already be sufficient to conform the model to the observed system. A more critical situation occurs when more than one parameter is affected. In the worst case, it is possible that neither decision adaption nor parameter adaption leads to a valid state. In this case, we can consider either of the following actions:

1. Cancel the experiment and initiate an appropriate countermeasure routine
2. Continue with the state that is closest to a valid configuration

The concrete decision should be based on the safety constraints of the field of application. If a wrongly identified state can lead to severe damage to the involved devices or persons, it is more reasonable to stop the simultaneously executed model simulation and verification, based on well-defined shutdown routines. In other cases, we may deal with a system that either exhibits dangerous behaviour on an immediate shutdown, or does not react to slight deviations in a critical way. For these systems, it can also be reasonable to temporarily assume the most suitable invalid state to be valid, which may eventually lead to another truly valid state, and thus to continue the experiment.

Path Merging and Verification

With the alternatives of reconstruction process adaptions being clarified, we still have to investigate how to treat parallel traces (as applied in the Multiple Paths and Probable Paths strategies) in the actual verification process, especially when we want to determine the probability with which a certain property holds, or when we perform a query for the value bounds of selected parameters. In these cases, we will obtain probability values

for each path, which need to be merged afterwards by scaling them with their individual weights. These weights are the probability values defined by the sequence of involved stochastic decisions within the path. The sum of these weighted probabilities or value bounds finally represents the single verification result on which an optional countermeasure evaluation can then be based. An example of this weighting is shown in Figure 7.5. Based on the path that was chosen by the simulator, and assuming that a transition is taken in every time unit, the variable x is increased by a fixed value on each step, which leads to different final values of x after an exemplary scope of 5 time units. By scaling each outcome with its corresponding probability, we get the values 0.2 · 5 = 1, 0.1 · 10 = 1, 0.1 · 20 = 2 and 0.6 · 15 = 9, which sum up to an average value of 13.

Figure 7.5.: An example for the determination of average variable values spanning multiple traces
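A minimal sketch of this merge step, reproducing the numbers of the example above (all names are illustrative):

    public final class PathMerger {

        /** Weighted sum of per-path results (the path probabilities sum to 1). */
        public static double merge(double[] pathProbabilities, double[] results) {
            double merged = 0.0;
            for (int i = 0; i < results.length; i++)
                merged += pathProbabilities[i] * results[i];
            return merged;
        }

        public static void main(String[] args) {
            // Final values of x per path and the corresponding path weights:
            double[] weights = {0.2, 0.1, 0.1, 0.6};
            double[] finalX  = {5, 10, 20, 15};
            System.out.println(merge(weights, finalX)); // 13.0
        }
    }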

7.4. Routine Evaluation

In the previous sections, we described the foundation of the original model reconstruction routine, formulated the concept and model requirements for the introduction of an adapted approach, and specified different techniques to deal with the emerging problems caused by multiple potentially valid model paths and states, for which we explained the simulation process and the handling of partial verification results. For the scope of this thesis, we rely on the simulation and verification systems of the underlying third-party engine used by the selected model checker. The determination of state reachability could otherwise be improved via a combination of systematic symbolic model checking and a sample-based approach, by which certain invalid states and their corresponding paths could be directly identified based on system constraints given by concrete variable assignments and clock resets on specific edges, while the remaining paths are then checked with simulation runs. Paired with the monotonicity behaviour of performed variable modifications, it may be possible to restrict the set of considered paths even further. But even with the application of these improvements, and by choosing one of the introduced model traversal approaches, the major problem of a potentially unbounded state space distributed among multiple traces still remains. In such a case, it can only be attempted to observe additional environment parameters, which may allow for an earlier clear determination of the correct state.

Besides that, we still need to think about a suitable measure for the conformance of different traces, as the type of measure heavily affects the effectiveness of the path merging step in the case that parallel paths eventually lead to a similar following state. Then again, it will always be a critical point to decide whether traces can be merged without simultaneously preventing a valid future state from being reached, which might only be reachable by the state sequence of one of the original paths involved in the merge step. In addition to that, an extended solution might also consider the direction of its calculation. Simultaneously to the real-time model simulation, the valid value intervals of the current and following states can be calculated ahead of time, allowing for a faster evaluation of valid paths when a new state configuration is extracted from system observations. The problem here is to determine how far into the future the calculation needs to take place, as clock variables may also be subject to model adaptions and thus do not express exactly how many time units are supposed to be consumed. An alternative solution would be to wait for a new data sample, roughly determine the possible model states, and calculate backwards until reaching a given previous path. With this method, we would in turn have to deal with a delay in the data calculation, so that both approaches have their individual limitations.

For the following case study, featuring a model with only a few states representing each pattern, which are rather distinctive in their variable modification behaviour, we could apply the state space reconstruction routine without the described optimisation efforts. As the unique nature of the patterns should allow for a fast decision on the currently active state, we would choose the first-introduced Single Path strategy, as we expect the model to require only few reversion steps in case of an encounter with an invalid state. Nevertheless, due to the implications of the adapted approach and partial incompatibilities of the original and the new concept, we leave the extended reconstruction routine in its theoretical state without applying it in a case study before further feasibility considerations.


8. Head Motion Case Study

Using the concepts introduced in the previous chapters, we want to demonstrate the framework by applying it to a case study in the medical and technical domain, which deals in particular with the tracking and identification of head motion patterns. In Section 8.1, we provide an overview of the case study concept and its general scope. Afterwards, we present the motion protocol that our experiments are based on in more detail in Section 8.2, and describe the composition of the measured data sets. In Section 8.3, we present and discuss the model concepts and realisations, and the specific attributes that demonstrate their potential suitability for our reconstruction concept. Finally, we explain the different stages of the simulation-verification-cycle in the concrete application, and specifically how they contribute to the final result, in Section 8.4.

8.1. Concept of the Case Study

The core of this case study consists of the real-time analysis of different head motion patterns via a system model that describes the possible motions with a sequence of states paired with subject-related, adaptable parameters. Three different types of motion are considered:

1. A steady head position that is characterised by only marginal translation and rotation
2. A fast refocus motion during which the test person changes the point of focus from one object to another at a different angle
3. A typical motion pattern of falling asleep in a sitting position

To illustrate the different motion patterns in terms of their influence on the head translation and rotation, Figure 8.1 shows an extract from the measured data for each movement phase. We can see that the steady head position causes a minor dithering effect in all translational and rotational dimensions, combined with occasional artifacts due to unconscious reactions. The refocus process is mostly defined by a translation on the xz-plane and a central rotation around the y-axis, and the motion sequence of falling asleep mainly shows a translation on the xy-plane and a rotation around the x-axis. A useful attribute of this experiment is the clear distinction between the different motion patterns, making it possible to decide on one of the possible schemes after a short time, which also helps to reduce the simulation state space of the experiment in the long term in case a reconstruction routine is applied.

8.2. Motion Experiment Protocol

In order to allow for a direct comparison of the verification results for several test persons, a standard motion protocol is introduced for the experiment, which contains multiple

repetitions of each pattern together with changes to the motion parameters within each pattern. The complete experiment procedure is defined as follows:

1. 30 seconds in the initial steady head position
2. In each case, 3 times a double refocus motion - with 10 seconds of a steady head position in between - to an object
   a) at an angle of 25° counter-clockwise and back,
   b) at an angle of 45° counter-clockwise and back,
   c) at an angle of 45° clockwise and back
3. 20 seconds in the initial steady head position
4. 3 times a motion of falling asleep, in which
   a) the first two are fluently performed
   b) the third one ends with a rather startled awakening motion
5. 120 seconds in the initial steady head position

The first 30 seconds of steady head posture are included to establish a fixed reference point in the measured data for each experiment run, and the final 120 seconds are used to determine whether a probability increase for occurring motion artifacts is noticeable when the head is kept in a fixed position for a longer period of time.

Figure 8.1.: The translation and rotation sequences of the three patterns steady position (left), fast refocus (center), and falling asleep (right) ((a)/(b) steady, (c)/(d) refocus, (e)/(f) sleep; translation [units] and rotation [degree] over time [s])

For the scope of this work, we base the application of the SOMC framework capabilities on a set of movement data measured for 4 different test persons. The complete trace of the translational and rotational components of each data set is shown in Figure 8.2.

Figure 8.2.: The translation and rotation traces of 4 different test persons ((a) translation traces, (b) rotation traces)

We can immediately notice that even though the general motion follows the protocol given above, there exist individual peculiarities in the way the single motions are executed. The tracking of these differences in both the spatial movement and the timing behaviour during the experiment, as well as the overall determination of the current motion pattern, is subject to the SOMC process, which uses a frequent verification attempt to identify the pattern, and which could eventually predict the future value bounds of the motion parameters based on the obtained probability data of each motion scheme.

8.3. Model Structure And Attributes

The parameters which are frequently derived from the measured data of the previous section are passed on to the formal model of the observed system, which is updated accordingly and then used to check the properties that are necessary to hold for the

model to be valid in relation to our observations. For the head motion case study, we use different model attempts to find out which one is the most suitable for a fast and precise pattern determination and motion approximation.

The first attempt is shown in Figure 8.3. It is the most basic version of the model, which does not rely on the information about the state it was supposedly in at the previous execution step to determine the motion pattern, and thus does not depend on the state space reconstruction routine. The model consists of 4 templates: a main template in which the probabilistic weighting of the possible patterns as well as the frequent triggering of the single motion steps is handled (HeadMotionMain), and 3 different templates for the individual motion patterns, in which the value updates for the position and rotation variables are handled (StillStep, RefocusStep, and SleepStep). In our case, the motion patterns are provided with averaged velocity data gathered over a defined past time scope, and they use that information to linearly approximate the following motion steps.

Figure 8.3.: The components of the first basic Uppaal head motion model ((a) HeadMotionMain, (b) StillStep, (c) RefocusStep, (d) SleepStep)

In the first experiment with that model, we do not use value bounds for the velocity data in any pattern. For the exemplary case of a refocusing step, this means that it will always be identified as a refocus motion, among others, in case the x and z values of the observed real data change according to the previously measured velocities velx and velz. In particular, this means that even a mostly steady head position is also considered a potential refocus motion executed at a comparatively small speed. In the second experiment, we keep track of the global maximum and minimum velocity data of the observed system, and dynamically set the parameter bounds for the randomly selected velocity values of the simulated system in relation to the observed bounds. In this way, we make sure that a steady head position can be uniquely determined as such, as the refocus and sleep patterns now apply a minimum rate for the position change.

While the model used in the two experiments is sufficient to determine the motion patterns based on a linear approach, and even though the case study scenarios in the practical part of this thesis focus only on these two variants, its statelessness regarding the past does not allow for more complex pattern models which consist of multiple motion phases (e.g., a refocus motion that starts with a constant acceleration, then keeps the velocity at a certain level, and finally decelerates at a constant rate) or of non-linear, time-dependent functions. In these cases, it is necessary to reconstruct the model after verification, restoring the last determined state it was in. One model that can be used for that purpose is shown in Figure 8.4. Compared to the previous model, it does not expect the simulation to remain in the once determined state for the scope of one verification step, but allows for transitions from any pattern state to any other, again weighted with dynamically changing probabilities. The individual states remain active for a certain amount of time, during which the different phases of the related motion pattern are executed. The reconstruction process then assures that the exact state of the model (i.e., both the active pattern state of the main model and the progress of the active pattern model) is restored before another step of the simulation-verification cycle is executed; a sketch of this restore step is given below. In contrast to the first model, this attempt can finally give information about how likely each pattern is to be followed by another. As an example, the probability of the sleep pattern occurring directly after a fast refocus pattern, without traversing the steady motion state, is supposed to be quite low, and in the course of future experiments including the reconstruction routine, it is expected to be updated to a corresponding value.

Figure 8.4.: The main component of the state-based Uppaal head motion model.
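The following sketch shows one possible shape of such a restore step; every name in it (DeterminedState, reconstruct, the injected declarations) is hypothetical and only illustrates the idea of patching the model's initial configuration before the next cycle, not the actual reconstruction routine of the framework.

# Hypothetical sketch of the reconstruction idea.
from dataclasses import dataclass

@dataclass
class DeterminedState:
    pattern: str      # pattern state the main model was determined to be in
    phase: int        # index of the active phase within that pattern
    phase_time: int   # time already spent in the active phase

def reconstruct(model_source: str, state: DeterminedState) -> str:
    """Prepend declarations so that the next simulation-verification cycle
    starts from the previously determined state rather than the default
    initial one (in Uppaal, a clock cannot be initialised in the declarations,
    so the elapsed phase time would be consumed by an initial committed edge)."""
    header = ("// state restored by the reconstruction routine\n"
              f"const int initPattern = PATTERN_{state.pattern.upper()};\n"
              f"const int initPhase = {state.phase};\n"
              f"const int initPhaseTime = {state.phase_time};\n")
    return header + model_source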

In order to compare the frequently acquired probability values and the time consumption of the verification process with those of another model checker, we recreated the previously discussed first Uppaal model in the Prism modelling language. The declaration section, the main component of the head motion model, and one of the pattern implementations are shown in Figure 8.5.

Figure 8.5.: Exemplary components of the Prism head motion model. (a) Declarations, (b) HeadMotionMain, (c) RefocusStep.

A few changes needed to be applied in comparison to the Uppaal solution. Even though Prism in general supports PTAs (Probabilistic Timed Automata), including clocks and state invariants, such models are supported neither by the Prism simulator nor by the statistical model checker built on top of that simulator. As falling back to the supported explicit-state model checker would not be compatible with the real-time constraints we require for the verification process, we decided to rely on an alternative model type and remodelled the system as a DTMC (Discrete-Time Markov Chain). In place of clocks, which are not supported in DTMCs, we manually increase the time variables in each state by the amount of time the state is supposed to consume: instead of utilising an internal temporary clock ti, which is repeatedly reset after the time dt and initiates the pattern execution, we manually add dt to the global time t on each execution step. Additionally, as the Prism language does not support random value generation by function, we define the motion patterns by a series of modification transitions for each individual variable and introduce the randomness through the transition probabilities, as shown for the y-variable in the RefocusStep module in Figure 8.5. Finally, the Prism language supports double values only as constants, meaning that we need to handle the motion data as integer variables; the necessary pre-processing for that is done in the experiment model.
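To illustrate these points, the following is a minimal sketch in the Prism language with purely illustrative constants and value ranges, not the actual model of Figure 8.5: time is advanced manually by dt in every step, and the randomness of the y-update is encoded through the transition probabilities.

// Hedged sketch, not the thesis model: a DTMC with manual time increments
// and probabilistic value updates in place of a random-value function.
dtmc

const int dt = 1;      // time consumed by one execution step
const int vy_lo = 2;   // scaled integer velocities (doubles are only
const int vy_mid = 3;  // supported as constants in Prism)
const int vy_hi = 4;

module RefocusStep
    t : [0..1000] init 0;    // global time, increased by hand (no clocks in a DTMC)
    y : [-100..100] init 0;

    // one execution step: add dt to t and pick one of three velocity
    // values with equal probability
    [step] t + dt <= 1000 ->
          1/3 : (t' = t + dt) & (y' = min(100, y + vy_lo))
        + 1/3 : (t' = t + dt) & (y' = min(100, y + vy_mid))
        + 1/3 : (t' = t + dt) & (y' = min(100, y + vy_hi));
endmodule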
