
Fifth International Conference on Information Technology: New Generations

Combinatorial test case selection with Markovian usage models

Sergiy A. Vilkomir, W. Thomas Swain and Jesse H. Poore
Software Quality Research Laboratory, Department of Electrical Engineering and Computer Science, University of Tennessee, Knoxville, TN 37996, USA
{vilkomir, swain, poore}@eecs.utk.edu

This research is supported by the University of Tennessee Computational Science Initiative in collaboration with the Computing and Computational Sciences Directorate of Oak Ridge National Laboratory.

Abstract

A method of using Markov chain techniques for combinatorial test case selection is presented. The method can be used for statistical and coverage testing of many software programs, in particular scientific computational software. The central point of the approach is the modeling of dependencies between input parameters. Several different types of such dependencies are considered and models for each situation are created. Based on these models, test cases can be automatically generated and executed. Results of using the JUMBL tool for analyzing models and generating test cases are described.

Keywords: software testing, test case selection, test case generation, Markov chain, input parameters

1. Introduction

One of the most important approaches to software testing is model-based testing. Of particular interest are usage models, which represent all possible uses of the software in terms of system stimuli and states of use. Markov chains are often used as the mathematical basis of usage models [13, 9]. A Markov usage chain can be represented as a graph where nodes are states of use and arcs describe transitions between states. Probabilities can be allocated to each arc to reflect software usage likelihoods. A great advantage of this approach is the rich body of analytical results for Markov chains. Additionally, there is existing tool support for analyzing Markov chain models and automatically generating test cases from the model. The Software Quality Research Laboratory (SQRL) at the University of Tennessee has wide experience in applying the JUMBL tool [5, 11] for constructing and analyzing models, automatically generating test cases, executing tests, and analyzing testing results. The tool has been applied to scientific application software in [7, 10].

The model-based approach is usually applied to software where transitions between recognizable states of use are induced by input stimuli. However, for some software programs, such as those developed for scientific computations, the usage pattern consists of only three stages: entering input parameters, calculation, and generating results. In this case, the testing challenge is posed not by the proliferation of stimulus sequences but by the set of static input combinations. Usually the number of possible parameter values is too large for testing all input combinations to be practical. Different combinatorial approaches have been suggested to deal with this situation [2, 3], and combinatorial models are clearly applicable to such software. The main advantage of combining combinatorial methods with Markov chains is the possibility of using existing Markov chain tools to automate test case selection, execution, and evaluation, as well as the ability to apply statistical analysis to the testing process.

In this paper we propose a method of using a Markov chain model for test case generation while taking into account combinatorial dependencies between input parameters. The paper is structured as follows. Section 2 considers the use of the Markov chain model for test case generation in the case of independent input parameters.
In Section 3, we propose a method for test selection involving dependent input parameters. We consider different types of dependencies between parameters and illustrate the approach with several examples. Section 4 presents results of automatic model analysis and test case generation using the JUMBL tool. Finally, Section 5 presents conclusions and directions for future work.

2. Independent input parameters

The starting point of our consideration is a result from [7], where a Markov chain model was applied to test case selection for independent input parameters. In this approach, each state of the model represents a specific value of a discrete input parameter, and a transition to the state represents the assignment of this value to the parameter. Continuous parameters must first be abstracted into discrete parameters by partitioning the parameter's domain. The example in Fig. 1 shows a model for three input parameters i1, i2 and i3 with domains I1 = {a, b, c}, I2 = {d, e, f, g} and I3 = {h, k, m, n}, respectively.

Fig. 1. Example of Markov chain model for input parameter selection

Each path through the model from Enter to Exit represents a test case, for example <b, d, m>. The total number of different test cases (vectors) for this example is 3 x 4 x 4 = 48. For applications with many input parameters and large domains, the total number of possible test cases will, of course, pose a very challenging testing problem.

The model can be used for automatic test case generation for different purposes, for example coverage testing to cover all arcs of the model (i.e., to ensure that each parameter value is used at least once). Probabilities (not shown in Fig. 1) are typically assigned to reflect the relative frequency of use of each value, and then random tests can be automatically generated for statistical sampling and reliability evaluation. Test case generation based on graph flow algorithms is also used.

The approach in [7] has been used in real software projects, in particular for testing a cluster control utility. A test automation framework was implemented using Python and shell scripts. About 7,000 test cases were automatically generated, representing all paths through the usage model (since it contained no loops or cycles). The complete test set can be executed and evaluated in about 4 hours.

In that application, the values of the different input parameters were independent, i.e., any combination of input values is valid as a test case. For many software programs, however, the possible values of one input parameter depend on the value of another parameter. In these cases not all combinations of input values are meaningful or valid, and dependencies among input parameters should be systematically taken into account during modeling and test case generation. In this paper we present a systematic method for model construction and test case generation for software with dependent input parameters (Section 3 below).

As a preliminary step, we modify the model in Fig. 1 to reduce the number of states (Fig. 2). In Fig. 2, a state represents the fact that the value of a parameter is fixed (without specifying the value), and an arc represents a specific input value. As in Fig. 1, a path (a sequence of arcs) through the model represents a test case.

Fig. 2. Model for input parameter selection with a reduced number of states

From the point of view of test case generation algorithms, the models in Fig. 1 and Fig. 2 are quite different, because test generation relies on graph algorithms and the two graphs have different structures. However, both are capable of generating the same set of test cases, and the representation in Fig. 2 is used in the remainder of the paper because it simplifies visualization.
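
To make the independent-parameter case concrete, the following minimal Python sketch (illustrative only, not part of the JUMBL tool chain; the parameter names and domains are those of the Fig. 1 example) enumerates all 48 input vectors and selects a small subset in which every arc, i.e. every parameter value, appears in at least one test case.

```python
from itertools import product, zip_longest

# Domains of the three input parameters from the example of Fig. 1 / Fig. 2.
domains = {
    "i1": ["a", "b", "c"],
    "i2": ["d", "e", "f", "g"],
    "i3": ["h", "k", "m", "n"],
}

# Every path from Enter to Exit corresponds to one test vector, so the set of
# all test cases is the Cartesian product of the domains: 3 x 4 x 4 = 48.
all_vectors = list(product(*domains.values()))
assert len(all_vectors) == 48

# A simple arc-coverage selection: pair the domains off position by position,
# padding shorter domains with their first value, so that every parameter
# value occurs in at least one test case (4 cases instead of 48).
coverage_vectors = [
    tuple(v if v is not None else dom[0]
          for v, dom in zip(vec, domains.values()))
    for vec in zip_longest(*domains.values())
]
print(coverage_vectors)
# [('a', 'd', 'h'), ('b', 'e', 'k'), ('c', 'f', 'm'), ('a', 'g', 'n')]
```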

3. Modeling dependency between parameters

3.1 Dependent input parameters

A dependency between two input parameters exists when the value of the first parameter implies some restriction on the value of the second parameter. This means that some input combinations are not allowed, and the total space of input vectors can be partitioned into allowed (valid) and not allowed (invalid) vectors.

Consider a formal definition of a dependency between two parameters. Let x and y be input parameters with domains X and Y; in other words, x ∈ X (x takes values from X) and y ∈ Y (y takes values from Y). We define the dependency between x and y as a one-one function f: Q(X) → C(Y), where:
- Q(X) = {X1, ..., Xk} is a partition of X: a set of non-empty subsets of X such that X1 ∪ ... ∪ Xk = X and Xj ∩ Xm = ∅ for all j, m with 1 ≤ j, m ≤ k and j ≠ m;
- C(Y) = {Y1, ..., Yk} is a cover of Y: a set of subsets of Y (possibly including the empty set) such that Y1 ∪ ... ∪ Yk = Y, with no restriction on intersections;
- Yj = f(Xj), and x ∈ Xj implies y ∈ f(Xj).

It is important to note that C(Y) may contain the empty set: if f(Xj) = ∅ for some j with 1 ≤ j ≤ k, it means that when x ∈ Xj, parameter y is not used. When describing dependencies between parameters, we can use characteristic predicates Pj(x) and Dj(y) instead of the subsets Xj and Yj: Pj(x) = true ⇔ x ∈ Xj, and Dj(y) = true ⇔ y ∈ Yj.

To reflect such an application dependency between input parameters in the Markov chain, we split the node for parameter x into k new nodes (x, P1(x)), ..., (x, Pk(x)), as shown in Fig. 3. Node (x, Pj(x)) represents the fact that the value of parameter x is fixed and this value satisfies predicate Pj(x). Abbreviating the node names (x, Pj(x)) to (x, Pj), there is one input arc to (x, Pj) for every value which satisfies Pj (i.e., for every value from Xj) and one output arc from (x, Pj) for every value which satisfies Dj (i.e., for every value from Yj). With this construction the model again obeys the Markov chain probability law, and the new graph with split nodes can be treated as an ordinary Markov chain.

Fig. 3. Dependency between input parameters

The general idea of splitting states to preserve the Markov chain probability law is not new (see, for example, [14]). Similar approaches have also been used in combinatorial testing. In particular, our approach is close to the conflict-free sub-models approach of [4]. However, there are two important differences:
- In the conflict-free sub-models approach, new split parameters are created for every unique value of the split parameter. In our approach, the number of new states is significantly smaller and equals the number of blocks in the partition and cover used to describe the dependency between input parameters.
- In the conflict-free sub-models approach, a new sub-model is created for each new state and then test cases from the sub-models are combined. We manage split parameters within the framework of the initial model, so only one model is necessary to generate all test cases.

3.2 Classification of dependencies between parameters

In Section 3.1, we formalized a dependency between two parameters. However, one dependency can involve several parameters. Four different types of dependencies are possible:
- dependency one-on-one;
- dependency one-on-many;
- dependency many-on-one;
- dependency many-on-many.

An example of a model for a one-on-one dependency is considered in Section 3.3, and modeling of dependencies for the other three cases is considered in Section 3.4.
The dependency may not only restrict a parameter's values but may also affect its exit arc probability distribution. In other words, the probability that an input parameter takes some specific value may depend on the values of other parameters. This gives an additional classification of dependencies:
- values-on-values, when value ranges depend on values of other parameters (considered in Section 3.1);
- probabilities-on-values, when probabilities depend on values of other parameters.

An example of a model with probability dependencies is considered in Section 3.5.
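
As a concrete illustration of the formal definition in Section 3.1, the following sketch (hypothetical code, not taken from the paper's tooling) represents a values-on-values dependency as a mapping from the blocks of a partition Q(X) to allowed value sets in Y; for concreteness it uses the dependency between i1 and i2 that serves as the example in Section 3.3 below.

```python
# Domains of the two parameters.
X = {"a", "b", "c"}
Y = {"d", "e", "f", "g"}

# f: Q(X) -> C(Y), written here as (block of the partition of X) -> (allowed values of y).
dependency = {
    frozenset({"a", "b"}): {"d", "e"},       # P1: x in {a, b}
    frozenset({"c"}):      {"e", "f", "g"},  # P2: x = c
}

def allowed_y(x):
    """Return f(Xj) for the partition block Xj containing x."""
    for block, y_values in dependency.items():
        if x in block:
            return y_values
    raise ValueError(f"{x!r} is not covered by the partition of X")

def is_valid(x, y):
    """A combination <x, y> is valid iff y lies in f(Xj) for x's block."""
    return y in allowed_y(x)

valid_pairs = [(x, y) for x in sorted(X) for y in sorted(Y) if is_valid(x, y)]
print(valid_pairs)
# [('a','d'), ('a','e'), ('b','d'), ('b','e'), ('c','e'), ('c','f'), ('c','g')]
```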

3.3 Modeling dependency one-on-one

The approach to modeling a one-on-one dependency is described in Section 3.1. As an example, consider its application to the input parameters shown in Fig. 2 and the following dependency between i2 and i1: if i1 equals (i.e., takes the value) a or b, then i2 equals d or e; but if i1 equals c, then i2 equals e, f, or g. Applying the notation from Section 3.1 to this dependency, we have the following:
- I1 = {a, b, c} is the domain of i1;
- I2 = {d, e, f, g} is the domain of i2;
- Q(I1) = {{a, b}, {c}};
- C(I2) = {{d, e}, {e, f, g}};
- f({a, b}) = {d, e}, f({c}) = {e, f, g};
- P1 is the characteristic predicate of the set {a, b};
- P2 is the characteristic predicate of the set {c}.

The application of the splitting technique to this example is shown in Fig. 4. Input parameter i1 is split into two elements: (i1, P1) with values a and b, and (i1, P2) with value c. Now there are no dependencies between (i1, P1) and i2 or between (i1, P2) and i2.

Fig. 4. Model of dependency one-on-one between inputs i1 and i2

3.4 Modeling dependency between one and many parameters

As an example of a one-on-many dependency, consider the model in Fig. 4 (with the dependency between i2 and i1 as above) and add an additional dependency between i3 and various combinations of the values of i1 and i2. We can manage this situation in two steps (illustrated in the sketch at the end of this subsection):
- First, we merge parameters i1 and i2 into one derived parameter i1-i2. The values of this new parameter are the pairs from the Cartesian product I1 x I2, excluding pairs which are not valid because of the dependency between i1 and i2. This merging is similar to the abstract parameter method in combinatorial testing [4].
- Second, we split the new derived parameter according to the dependency (in our example, according to predicates P3 and P4), as done in the previous sections.

The new model is shown in Fig. 5. There are no more dependencies between parameters in the new model, and Markov chain techniques apply.

Fig. 5. Model of dependency one-on-many between input i3 and inputs i1 and i2

The dependency can be described as follows:
- P3 ⇒ i3 ∈ {h, k, m}, where P3 is the characteristic predicate of the set {(a, d), (b, e), (c, f)};
- P4 ⇒ i3 ∈ {n}, where P4 is the characteristic predicate of the set {(a, e), (b, d), (c, e), (c, g)}.

A similar approach can be used for modeling many-on-one and many-on-many dependencies. By merging several parameters into one derived parameter (or two derived parameters for a many-on-many dependency), we reduce the situation to one-on-one and then split the parameter whose values determine the set of ranges for the dependent parameter.
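
The following sketch (illustrative code under the assumptions of the example above, with hypothetical helper names; not the JUMBL implementation) shows the two-step construction: i1 and i2 are merged into the derived parameter i1-i2, keeping only the pairs allowed by the Section 3.3 dependency, and the derived parameter is then split by the predicates P3 and P4 to obtain the allowed values of i3.

```python
from itertools import product

I1, I2 = ["a", "b", "c"], ["d", "e", "f", "g"]

def allowed_i2(v1):
    # Dependency between i2 and i1 from Section 3.3.
    return {"d", "e"} if v1 in {"a", "b"} else {"e", "f", "g"}

# Step 1: merge i1 and i2 into the derived parameter i1-i2, keeping only the
# pairs of the Cartesian product I1 x I2 that are valid under the dependency.
derived = [(v1, v2) for v1, v2 in product(I1, I2) if v2 in allowed_i2(v1)]

# Step 2: split the derived parameter according to predicates P3 and P4;
# each block determines the allowed values of i3.
P3 = {("a", "d"), ("b", "e"), ("c", "f")}              # -> i3 in {h, k, m}
P4 = {("a", "e"), ("b", "d"), ("c", "e"), ("c", "g")}  # -> i3 in {n}

def allowed_i3(pair):
    # Every valid pair belongs to exactly one of the blocks P3 and P4.
    return {"h", "k", "m"} if pair in P3 else {"n"}

# All valid test vectors <i1, i2, i3> of the dependent model.
vectors = [(v1, v2, v3)
           for (v1, v2) in derived
           for v3 in sorted(allowed_i3((v1, v2)))]
print(len(derived), len(vectors))   # 7 valid pairs, 13 valid vectors
```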

3.5 Modeling dependency probabilities-on-values

Dependencies values-on-values determine which combinations of input parameters are valid and which are invalid. In contrast, dependencies probabilities-on-values determine the likelihood of using particular input combinations over the long run. This likelihood is not used for coverage testing but is very important for statistical testing and reliability estimation. Dependencies values-on-values can be considered a special case of dependencies probabilities-on-values in which the probabilities of some values equal zero. Both types of dependencies can be modeled using the same approach.

As an example of a probabilities-on-values dependency, consider two input parameters: i4 with values q and r, and i5 with values s and t (Fig. 6). Let the probabilities of the values of the first parameter be equal: Prob(i4=r) = Prob(i4=q) = 0.5.

Fig. 6. Example of arc probabilities

However, the probabilities of the values of the second parameter depend on the value of the first parameter:
- i4=r ⇒ Prob(i5=s) = 0.3, Prob(i5=t) = 0.7;
- i4=q ⇒ Prob(i5=s) = 0.6, Prob(i5=t) = 0.4.

To model such a dependency, the same technique of splitting parameters as in Fig. 4 can be used (Fig. 7). Parameter i4 is split. However, in contrast to the example in Fig. 4, the arcs from the split states for i4 (Fig. 7) represent the same parameter values but are labeled with different probabilities.

Fig. 7. Model of dependency probabilities-on-values
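
To show how such a probabilities-on-values dependency drives random (statistical) test selection, here is a small sketch under the assumptions of the Fig. 7 example (hypothetical helper names; not the JUMBL implementation): the split states for i4 carry the same parameter values on their exit arcs but different probability labels, and a random walk through the chain yields one test case.

```python
import random

# Arc probabilities for the first parameter.
p_i4 = {"r": 0.5, "q": 0.5}

# Exit-arc probabilities of the split states (i4=r) and (i4=q): the same
# values of i5, but labeled with different probabilities.
p_i5_given_i4 = {
    "r": {"s": 0.3, "t": 0.7},
    "q": {"s": 0.6, "t": 0.4},
}

def random_test_case(rng=random):
    """Walk the chain Enter -> i4 -> split state -> i5 -> Exit once."""
    v4 = rng.choices(list(p_i4), weights=p_i4.values())[0]
    cond = p_i5_given_i4[v4]
    v5 = rng.choices(list(cond), weights=cond.values())[0]
    return {"i4": v4, "i5": v5}

random.seed(1)
print([random_test_case() for _ in range(5)])
```

A values-on-values dependency is the special case in which some of these probability labels are zero, so the same sampling scheme covers both kinds of dependency.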

4. Test case generation based on models of dependencies

After a model involving dependencies is created, tools for analysis and test case generation can be applied. This section describes the results of using the J Usage Model Builder Library (JUMBL) [5, 11]. For this example, the model in Fig. 4 is described for input to the JUMBL using The Model Language (TML) [6, 12], a simple notation for describing Markov chain usage models in terms of states, arcs, and probabilities (Fig. 8).

Fig. 8. TML model

The JUMBL computes model statistics (Fig. 9), including node, stimulus, and arc statistics for occupancy and long-run probability of occurrence for every element of the model. For models of computational applications, node occupancy is the relative frequency of occurrence of the associated parameter over many test cases. Arc occupancy is the relative frequency of occurrence of a particular parameter value. The probability of occurrence for nodes and arcs is the probability that the associated parameter or value will appear in a particular test case. This information can be used to validate the model and plan testing activities.

Fig. 9. Model statistics

The JUMBL provides several test generation methods for coverage and random testing. A separate text file can be generated for every test case (Fig. 10). These files are used as a basis of test automation.

Fig. 10. Example of a generated test case

For coverage testing, the minimum number of test cases visiting every arc in the model can be generated. For the model in Fig. 4, the following five test cases cover all arcs:
- i1=c, i2=f, i3=m
- i1=a, i2=d, i3=h
- i1=b, i2=e, i3=k
- i1=c, i2=g, i3=n
- i1=c, i2=e, i3=m

For computational applications, this ensures that every parameter value occurs in at least one test case. Random test cases are generated based on the probabilities of the model. A report with the overall test case statistics (Fig. 11) and detailed statistics for every element of the model can be generated. Various reliability measures are calculated based on the results of random testing.

Fig. 11. Test case statistics

When a model with dependencies is created, test case generation with the JUMBL becomes a routine process. Our approach is especially useful when the number of input parameters and their values is large, a statistical approach must be taken, and it is laborious to select large numbers of test cases manually.

5. Conclusions

This paper presents an approach to using Markov chain techniques for combinatorial test case selection. The central point of the approach is encoding dependencies between input parameters into a Markov chain model. We consider several different types of dependencies and show how to create a model for each situation. Based on these models, test cases can be automatically generated and executed. Some results of using the JUMBL tool for analyzing models and generating test cases are presented.

Major directions for further work include:
- extending the approach to the case of simultaneous multiple dependencies among input parameters;
- combining models of dependencies with other approaches [8] for evaluation of automated testing coverage;
- application to more complex case studies. Such work is underway for NEWTRNX [1], a program for high-fidelity neutron transport computation being developed at Oak Ridge National Laboratory, and initial results are very promising.

6. References

[1] K. Clarno, V. de Almeida, E. d'Azevedo, C. de Oliveira, S. Hamilton, GNES-R: Global Nuclear Energy Simulator for Reactors Task 1: High-Fidelity Neutron Transport. Proceedings of PHYSOR 2006, American Nuclear Society Topical Meeting on Reactor Physics: Advances in Nuclear Analysis and Simulation, September 10-14, 2006, Vancouver, British Columbia, Canada.
[2] D. M. Cohen, S. R. Dalal, M. L. Fredman, G. C. Patton, The AETG system: An approach to testing based on combinatorial design. IEEE Transactions on Software Engineering, 23(7), July 1997, pp. 437-444.
[3] M. Grindal, J. Offutt, S. F. Andler, Combination testing strategies: A survey. Software Testing, Verification and Reliability, 15(3), September 2005, pp. 167-199.
[4] M. Grindal, J. Offutt, J. Mellin, Handling Constraints in the Input Space when Using Combination Strategies for Software Testing. Technical Report HS-IKI-TR-06-001, 2006-01-27, University of Skövde, Sweden.
[5] S. Prowell, JUMBL: A Tool for Model-Based Statistical Testing. Proceedings of the 36th Annual Hawaii International Conference on System Sciences (HICSS'03), January 6-9, 2003, Big Island, HI, USA.
[6] S. Prowell, TML: A description language for Markov chain usage models. Information and Software Technology, Vol. 42, No. 12, September 2000, pp. 835-844.
[7] W. T. Swain, S. L. Scott, Model-Based Statistical Testing of a Cluster Utility. Proceedings of the 5th International Conference on Computational Science (ICCS 2005), Atlanta, GA, USA, May 22-25, 2005, Part I, Lecture Notes in Computer Science 3514, pp. 443-450.
[8] S. Vilkomir, P. Tips, D. L. Parnas, J. Monahan, T. O'Connor, Evaluation of Automated Testing Coverage: a Case Study of Wireless Secure Connection Software Testing. Supplementary Proceedings of the 16th IEEE International Symposium on Software Reliability Engineering (ISSRE 2005), November 8-11, 2005, Chicago, USA, pp. 3.123-3.134.
[9] J. Poore, C. Trammell, Engineering practices for statistical testing. CrossTalk: The Journal of Defense Software Engineering, April 1998, pp. 24-28.
[10] K. Sayre, J. Poore, Automated Testing of Generic Computational Science Libraries. Proceedings of the 40th Hawaii International Conference on System Sciences (HICSS-40), January 3-6, 2007, Waikoloa, Big Island, HI, USA. IEEE Computer Society, 2007.
[11] SQRL, University of Tennessee, About the JUMBL, http://www.cs.utk.edu/sqrl/esp/jumbl.html
[12] SQRL, University of Tennessee, TML, http://www.cs.utk.edu/sqrl/esp/tml.html
[13] J. Whittaker, J. Poore, Markov Analysis of Software Specifications. ACM Transactions on Software Engineering and Methodology, Vol. 2, No. 1, January 1993, pp. 93-106.
[14] J. Whittaker, Stochastic software testing. Annals of Software Engineering, No. 4, January 1997, pp. 115-131.