Using Model-Checkers for Mutation-Based Test-Case Generation, Coverage Analysis and Specification Analysis
Gordon Fraser and Franz Wotawa
Institute for Software Technology, Graz University of Technology
Inffeldgasse 16b/2, A-8010 Graz, Austria

Abstract

Automated software testing is an important measure to improve software quality and the efficiency of the software development process. We present a model-checker based approach to automated test-case generation applying mutation to behavioral models and requirements specifications. Unlike previous related approaches, the requirements specification is at the center of this process. A property coverage criterion is used to show that resulting test-cases sufficiently exercise all aspects of the specification. A test-suite derived from the specification can only be as good as the specification itself. We demonstrate that analysis of the test-case generation process reveals important details about the specification, such as vacuity and how much of the model it covers, without requiring additional costly computations.

1. Introduction

Testing is essential in order to achieve sufficiently high software quality. The complexity of both software and the testing process itself yields the desire for automation. One way to address this issue is model-based testing, where test-cases are used to determine whether an implementation is a refinement of an abstract model. Many model-checker based test-case generation methods adhere to this idea. On the other hand, if a requirements specification is available, then testing should concentrate on showing that the implementation is correct with regard to the specification.
Centering the testing process around the specification, a resulting test-suite can only be as good as the specification itself. If some behavior is not covered by the specification, then faults within the uncovered parts can escape the test-suite. It is therefore essential to ensure a sufficiently high quality of the specification in addition to the quality of the implementation. In this paper, we present a model-checker based approach integrating automated test-case generation and specification analysis. This approach is based on mutation of an abstract model and the specification. It is shown that analysis of the mutants that are used for test-case generation can reveal interesting information about the specification: How thoroughly is the specification tested? How much of the possible behavior is covered by the specification? Does the specification lead to vacuous satisfaction?

This work has been supported by the FIT-IT research project "Systematic test case generation for safety-critical distributed embedded real time systems with different safety integrity levels" (TeDES); the project is carried out in cooperation with Vienna University of Technology, Magna Steyr and TTTech.

This paper is organized as follows: Section 2 introduces test-case generation with model-checkers and outlines our approach. Section 3 describes different approaches to coverage measurement of the resulting test-suites. Then, Section 4 elaborates on insights on specification completeness and quality that can be gained from an analysis of the mutants. Section 5 lists empirical results. Finally, Section 6 concludes the paper with a discussion of the results.

2. Automated Test-Case Generation with Model-Checkers

Model-checker based approaches to automated testing have been shown to be promising (e.g., [14]). In this section, we recall the principles of test-case generation with model-checkers and describe our chosen approach.
In general, a model-checker takes as input a state machine based model of a system and a property. It effectively
explores the state space of the model to verify whether the property holds in all states. If a property violation is encountered, a trace (counter-example) is calculated that consists of states leading from the initial state to a state violating the property. Such a trace can be used as a complete test-case, i.e., it consists of both the data to send to an implementation under test (IUT) and the expected results. In order to systematically exploit the trace generation mechanism for test-case generation, the model-checker needs to be challenged in appropriate ways. There are two main principles to accomplish this. One is based on the use of coverage criteria, not for measuring coverage but for test-case generation: a coverage criterion can be used to automatically create a test-suite by creating a special property (trap property) for each coverable item. Trap properties are inconsistent with a correct model, such that the resulting counter-example leads to a state that covers a certain coverage item. Gargantini and Heitmeyer [12] describe an approach where trap properties are created to achieve branch coverage of an SCR specification. Related approaches are presented in [24, 13, 9]. Alternatively, traces can be generated using mutation. A mutation describes a syntactic change that may introduce inconsistency, and can be applied to the model or specification [3, 4, 23], or to properties extracted from the model [7], in order to create test-cases. Unlike for program mutants, equivalence of model mutants can be decided efficiently. Our approach applies mutation to both the textual description of the behavioral model (e.g., its SMV source) and its consistent requirements specification. Mutations of the model potentially introduce erroneous behavior. If this behavior is inconsistent with the allowed behavior defined by the specification, it can be detected by running the model-checker with the mutant model and the specification.
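As an illustration of how a single mutation operator is applied systematically, the following Python sketch implements a hypothetical relational-operator-replacement operator over a textual SMV-style expression. Each resulting mutant would then be model-checked against the specification; the operator set and the plain-text matching are simplifying assumptions, not the implementation of the prototype described here.

```python
import re

def relational_operator_mutants(expr):
    # Generate one mutant per (position, replacement operator) pair.
    # The operator set is a simplifying assumption of this sketch.
    operators = ["<=", ">=", "<", ">", "="]
    mutants = []
    for match in re.finditer(r"<=|>=|<|>|=", expr):
        for op in operators:
            if op != match.group():
                mutants.append(expr[:match.start()] + op + expr[match.end():])
    return mutants

mutants = relational_operator_mutants("level < low")
# -> ["level <= low", "level >= low", "level > low", "level = low"]
```

Applying every operator at every syntactically valid position in this fashion yields the full set of model (or specification) mutants.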
A resulting counter-example illustrates how an erroneous implementation would behave; a correct IUT is therefore expected to behave differently when a resulting test-case is executed on it. Such traces can be used as negative test-cases, i.e., a fault is detected if an IUT behaves identically. If a mutant does not violate the specification, no trace is returned. Note that the consistent mutant might still contain errors; the specification is merely too weak to detect them. Mutation of the specification allows an exploration of all aspects of its properties. Altering or removing parts of a specification can enforce behavior that is not fulfillable by the model, and can be used to produce traces. Some mutations might have no effect on the verification and do not contribute to the test-case generation. Nevertheless, valuable insights on the specification can be drawn from such cases. Test-cases created from model-checking property mutants against the original model illustrate how the correct model behaves. When executed on an actual IUT, the IUT is expected to behave as described by the trace; therefore such test-cases are termed positive test-cases. Mutation is systematically applied by defining a set of mutation operators (refer to [8] for more details about mutation operators for models and specifications). A mutation operator describes a simple syntactic change. Every operator is applied to all possible parts of the model or specification, each application resulting in a mutant. The model mutants are then checked against the original specification, while the specification mutants are checked against the original model. A set of positive and negative test-cases can be extracted from the resulting traces. There may be redundant (e.g., identical or subsumed) test-cases, so post-processing of the test-suite is recommended. Our prototype implementation has shown that the performance of this approach is acceptable if the model is verifiable in realistic time.
Due to recent abstraction techniques we think that this is a valid assumption. The quality of the resulting test-cases is good, although it clearly depends on the quality of the requirements specification. A test-suite covers, through negative test-cases, all possible faults that can lead to a property violation. In addition, it is ensured that all parts of the specification are tested through positive test-cases. However, in order to judge the effective test-suite quality, it is necessary to analyze how much of the model it covers, and thus how good the specification is.

3. Coverage Analysis

In general, one of the motivations for automated software test selection is that formal verification or exhaustive testing is not feasible. Due to the finite selection of representative test cases, testing cannot guarantee the absence of errors. Therefore, it is of interest to assess the quality and completeness of a test-suite. In general, coverage criteria are defined to measure quality. Code coverage criteria determine how much of certain aspects of the source code (e.g., statements, branches, etc.) is covered, and are measured during test-case execution. Coverage can also be measured with respect to abstract models or specifications.

3.1. Model Coverage

As proposed by Ammann et al. [1], trap properties, as described in Section 2 in the context of test-case generation, can also be used to measure the coverage of a test-suite with the aid of a model-checker. The test-cases are converted to models suitable as inputs for a model-checker by adding a special variable State which is successively incremented. For each test-case one corresponding model is created. The values of all other variables are determined only by the value of State. Due to space limitations we refer to Ammann et al. [1] for a more detailed description of this process.
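A minimal sketch of this conversion, assuming a counter-example represented as a list of per-step variable assignments; the generated SMV text follows our reading of Ammann et al. [1] and is illustrative rather than the exact output of any tool:

```python
def trace_to_smv(trace):
    # trace: one {variable: value} dict per step of the counter-example.
    # Every variable is driven purely by the added State counter.
    last = len(trace) - 1
    variables = sorted(trace[0])
    lines = ["MODULE main", "VAR", f"  State : 0..{last};"]
    for v in variables:
        values = sorted({str(step[v]) for step in trace})
        lines.append(f"  {v} : {{{', '.join(values)}}};")
    lines += ["ASSIGN",
              "  init(State) := 0;",
              "  next(State) := case",
              f"    State < {last} : State + 1;",
              "    TRUE : State;",
              "  esac;"]
    for v in variables:
        lines.append(f"  {v} := case")
        for i, step in enumerate(trace):
            lines.append(f"    State = {i} : {step[v]};")
        lines.append("  esac;")
    return "\n".join(lines)

model = trace_to_smv([{"request": 0}, {"request": 1}])
```

Checking such a model against a trap property is then an ordinary model-checker run over a very small state space.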
When checking such a model against a trap property, the item represented by the property is covered if a counter-example is returned. As the state space of the test-case model is small compared to the original model, this is an efficient process. The overall coverage of a test-suite is evaluated by checking all test-cases converted to models against the trap properties derived from the coverage criterion.

3.2. Mutation Score

The number of mutants a test-suite can distinguish from the original is given as the mutation score. For mutants of properties (reflected transitions, property mutants), the method presented in the previous section can also be used to determine the mutation score. It is simply the number of mutant properties identified (killed), i.e., resulting in a counter-example when checked against a test-case model, divided by the total number of mutants. The interpretation of checking a negative test-case model against a mutant property is not clear, as it is influenced by behaviors the real model does not exhibit. Furthermore, when creating test-cases by checking against the requirements specification, mutation is applied to the original model and not to reflected properties. In this case, test-cases need to be symbolically executed. A test-case model is combined with the system model against which the test-case should be executed (e.g., a mutant model) by defining the mutant model as a sub-module of the test-case, which can easily be done automatically. In the sub-module, input variables are changed to parameters. The sub-module is instantiated in the test-case model, and by using the input variables as parameters it is ensured that the mutant model uses the inputs provided by the test-case. Execution is simulated by adding a property for each output variable that requires the outputs of the mutant model and the test-case to be equal.
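The combination step can be sketched as follows; the module name mutant, the State bound, and the plain-text concatenation are assumptions of this sketch, not the prototype's actual code:

```python
def embed_mutant(testcase_model, inputs, outputs, length):
    # Instantiate the mutant as a sub-module of the test-case model, feeding
    # it the test-case's input variables as parameters, and add one property
    # per output variable demanding equality with the test-case's outputs
    # while the test-case is still running (State below its final value).
    instance = f"  iut : mutant({', '.join(inputs)});"
    specs = [f"SPEC AG (State < {length} -> {out} = iut.{out})"
             for out in outputs]
    return testcase_model + "\nVAR\n" + instance + "\n" + "\n".join(specs)

combined = embed_mutant("MODULE main ...", ["request"], ["ack"], 3)
```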
This equality is only required for the duration of the test-case, as the tested model might still change values beyond the end of the test-case. When challenged with this combination, a model-checker returns a trace that shows how the mutant's behavior differs from the test-case using the input values of the test-case. If the mutant is not detected, then the model-checker claims consistency. The mutation score equals the number of mutants identified divided by the total number of mutants.

3.3. Property Coverage

It is also interesting to measure the extent to which a test-suite covers the specification. A property coverage criterion for temporal logic formulas was introduced by Tan et al. [21]. In their definition, a test-case that covers a property should not pass on any model that does not satisfy this property. This coverage criterion is based on ideas of vacuity analysis [5]. Informally, a property is satisfied vacuously by a model if part of the formula can be replaced without having an effect on the satisfaction. Tan et al. [21] suggest a method to generate test-suites fulfilling this coverage criterion using special property mutants. This method is subsumed by our property-mutation approach by including an appropriate mutation operator that replaces atomic propositions with true or false. We now introduce a method to measure property coverage. The test-cases are converted to models as described in Section 3.1. In order to determine the property coverage, trap properties are required for each atomic proposition of every property. These trap properties are generated by replacing an occurrence of a sub-formula with true or false, depending on its polarity [20]. The property is altered so that it is checked only within the length of the test-case, and assumed to be true beyond it. This can simply be done by extending the property with an implication on the state number, which is an explicit variable in a test-case model.
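The generation of these trap properties can be sketched on a small formula representation; the tuple encoding, the minimal operator set, and the AG-shaped bounding are simplifying assumptions of this sketch:

```python
def subformula_mutants(formula):
    # formula: nested tuples, e.g. ("AG", ("->", ("atom", "c1"),
    #                                           ("AF", ("atom", "c2")))).
    # Each atomic proposition is replaced by FALSE in a position of
    # positive polarity and by TRUE in a negative one; negation and the
    # left-hand side of an implication flip the polarity.
    results = []

    def walk(node, positive, rebuild):
        op = node[0]
        if op == "atom":
            results.append(rebuild(("atom", "FALSE" if positive else "TRUE")))
        elif op == "!":
            walk(node[1], not positive, lambda s: rebuild(("!", s)))
        elif op == "->":
            left, right = node[1], node[2]
            walk(left, not positive, lambda s: rebuild(("->", s, right)))
            walk(right, positive, lambda s: rebuild(("->", left, s)))
        else:  # unary temporal operators such as AG, AF, EF
            walk(node[1], positive, lambda s: rebuild((op, s)))

    walk(formula, True, lambda f: f)
    return results

def bounded(prop_text, length):
    # Restrict the check to the states of the test-case via an implication
    # on the explicit State variable (sketched for AG-shaped properties).
    return f"AG (State < {length} -> ({prop_text}))"

prop = ("AG", ("->", ("atom", "c1"), ("AF", ("atom", "c2"))))
traps = subformula_mutants(prop)
# Two mutants: one with c1 replaced by TRUE, one with c2 replaced by FALSE.
```

The same mutants double as witness formulas for vacuity detection: a sub-formula is satisfied vacuously iff both the original property and its mutant hold on the model.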
Property coverage of a test-suite can be determined by converting test-cases to models as described in Section 3.1, and then checking these against trap properties for all sub-formulas of a property. A sub-formula is covered if the corresponding trap property results in a counter-example. Negative test-cases need to be converted to positive test-cases first, as they by definition violate the specification. This conversion is easily done by symbolic execution against the correct model, similarly to the method in Section 3.2. A positive test-case is created by extracting the outputs of the correct model during the execution. An important observation is that a test-suite can only cover those sub-formulas that are not vacuously satisfied. If a sub-formula is satisfied vacuously, there exists no execution path usable as a test-case for it. Clearly, vacuity is a problem within the specification. The following section therefore takes a closer look at this problem.

4. Specification Analysis

Test-cases generated from only an abstract model test whether an implementation conforms to the model. Including the requirements specification in the test-case generation process makes it possible to create test-cases that determine whether the correct behavior has been implemented. However, the quality of a test-suite created this way depends on the completeness of the specification. Test-cases are only created for those parts that are covered by the specification. Errors in the implementation that are outside the scope of the specification might go undetected. It is therefore of interest to show which behaviors are covered by the specification. In addition, it is useful to identify parts of the specification that have no effect on an implementation (i.e., vacuous satisfaction).
4.1. Specification Vacuity

Consider the following example CTL property: AG(c1 -> AF c2). This property states that on all paths, at all times, if c1 occurs, c2 must eventually be true. If a model satisfies this property, this can be either because c2 really does become true in some future state after c1, or because c1 never occurs. If c1 is never true, then the property is vacuously satisfied by the model. A vacuous pass of a property is an indication of a problem in either the model or the property. While this simple example could easily be rewritten to be valid only if c1 really occurs, this is not as easy in the general case. The detection of vacuous passes has recently been considered by several researchers [5, 20]. Vacuity can be detected with the help of witness formulas, where occurrences of sub-formulas are replaced with true or false, depending on their polarity, similarly to Section 3.3. A sub-formula is satisfied vacuously iff the formula and its witness formula are both satisfied by a model. Witness formulas can be created with property mutation. Therefore, specification vacuity can be detected by analyzing the test-case generation from the relevant mutants. Property coverage and specification vacuity are complementary: our method creates test-cases for all properties that are not vacuously satisfied, and the total number of atomic propositions comprises the number of covered propositions and the number of vacuously satisfied propositions.

4.2. Specification Coverage

A state of a model or implementation is covered by a test-case if it is passed during execution. Similarly, it is of interest to know how much of a model is covered by its specification. However, when verifying a specification, a model-checker visits all reachable states. Hence, a different notion of coverage is necessary. In general, a part of a model is considered to be covered by a specification if it contributes to the result of the verification.
Two different approaches to specification coverage based on FSMs were originally introduced: Katz et al. [18] describe a method that compares the FSM and a reduced tableau for the specification. In contrast, Hoskote et al. [15] apply mutations to the FSM and analyze the effect these changes have on the satisfaction of the specification. This idea is extended by Chockler et al. [10] and Jayakumar et al. [17]. These approaches are based on mutations that invert the values of observed signals in states of an FSM. A different approach that modifies the model-checker in order to determine coverage during verification is presented in [25]. The general idea of mutation-based coverage measurement approaches is to apply mutations (invert signals) to the FSM and observe the effects. If a signal inversion has no effect on the outcome of the verification, then this signal is not covered in the state where it is inverted. The number of signals and states of the FSM can be very large; therefore, specialized algorithms are required to determine the coverage. In contrast, we define a coverage criterion for the SMV description of a model. This allows the coverage of the specification to be determined through an analysis of the results of the test-case generation process, without additional computational costs. In order to determine the model coverage on top of the test-case generation without further costly computations, we choose to measure conditions, i.e., atomic propositions. A condition is covered by a property if negation of the condition leads to a different outcome of the verification. Negation of conditions is a typical mutation operation, and is therefore already included in the test-case generation. The coverage of a property is determined by looking at the results of model-checking each of the model mutants resulting from application of this operator against the property. If a mutant yields a counter-example, then the condition negated in this mutant is covered.
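Determining condition coverage from the test-case generation results then reduces to bookkeeping over verification outcomes that are already available; the result encoding below is a hypothetical sketch:

```python
def specification_coverage(results):
    # results: for each property, the outcome of model-checking every
    # condition-negation mutant against it; True means a counter-example
    # was produced, i.e., the negated condition is covered.
    covered = set()
    for per_property in results.values():
        covered |= {cond for cond, killed in per_property.items() if killed}
    return covered

results = {  # hypothetical verification outcomes for two properties
    "P1": {"c1": True, "c2": False},
    "P2": {"c2": True, "c3": False},
}
coverage = specification_coverage(results)
# coverage == {"c1", "c2"}; condition c3 is covered by no property.
```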
The overall coverage of a specification is simply the union of the coverage of all properties contained in the specification. Compared to FSM-based criteria, the interpretation of condition coverage is more intuitive. Results of the coverage analysis can be directly presented to the developer, who can easily see which part of the model needs more attention.

5. Empirical Results

The presented test-case generation approach has been implemented in a prototype. This section presents results of the described test-case generation and analyses.

Table 1. Example Models and Mutation Results (for each example: size in BDD variables and nodes, number of properties, and valid/total mutants of the model and the specification)

Our prototype is based on the input language of the model-checker NuSMV [11] for the model description, and Computation Tree Logic (CTL) for example properties.
Mutation operators for both are essentially identical. The following mutation operators were used for models and specification (see [8] for details): StuckAt (replace atomic propositions with true/false), ExpressionNegation (negate atomic propositions), Remove (remove atomic propositions), LogicalOperatorReplacement, RelationalOperatorReplacement, ArithmeticOperatorReplacement, NumberMutation (replace numbers with border cases). All the generations, modifications and instrumentations described in this paper are performed automatically by the prototype. All it needs as inputs are a model and a specification. Table 1 lists statistics (BDD encoding, number of properties) of the example models. Examples 1-5 are simple models of a car control at varying levels of complexity. The Safety Injection System (SIS) example was introduced in [6] and has since been used frequently for studying automated test-case generation. The Cruise Control model is based on [19] and has also been used several times for automated testing. TCAS is an aircraft collision avoidance system originating from Siemens Corporate Research [16], later modified by Rothermel and Harrold [26]. Finally, Wiper is the software of an actual windscreen wiper controller provided by Magna Steyr. Due to space limitations the examples cannot be presented in more detail here.

Table 2. Test-Cases (for each example: unique/total number and average length of negative and positive test-cases)

Table 3. Comparison of Code- and Model-Coverage of Test-Cases and Specification

                 Test Coverage            Specification
Example      DC      Trans.   Prop.    Coverage   Vacuity
Example1     100%    100%     100%     100%       0%
Example2     100%    100%     100%     67%        0%
Example3     80%     100%     84%      75%        16%
Example4     93%     91%      70%      60%        30%
Example5     92%     99%      65%      50%        35%
SIS          -       97%      64%      92%        36%
Cruise C.    72%     70%      45%      100%       55%
TCAS         87%     100%     81%      100%       19%
Wiper        92%     100%     71%      67%        29%
Table 1 also contains details about the number of mutants created from each model and its specification. A mutant is considered valid if it is syntactically correct and inconsistent, such that a test-case can be extracted from it. Table 2 shows the results of the test-case generation. The number of unique test-cases is derived by removing duplicate or subsumed test-cases. Table 3 lists the results of the analysis. DC (decision coverage) is branch coverage determined during execution of the test-cases on implementations, where available, and was measured with the Bullseye Coverage Tool. Transition coverage is measured on the model using the method described in Section 3.1, with a trap property for each transition. Property coverage is determined by analysis of the result of the property mutation, as are specification coverage and vacuity. Mutation score is not listed, as each inconsistent mutant results in one or more test-cases, and therefore all inconsistent mutants are detected with this approach. As can be seen, the coverage of the resulting test-suites is related to the specification vacuity. For example, Cruise Control has high vacuity and therefore poor test coverage.

6. Conclusion

In this paper, we have presented an integrated approach to automated test-case generation and analysis focusing on the requirements specification. Model mutants are used to create test-cases that check for possible errors in the implementation that violate the specification. Specification mutants lead to test-cases that explore all aspects of the specification properties. It is sometimes argued that a sufficient formal requirements specification might not be available. However, there are many conceivable scenarios where detailed specifications are necessary. E.g., safety-related software for automotive or avionic systems is subject to rigorous standards that enforce detailed specifications.
In addition to the test-case generation, we have described several possible analyses of the resulting test-suites. We have extended previous approaches to apply to our method, and have shown that an analysis of which mutants resulted in test-cases can be used to gain additional insights, i.e., property coverage of test-cases, model coverage of the specification, and specification vacuity. This makes it possible to judge the quality and completeness of the specification. Even though experiments indicate the feasibility of the presented approach, performance problems are likely for bigger models. While all mutations and transformations described in this paper can be done efficiently, the performance bottleneck is clearly the model-checker. Previous case studies (e.g., [14, 2]) show that model-checker based testing scales well even for real-world examples. However, such experiments commonly use models without numeric variables. Therefore, as agreed by other researchers [22], abstraction might be necessary.

References

[1] P. Ammann and P. E. Black. A Specification-Based Coverage Metric to Evaluate Test Sets. In HASE. IEEE Computer Society.
[2] P. Ammann, P. E. Black, and W. Ding. Model Checkers in Software Testing. Technical report, National Institute of Standards and Technology.
[3] P. Ammann, P. E. Black, and W. Majurski. Using Model Checking to Generate Tests from Specifications. In ICFEM.
[4] P. Ammann, W. Ding, and D. Xu. Using a Model Checker to Test Safety Properties. In Proceedings of the 7th International Conference on Engineering of Complex Computer Systems.
[5] I. Beer, S. Ben-David, C. Eisner, and Y. Rodeh. Efficient detection of vacuity in ACTL formulas. In Proceedings of the 9th International Conference on Computer Aided Verification, London, UK. Springer-Verlag.
[6] R. Bharadwaj and C. L. Heitmeyer. Model Checking Complete Requirements Specifications Using Abstraction. Automated Software Engineering, 6(1):37-68.
[7] P. E. Black. Modeling and Marshaling: Making Tests From Model Checker Counterexamples. In Proceedings of the 19th Digital Avionics Systems Conference (DASC), volume 1, pages 1B3/1-1B3/6.
[8] P. E. Black, V. Okun, and Y. Yesha. Mutation Operators for Specifications. In Proceedings of the 15th IEEE International Conference on Automated Software Engineering.
[9] J. Callahan, F. Schneider, and S. Easterbrook. Automated Software Testing Using Model-Checking. In Proceedings of the 1996 SPIN Workshop, August 1996. Also WVU Technical Report NASA-IVV.
[10] H. Chockler, O. Kupferman, and M. Y. Vardi. Coverage metrics for temporal logic model checking.
In TACAS 2001: Proceedings of the 7th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, London, UK, 2001. Springer-Verlag.
[11] A. Cimatti, E. M. Clarke, F. Giunchiglia, and M. Roveri. NuSMV: A New Symbolic Model Verifier. In CAV '99: Proceedings of the 11th International Conference on Computer Aided Verification, London, UK, 1999. Springer-Verlag.
[12] A. Gargantini and C. Heitmeyer. Using Model Checking to Generate Tests From Requirements Specifications. In 7th European Software Engineering Conference, Held Jointly with the 7th ACM SIGSOFT Symposium on the Foundations of Software Engineering, volume 1687. Springer-Verlag.
[13] G. Hamon, L. de Moura, and J. Rushby. Automated Test Generation with SAL. Technical report, Computer Science Laboratory, SRI International.
[14] M. P. Heimdahl, S. Rayadurgam, W. Visser, G. Devaraj, and J. Gao. Auto-Generating Test Sequences using Model Checkers: A Case Study. In Third International Workshop on Formal Approaches to Software Testing, volume 2931 of Lecture Notes in Computer Science. Springer-Verlag.
[15] Y. Hoskote, T. Kam, P.-H. Ho, and X. Zhao. Coverage estimation for symbolic model checking. In Proceedings of the 36th ACM/IEEE Conference on Design Automation, New York, NY, USA. ACM Press.
[16] M. Hutchins, H. Foster, T. Goradia, and T. J. Ostrand. Experiments on the Effectiveness of Dataflow- and Controlflow-Based Test Adequacy Criteria. In ICSE.
[17] N. Jayakumar, M. Purandare, and F. Somenzi. Dos and don'ts of CTL state coverage estimation. In DAC '03: Proceedings of the 40th Conference on Design Automation, New York, NY, USA, 2003. ACM Press.
[18] S. Katz, O. Grumberg, and D. Geist. Have I written enough Properties? A Method of Comparison between Specification and Implementation. In Proceedings of the 10th IFIP WG 10.5 Advanced Research Working Conference on Correct Hardware Design and Verification Methods, London, UK. Springer-Verlag.
[19] J. Kirby. Example NRL/SCR Software Requirements for an Automobile Cruise Control and Monitoring System. Technical Report TR-87-07, Wang Institute of Graduate Studies.
[20] O. Kupferman and M. Y. Vardi. Vacuity detection in temporal model checking. In Proceedings of the 10th IFIP WG 10.5 Advanced Research Working Conference on Correct Hardware Design and Verification Methods, pages 82-96, London, UK.
[21] L. Tan, O. Sokolsky, and I. Lee. Specification-based testing with linear temporal logic. In Proceedings of the IEEE International Conference on Information Reuse and Integration.
[22] V. Okun and P. E. Black. Issues in Software Testing with Model Checkers. In Proceedings of the International Conference on Dependable Systems and Networks (DSN-2003), June 2003.
[23] V. Okun, P. E. Black, and Y. Yesha. Testing with Model Checker: Insuring Fault Visibility. In Proceedings of the 2002 WSEAS International Conference on System Science, Applied Mathematics & Computer Science, and Power Engineering Systems.
[24] S. Rayadurgam and M. P. E. Heimdahl. Coverage Based Test-Case Generation Using Model Checkers. In Proceedings of the 8th Annual IEEE International Conference and Workshop on the Engineering of Computer Based Systems. IEEE Computer Society.
[25] E. Rodríguez, M. B. Dwyer, J. Hatcliff, and Robby. A flexible framework for the estimation of coverage metrics in explicit state software model checking. In Proceedings of the International Workshop on Construction and Analysis of Safe, Secure and Interoperable Smart Devices.
[26] G. Rothermel and M. J. Harrold. Empirical Studies of a Safe Regression Test Selection Technique. IEEE Trans. Software Eng., 24(6), 1998.
More informationGenerating minimal fault detecting test suites for Boolean expressions
Generating minimal fault detecting test suites for Boolean expressions Gordon Fraser Software Engineering Chair Saarland University, Saarbrücken, Germany E-mail: fraser@cs.uni-saarland.de Angelo Gargantini
More informationLecture 2: Symbolic Model Checking With SAT
Lecture 2: Symbolic Model Checking With SAT Edmund M. Clarke, Jr. School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 (Joint work over several years with: A. Biere, A. Cimatti, Y.
More informationSystem safety property-oriented test sequences generating method based on model checking
Computers in Railways XII 747 System safety property-oriented test sequences generating method based on model checking Y. Zhang 1, X. Q. Zhao 1, W. Zheng 2 & T. Tang 1 1 State Key Laboratory of Rail Traffic
More informationSystem Correctness. EEC 421/521: Software Engineering. System Correctness. The Problem at Hand. A system is correct when it meets its requirements
System Correctness EEC 421/521: Software Engineering A Whirlwind Intro to Software Model Checking A system is correct when it meets its requirements a design without requirements cannot be right or wrong,
More informationCoverage Metrics for Formal Verification
Coverage Metrics for Formal Verification Hana Chockler, Orna Kupferman, and Moshe Y. Vardi Hebrew University, School of Engineering and Computer Science, Jerusalem 91904, Israel Email: hanac,orna @cs.huji.ac.il,
More informationOn the Role of Formal Methods in Software Certification: An Experience Report
Electronic Notes in Theoretical Computer Science 238 (2009) 3 9 www.elsevier.com/locate/entcs On the Role of Formal Methods in Software Certification: An Experience Report Constance L. Heitmeyer 1,2 Naval
More informationCombining Complementary Formal Verification Strategies to Improve Performance and Accuracy
Combining Complementary Formal Verification Strategies to Improve Performance and Accuracy David Owen June 15, 2007 2 Overview Four Key Ideas A Typical Formal Verification Strategy Complementary Verification
More informationModel Checking DSL-Generated C Source Code
Model Checking DSL-Generated C Source Code Martin Sulzmann and Axel Zechner Informatik Consulting Systems AG, Germany {martin.sulzmann,axel.zechner}@ics-ag.de Abstract. We report on the application of
More informationA Verification Method for Software Safety Requirement by Combining Model Checking and FTA Congcong Chen1,a, Fuping Zeng1,b, Minyan Lu1,c
International Industrial Informatics and Computer Engineering Conference (IIICEC 2015) A Verification Method for Software Safety Requirement by Combining Model Checking and FTA Congcong Chen1,a, Fuping
More informationLecture1: Symbolic Model Checking with BDDs. Edmund M. Clarke, Jr. Computer Science Department Carnegie Mellon University Pittsburgh, PA 15213
Lecture: Symbolic Model Checking with BDDs Edmund M Clarke, Jr Computer Science Department Carnegie Mellon University Pittsburgh, PA 523 Temporal Logic Model Checking Specification Language: A propositional
More informationPart 5. Verification and Validation
Software Engineering Part 5. Verification and Validation - Verification and Validation - Software Testing Ver. 1.7 This lecture note is based on materials from Ian Sommerville 2006. Anyone can use this
More informationA Novel Approach for Software Property Validation
A Novel Approach for Software Property Validation Salamah Salamah Department of Computer and Software Engineering, Embry-Riddle Aeronautical University, salamahs@erau.edu. Irbis Gallegos, Omar Ochoa Computer
More informationFormal Verification of Control Software: A Case Study
Formal Verification of Control Software: A Case Study Andreas Griesmayer 1, Roderick Bloem 1, Martin Hautzendorfer 2, and Franz Wotawa 1 1 Graz University of Technology, Austria {agriesma,rbloem,fwotawa}@ist.tu-graz.ac.at
More informationUsing Model Checking to Generate Tests from Requirement Specifications
Using Model Checking to Generate Tests from Requirement Authors: A. Gargantini & C. Heitmeyer Presented By Dishant Langayan Overview Automate construction of test sequences from a SCR requirements specification
More informationVerification and Validation. Ian Sommerville 2004 Software Engineering, 7th edition. Chapter 22 Slide 1
Verification and Validation 1 Objectives To introduce software verification and validation and to discuss the distinction between them To describe the program inspection process and its role in V & V To
More informationCS 520 Theory and Practice of Software Engineering Fall 2018
Today CS 52 Theory and Practice of Software Engineering Fall 218 Software testing October 11, 218 Introduction to software testing Blackbox vs. whitebox testing Unit testing (vs. integration vs. system
More informationCoverage Criteria for Model-Based Testing using Property Patterns
Coverage Criteria for Model-Based Testing using Property Patterns Kalou Cabrera Castillos 1, Frédéric Dadeau 2, Jacques Julliand 2 1 LAAS Toulouse, France 2 FEMTO-ST Besançon, France MBT workshop April
More informationWhite Box Testing III
White Box Testing III Outline Today we continue our look at white box testing methods, with mutation testing We will look at : definition and role of mutation testing what is a mutation? how is mutation
More informationModel-Checking plus Testing: from Software Architecture Analysis to Code Testing
Model-Checking plus Testing: from Software Architecture Analysis to Code Testing A. Bucchiarone 1, H. Muccini 2, P. Pelliccione 2, and P. Pierini 1 1 Siemens C.N.X. S.p.A., R. & D. Strada Statale 17, L
More informationIntroduction to Dynamic Analysis
Introduction to Dynamic Analysis Reading assignment Gary T. Leavens, Yoonsik Cheon, "Design by Contract with JML," draft paper, http://www.eecs.ucf.edu/~leavens/jml//jmldbc.pdf G. Kudrjavets, N. Nagappan,
More informationIan Sommerville 2006 Software Engineering, 8th edition. Chapter 22 Slide 1
Verification and Validation Slide 1 Objectives To introduce software verification and validation and to discuss the distinction between them To describe the program inspection process and its role in V
More informationIntroduction to Software Testing Chapter 5.1 Syntax-based Testing
Introduction to Software Testing Chapter 5.1 Syntax-based Testing Paul Ammann & Jeff Offutt http://www.cs.gmu.edu/~offutt/ softwaretest/ Ch. 5 : Syntax Coverage Four Structures for Modeling Software Graphs
More informationBRANCH COVERAGE BASED TEST CASE PRIORITIZATION
BRANCH COVERAGE BASED TEST CASE PRIORITIZATION Arnaldo Marulitua Sinaga Department of Informatics, Faculty of Electronics and Informatics Engineering, Institut Teknologi Del, District Toba Samosir (Tobasa),
More informationModel Checking: Back and Forth Between Hardware and Software
Model Checking: Back and Forth Between Hardware and Software Edmund Clarke 1, Anubhav Gupta 1, Himanshu Jain 1, and Helmut Veith 2 1 School of Computer Science, Carnegie Mellon University {emc, anubhav,
More informationFinite State Verification. CSCE Lecture 21-03/28/2017
Finite State Verification CSCE 747 - Lecture 21-03/28/2017 So, You Want to Perform Verification... You have a property that you want your program to obey. Great! Let s write some tests! Does testing guarantee
More informationTesting Component-Based Software
Testing Component-Based Software Jerry Gao, Ph.D. San Jose State University One Washington Square San Jose, CA 95192-0180 Email:gaojerry@email.sjsu.edu 1 Abstract Today component engineering is gaining
More informationStructural Testing. (c) 2007 Mauro Pezzè & Michal Young Ch 12, slide 1
Structural Testing (c) 2007 Mauro Pezzè & Michal Young Ch 12, slide 1 Learning objectives Understand rationale for structural testing How structural (code-based or glass-box) testing complements functional
More informationSimulink Design Verifier vs. SPIN a Comparative Case Study
Simulink Design Verifier vs. SPIN a Comparative Case Study Florian Leitner and Stefan Leue Department of Computer and Information Science University of Konstanz, Germany {Florian.Leitner,Stefan.Leue}@uni-konstanz.de
More informationMSc Software Testing and Maintenance MSc Prófun og viðhald hugbúnaðar
MSc Software Testing and Maintenance MSc Prófun og viðhald hugbúnaðar Fyrirlestrar 31 & 32 Structural Testing White-box tests. 27/1/25 Dr Andy Brooks 1 Case Study Dæmisaga Reference Structural Testing
More informationOn the Generation of Test Cases for Embedded Software in Avionics or Overview of CESAR
1 / 16 On the Generation of Test Cases for Embedded Software in Avionics or Overview of CESAR Philipp Rümmer Oxford University, Computing Laboratory philr@comlab.ox.ac.uk 8th KeY Symposium May 19th 2009
More informationSoftware Engineering 2 A practical course in software engineering. Ekkart Kindler
Software Engineering 2 A practical course in software engineering Quality Management Main Message Planning phase Definition phase Design phase Implem. phase Acceptance phase Mainten. phase 3 1. Overview
More informationDon t Judge Software by Its (Code) Coverage
Author manuscript, published in "SAFECOMP 2013 - Workshop CARS (2nd Workshop on Critical Automotive applications : Robustness & Safety) of the 32nd International Conference on Computer Safety, Reliability
More informationDistributed Systems Programming (F21DS1) SPIN: Formal Analysis II
Distributed Systems Programming (F21DS1) SPIN: Formal Analysis II Andrew Ireland Department of Computer Science School of Mathematical and Computer Sciences Heriot-Watt University Edinburgh Overview Introduce
More informationModel Checking with Automata An Overview
Model Checking with Automata An Overview Vanessa D Carson Control and Dynamical Systems, Caltech Doyle Group Presentation, 05/02/2008 VC 1 Contents Motivation Overview Software Verification Techniques
More informationModel Checking. Dragana Cvijanovic
Model Checking Dragana Cvijanovic d.cvijanovic@cs.ucl.ac.uk 1 Introduction Computerised systems pervade more and more our everyday lives. Digital technology is now used to supervise critical functions
More informationIn this Lecture you will Learn: Testing in Software Development Process. What is Software Testing. Static Testing vs.
In this Lecture you will Learn: Testing in Software Development Process Examine the verification and validation activities in software development process stage by stage Introduce some basic concepts of
More informationAerospace Software Engineering
16.35 Aerospace Software Engineering Verification & Validation Prof. Kristina Lundqvist Dept. of Aero/Astro, MIT Would You...... trust a completely-automated nuclear power plant?... trust a completely-automated
More informationPart I: Preliminaries 24
Contents Preface......................................... 15 Acknowledgements................................... 22 Part I: Preliminaries 24 1. Basics of Software Testing 25 1.1. Humans, errors, and testing.............................
More informationIncremental Runtime Verification of Probabilistic Systems
Incremental Runtime Verification of Probabilistic Systems Vojtěch Forejt 1, Marta Kwiatkowska 1, David Parker 2, Hongyang Qu 1, and Mateusz Ujma 1 1 Department of Computer Science, University of Oxford,
More informationScenario Graphs Applied to Security (Summary Paper)
Book Title Book Editors IOS Press, 2003 1 Scenario Graphs Applied to Security (Summary Paper) Jeannette M. Wing Computer Science Department Carnegie Mellon University Pittsburgh, PA 15213 US Abstract.
More informationFormal Verification: Practical Exercise Model Checking with NuSMV
Formal Verification: Practical Exercise Model Checking with NuSMV Jacques Fleuriot Daniel Raggi Semester 2, 2017 This is the first non-assessed practical exercise for the Formal Verification course. You
More informationVerification and Validation
Verification and Validation Assuring that a software system meets a user's needs Ian Sommerville 2000 Software Engineering, 6th edition. Chapter 19 Slide 1 Objectives To introduce software verification
More informationComputer Science Technical Report
Computer Science Technical Report Feasibility of Stepwise Addition of Multitolerance to High Atomicity Programs Ali Ebnenasir and Sandeep S. Kulkarni Michigan Technological University Computer Science
More informationDeveloping Safety-Critical Systems: The Role of Formal Methods and Tools
Developing Safety-Critical Systems: The Role of Formal Methods and Tools Constance Heitmeyer Center for High Assurance Computer Systems Naval Research Laboratory Washington, DC 20375 Email: heitmeyer@itd.nrl.navy.mil
More informationIntroduction to Software Testing Chapter 3, Sec# 1 & 2 Logic Coverage
Introduction to Software Testing Chapter 3, Sec# 1 & 2 Logic Coverage Paul Ammann & Jeff Offutt http://www.cs.gmu.edu/~offutt/soft waretest/ Ch. 3 : Logic Coverage Four Structures for Modeling Software
More informationVerification and Validation. Ian Sommerville 2004 Software Engineering, 7th edition. Chapter 22 Slide 1
Verification and Validation Ian Sommerville 2004 Software Engineering, 7th edition. Chapter 22 Slide 1 Verification vs validation Verification: "Are we building the product right?. The software should
More informationCTL Model-checking for Systems with Unspecified Components
CTL Model-checking for Systems with Unspecified Components [Extended Abstract Gaoyan Xie and Zhe Dang School of Electrical Engineering and Computer Science Washington State University, Pullman, WA 99164,
More informationVerification and Validation
Lecturer: Sebastian Coope Ashton Building, Room G.18 E-mail: coopes@liverpool.ac.uk COMP 201 web-page: http://www.csc.liv.ac.uk/~coopes/comp201 Verification and Validation 1 Verification and Validation
More informationInternational Journal of Advanced Research in Computer Science and Software Engineering
Volume 3, Issue 4, April 2013 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Testing Techniques
More informationOverview of SRI s. Lee Pike. June 3, 2005 Overview of SRI s. Symbolic Analysis Laboratory (SAL) Lee Pike
June 3, 2005 lee.s.pike@nasa.gov Model-Checking 101 Model-checking is a way automatically to verify hardware or software. For a property P, A Model-checking program checks to ensure that every state on
More informationAdministrivia. ECE/CS 5780/6780: Embedded System Design. Acknowledgements. What is verification?
Administrivia ECE/CS 5780/6780: Embedded System Design Scott R. Little Lab 8 status report. Set SCIBD = 52; (The Mclk rate is 16 MHz.) Lecture 18: Introduction to Hardware Verification Scott R. Little
More informationVerification of Bakery algorithm variants for two processes
Verification of Bakery algorithm variants for two processes David Dedi 1, Robert Meolic 2 1 Nova Vizija d.o.o., Vreerjeva ulica 8, SI-3310 Žalec 2 Faculty of Electrical Engineering and Computer Science,
More informationExpression Coverability Analysis: Improving code coverage with model checking
Expression Coverability Analysis: Improving code coverage with model checking Graeme D. Cunningham, Institute for System Level Integration, Livingston, Scotland Paul B. Jackson, Institute for System Level
More informationMSc Software Testing MSc Prófun hugbúnaðar
MSc Software Testing MSc Prófun hugbúnaðar Fyrirlestrar 7 & 8 Structural Testing White-box tests. 29/8/27 Dr Andy Brooks 1 Case Study Dæmisaga Reference Structural Testing of Programs, A Survey, A A Omar
More informationFinite State Verification. CSCE Lecture 14-02/25/2016
Finite State Verification CSCE 747 - Lecture 14-02/25/2016 So, You Want to Perform Verification... You have a property that you want your program to obey. Great! Let s write some tests! Does testing guarantee
More informationImplementing Sequential Consistency In Cache-Based Systems
To appear in the Proceedings of the 1990 International Conference on Parallel Processing Implementing Sequential Consistency In Cache-Based Systems Sarita V. Adve Mark D. Hill Computer Sciences Department
More informationMore on Verification and Model Checking
More on Verification and Model Checking Wednesday Oct 07, 2015 Philipp Rümmer Uppsala University Philipp.Ruemmer@it.uu.se 1/60 Course fair! 2/60 Exam st October 21, 8:00 13:00 If you want to participate,
More informationHarmonization of usability measurements in ISO9126 software engineering standards
Harmonization of usability measurements in ISO9126 software engineering standards Laila Cheikhi, Alain Abran and Witold Suryn École de Technologie Supérieure, 1100 Notre-Dame Ouest, Montréal, Canada laila.cheikhi.1@ens.etsmtl.ca,
More informationSpecification-based Testing 2
Specification-based Testing 2 Conrad Hughes School of Informatics Slides thanks to Stuart Anderson 26 January 2010 Software Testing: Lecture 5 1 Overview We consider issues in the generation of test cases
More informationValidation of models and tests for constrained combinatorial interaction testing
Validation of models and tests for constrained combinatorial interaction testing Paolo Arcaini Angelo Gargantini Paolo Vavassori University of Bergamo- Italy International Workshop on Combinatorial Testing
More informationLeveraging DTrace for runtime verification
Leveraging DTrace for runtime verification Carl Martin Rosenberg June 7th, 2016 Department of Informatics, University of Oslo Context: Runtime verification Desired properties System Every request gets
More informationA Case Study for CTL Model Update
A Case Study for CTL Model Update Yulin Ding and Yan Zhang School of Computing & Information Technology University of Western Sydney Kingswood, N.S.W. 1797, Australia email: {yding,yan}@cit.uws.edu.au
More informationJPF SE: A Symbolic Execution Extension to Java PathFinder
JPF SE: A Symbolic Execution Extension to Java PathFinder Saswat Anand 1,CorinaS.Păsăreanu 2, and Willem Visser 2 1 College of Computing, Georgia Institute of Technology saswat@cc.gatech.edu 2 QSS and
More informationIntroduction to SMV. Arie Gurfinkel (SEI/CMU) based on material by Prof. Clarke and others Carnegie Mellon University
Introduction to SMV Arie Gurfinkel (SEI/CMU) based on material by Prof. Clarke and others 2 Carnegie Mellon University Symbolic Model Verifier (SMV) Ken McMillan, Symbolic Model Checking: An Approach to
More informationSoftware Testing TEST CASE SELECTION AND ADEQUECY TEST EXECUTION
Software Testing TEST CASE SELECTION AND ADEQUECY TEST EXECUTION Overview, Test specification and cases, Adequacy criteria, comparing criteria, Overview of test execution, From test case specification
More informationAccurately Choosing Execution Runs for Software Fault Localization
Accurately Choosing Execution Runs for Software Fault Localization Liang Guo, Abhik Roychoudhury, and Tao Wang School of Computing, National University of Singapore, Singapore 117543 {guol, abhik, wangtao}@comp.nus.edu.sg
More informationTesting. ECE/CS 5780/6780: Embedded System Design. Why is testing so hard? Why do testing?
Testing ECE/CS 5780/6780: Embedded System Design Scott R. Little Lecture 24: Introduction to Software Testing and Verification What is software testing? Running a program in order to find bugs (faults,
More informationFacts About Testing. Cost/benefit. Reveal faults. Bottom-up. Testing takes more than 50% of the total cost of software development
Reveal faults Goals of testing Correctness Reliability Usability Robustness Performance Top-down/Bottom-up Bottom-up Lowest level modules tested first Don t depend on any other modules Driver Auxiliary
More informationClass-Component Testability Analysis
Class-Component Testability Analysis SUPAPORN KANSOMKEAT Faculty of Engineering, Chulalongkorn University Bangkok, 10330, THAILAND WANCHAI RIVEPIBOON Faculty of Engineering, Chulalongkorn University Bangkok,
More informationUtilizing Static Analysis for Programmable Logic Controllers
Sébastien Bornot Ralf Huuck Ben Lukoschus Lehrstuhl für Softwaretechnologie Universität Kiel Preußerstraße 1 9, D-24105 Kiel, Germany seb rhu bls @informatik.uni-kiel.de Yassine Lakhnech Verimag Centre
More informationA Toolbox for Counter-Example Analysis and Optimization
A Toolbox for Counter-Example Analysis and Optimization Alan Mishchenko Niklas Een Robert Brayton Department of EECS, University of California, Berkeley {alanmi, een, brayton}@eecs.berkeley.edu Abstract
More informationModeling and Verification of Marine Equipment Systems Using a Model Checker
Modeling and Verification of Marine Equipment Systems Using a Model Checker Shunsuke YAO Hiroaki AWANO Yasushi HIRAOKA Kazuko TAKAHASHI Abstract We discuss the modeling and verification of marine equipment
More informationCS/ECE 5780/6780: Embedded System Design
CS/ECE 5780/6780: Embedded System Design John Regehr Lecture 18: Introduction to Verification What is verification? Verification: A process that determines if the design conforms to the specification.
More informationΗΜΥ 317 Τεχνολογία Υπολογισμού
ΗΜΥ 317 Τεχνολογία Υπολογισμού Εαρινό Εξάμηνο 2008 ΙΑΛΕΞΕΙΣ 18-19: Έλεγχος και Πιστοποίηση Λειτουργίας ΧΑΡΗΣ ΘΕΟΧΑΡΙ ΗΣ Λέκτορας ΗΜΜΥ (ttheocharides@ucy.ac.cy) [Προσαρμογή από Ian Sommerville, Software
More informationClass 17. Discussion. Mutation analysis and testing. Problem Set 7 discuss Readings
Class 17 Questions/comments Graders for Problem Set 6 (4); Graders for Problem set 7 (2-3) (solutions for all); will be posted on T-square Regression testing, Instrumentation Final project presentations:
More informationGuidelines for deployment of MathWorks R2010a toolset within a DO-178B-compliant process
Guidelines for deployment of MathWorks R2010a toolset within a DO-178B-compliant process UK MathWorks Aerospace & Defence Industry Working Group Guidelines for deployment of MathWorks R2010a toolset within
More informationDouble Header. Two Lectures. Flying Boxes. Some Key Players: Model Checking Software Model Checking SLAM and BLAST
Model Checking #1 Double Header Two Lectures Model Checking Software Model Checking SLAM and BLAST Flying Boxes It is traditional to describe this stuff (especially SLAM and BLAST) with high-gloss animation
More informationReview of Regression Test Case Selection Techniques
Review of Regression Test Case Selection Manisha Rani CSE Department, DeenBandhuChhotu Ram University of Science and Technology, Murthal, Haryana, India Ajmer Singh CSE Department, DeenBandhuChhotu Ram
More informationCompatible Qualification Metrics for Formal Property Checking
Munich - November 18, 2013 Formal Property Checking Senior Staff Engineer Verification Infineon Technologies Page 1 Overview Motivation Goals Qualification Approaches Onespin s Coverage Feature Certitude
More informationTo be or not programmable Dimitri Papadimitriou, Bernard Sales Alcatel-Lucent April 2013 COPYRIGHT 2011 ALCATEL-LUCENT. ALL RIGHTS RESERVED.
To be or not programmable Dimitri Papadimitriou, Bernard Sales Alcatel-Lucent April 2013 Introduction SDN research directions as outlined in IRTF RG outlines i) need for more flexibility and programmability
More informationAn Eclipse Plug-in for Model Checking
An Eclipse Plug-in for Model Checking Dirk Beyer, Thomas A. Henzinger, Ranjit Jhala Electrical Engineering and Computer Sciences University of California, Berkeley, USA Rupak Majumdar Computer Science
More informationRequirements Specifications
ACM Transactions on Software Engineering and Methodology, 1996. Automated Consistency Checking of Requirements Specifications CONSTANCE L. HEITMEYER, RALPH D. JEFFORDS, BRUCE G. LABAW JUNBEOM YOO Dependable
More informationIntroduction to Software Testing
Introduction to Software Testing Software Testing This paper provides an introduction to software testing. It serves as a tutorial for developers who are new to formal testing of software, and as a reminder
More informationEECS 481 Software Engineering Exam #1. You have 1 hour and 20 minutes to work on the exam.
EECS 481 Software Engineering Exam #1 Write your name and UM uniqname on the exam. There are ten (10) pages in this exam (including this one) and seven (7) questions, each with multiple parts. Some questions
More informationAreas related to SW verif. Trends in Software Validation. Your Expertise. Research Trends High level. Research Trends - Ex 2. Research Trends Ex 1
Areas related to SW verif. Trends in Software Validation Abhik Roychoudhury CS 6214 Formal Methods Model based techniques Proof construction techniques Program Analysis Static Analysis Abstract Interpretation
More informationModel Transformation Impact on Test Artifacts: An Empirical Study
Model Transformation Impact on Test Artifacts: An Empirical Study Anders Eriksson Informatics Research Center University of Skövde, Sweden anders.eriksson@his.se Birgitta Lindström Sten F. Andler Informatics
More informationFormal Verification for safety critical requirements From Unit-Test to HIL
Formal Verification for safety critical requirements From Unit-Test to HIL Markus Gros Director Product Sales Europe & North America BTC Embedded Systems AG Berlin, Germany markus.gros@btc-es.de Hans Jürgen
More informationHaving a BLAST with SLAM
Announcements Having a BLAST with SLAM Meetings -, CSCI 7, Fall 00 Moodle problems? Blog problems? Looked at the syllabus on the website? in program analysis Microsoft uses and distributes the Static Driver
More information