Testing: Test design and testing process


Testing: Test design and testing process
Zoltán Micskei, based on István Majzik's slides
Dept. of Measurement and Information Systems, Budapest University of Technology and Economics

Overview
Testing basics
o Goals and definitions
Test design
o Specification based (functional, black-box) testing
o Structure based (white-box) testing
Testing process
o Module testing
o Integration testing
o System testing
o Validation testing

Basic definitions What is the goal of testing? What are the costs of testing? What can be automated?

Definition of testing
An activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component. (IEEE Std 829-2008)
Lots of other, conflicting definitions!

Basic concepts
Specification, requirements → test cases → test execution → verdicts
Test case
o A set of test inputs, execution conditions, and expected results developed for a particular objective
Test suite
o Several test cases for a component or system under test
Test oracle
o A source to determine the expected results to compare with the actual results
Verdict: the result of a test (pass / fail / error)

Remarks on testing
Testing != debugging
Exhaustive testing: running the program in all possible ways (all inputs); hard to realize in practice
Observations:
o Dijkstra: Testing is able to show the presence of faults, but not able to show the absence of faults.
o Hoare: Testing can be considered as part of an inductive proof: if the program runs correctly for a given input, then it will run similarly correctly for similar inputs.

Practical aspects of testing
Testing costs may reach 50% of the development costs!
o Test data generation, test code implementation: typically manual work
o Running the tests, evaluation of the results: may be automated
Testing embedded systems:
o Cross-development (different platforms)
o Platform related faults shall be considered (integration)
o Performance and timing related testing are relevant
Testing safety-critical systems:
o Prescribed techniques
o Prescribed test coverage metrics

Testing in the standards (here: EN 50128)
Software design and implementation: functional/black-box testing (D3)

Testing in the standards (here: EN 50128)
Performance testing (D6)

Test design
How can test data be selected?

Test approaches
I. Specification based (functional) testing
o The system is considered as a black box
o Only the external behaviour (functionality) is known (the internal behaviour is not)
o Test goals: checking the existence of the specified functions and the absence of extra functions
II. Structure based testing
o The system is considered as a white box
o The internal structure (source) is known
o Test goals: coverage of the internal behaviour (e.g., program graph)
(Figures: module M1 with operations m1(), m2(), m3() as a black box; its internal activity graph A1..A4 as a white box)

I. Specification based (functional) testing
Goals: based on the functional specification, find representative inputs (test data) for testing the functionality.
Overview of techniques:
1. Equivalence partitioning
2. Boundary value analysis
3. Cause-effect analysis
4. Combinatorial techniques

1. Equivalence partitioning
Input and output equivalence classes: data that are expected to cover the same faults (cover the same part of the program)
Goal: each equivalence class is represented by a test input (selected test data); the correctness for the remaining inputs follows from the principle of induction
Test data selection is a heuristic procedure:
o Input data triggering the same service
o Valid and invalid input data → valid and invalid equivalence classes
o Invalid data: robustness testing

Equivalence classes (partitions)
Classic example: triangle characterization program
o Inputs: lengths of the sides a, b, c (here 3 integers)
o Outputs: equilateral, isosceles, scalene
Test data for the equivalence classes:
o Equilateral: 3,3,3
o Isosceles: 5,5,2 (similarly for the other sides)
o Scalene: 5,6,7
o Not a triangle: 1,2,5 (similarly for the other sides)
o Just not a triangle: 1,2,3
o Invalid inputs:
  Zero value: 0,1,1
  Negative value: -3,-5,-3
  Not an integer: 2,2,'a'
  Fewer inputs than needed: 3,4
How many tests are selected? Beck: 6 tests, Binder: 65 tests, Jorgensen: 185 tests

Valid/invalid equivalence classes
Tests in case of several inputs:
o Valid (normal) equivalence classes: each test datum should cover as many equivalence classes as possible
o Invalid equivalence classes: first cover each invalid equivalence class separately, then combine them systematically

2. Boundary value analysis
Examining the boundaries of data partitions:
o Focusing on the boundaries of equivalence classes
o Input and output partitions are also examined
o Typical faults to be detected: faulty relational operators, conditions in cycles, size of data structures, ...
Typical test data:
o A boundary h1 requires 3 tests
o A partition with boundaries h1, h2 requires 5-7 tests

3. Cause-effect analysis
Examining the relation of inputs and outputs (if it is simple, e.g., combinational)
o Causes: input equivalence classes
o Effects: output equivalence classes
Boole-graph: relations of causes and effects
o AND, OR relations
o Invalid combinations
Decision table: covering the Boole-graph
o Truth table based representation
o Columns represent test data

Cause-effect analysis: example
Causes (inputs): 1 = Owner ID, 2 = Administrator ID, 3 = Authorization
Effects (outputs): A = Invalid ID, B = Access granted, C = Authorization failed
Boole-graph: causes 1 and 2 are joined by an OR node, which is combined with cause 3 by an AND node
Decision table (columns T1..T3 are test data):

            T1  T2  T3
  Inputs 1   0   1   0
         2   1   0   0
         3   1   1   1
  Outputs A  0   0   1
          B  1   1   0
          C  0   0   0

4. Combinatorial techniques
Several input parameters:
o Failures are caused by (specific) combinations
o Testing all combinations requires too many test cases
o Rare combinations may also cause failures
Basic idea: n-wise testing
o For each n parameters, testing all possible combinations of their potential values
o Special case (n = 2): pairwise testing

Example: pairwise testing
Given input parameters and potential values:
o OS: eCos, µC/OS
o CPU: AVR Mega, ARM7
o Protocol: IPv4, IPv6
How many combinations are possible? How many test cases are needed for pairwise testing?
A potential test suite:
T1: eCos, AVR Mega, IPv4
T2: eCos, ARM7, IPv6
T3: µC/OS, AVR Mega, IPv6
T4: µC/OS, ARM7, IPv4

Additional techniques
Finite automaton based testing
o The specification is given as a finite automaton
o Typical test goals: covering each state, each transition, invalid transitions, ...
Use case based testing
o The specification is given as a set of use cases
o Each use case shall be covered by the test suite
Random testing
o Easy to generate (but evaluation may be more difficult)
o Low efficiency


II. Structure based testing
The internal structure is known:
o It has to be covered by the test suite
Goal: no statement, decision, or execution path shall remain in the program that was not executed during testing

The internal structure
Well-specified representation:
o Model-based: state machine, activity diagram (figures: an activity graph A1..A5 and a state machine S1..S4 with guarded transitions such as e2[g]/a1)
o Source code based: control flow graph (program graph)
Source code example:
  a: for (i=0; i<max; i++) {
  b:   if (i==a) {
  c:     n=n-i;
       } else {
  d:     m=n-i;
       }
  e:   printf("%d\n", n);
     }
  f: printf("Ready.");
Control flow graph: nodes a..f; e.g., statement c, and a path such as a → b → d → e → f

Conditions and decisions
Condition: a logically indivisible (atomic) expression
Decision: a Boolean expression composed of conditions and zero or more Boolean operators
Examples:
o A decision with one condition: if (temp > 20) { ... }
o A decision with several conditions: if (temp > 20 && (valveIsOpen || p == HIGH)) { ... }

Test coverage metrics
Characterizing the quality of the test suite: which part of the testable elements was tested?
1. Statements → statement coverage
2. Decisions → decision coverage
3. Conditions → condition coverage
4. Execution paths → path coverage
This is not fault coverage!
Standards require coverage (DO-178B, EN 50128, ...)
o 100% statement coverage is a basic requirement

1. Statement coverage
Definition: (number of statements executed during testing) / (number of all statements)
Does not take into account branches without statements.
(Figure: two control flow graphs built from k=0, a branch [a>0] k=1 / [a<=0], and m=1/k, reaching 80% and 100% statement coverage)

2. Decision coverage
Definition: (number of decision outcomes reached during testing) / (number of all decision outcomes)
Does not take into account all combinations of conditions!
(Figure: a branch with the decision [safe(c) || safe(b)], reaching 50% and 100% decision coverage)

3. Multiple condition coverage
Definition: (number of condition combinations tried during testing) / (number of all condition combinations)
Strong, but complex: for n conditions, 2^n test cases may be necessary!
(Figure: the number of test data grows exponentially with the number of conditions)
In avionics systems there are programs with more than 30 conditions!

Other coverage criteria
MC/DC: Modified Condition/Decision Coverage
It is used in the standard DO-178B to ensure that Level A (catastrophic) software is tested adequately.
During testing, all of the following must be true:
o Each entry and exit point has been invoked at least once,
o every condition in a decision in the program has taken all possible outcomes at least once,
o every decision in the program has taken all possible outcomes at least once,
o each condition in a decision is shown to independently affect the outcome of the decision.

4. Path coverage
Definition: (number of independent paths traversed during testing) / (number of all independent paths)
100% path coverage implies:
o 100% statement coverage and 100% decision coverage
o 100% multiple condition coverage is not implied
(Figure: a graph with 80% path coverage but 100% statement coverage)

Summary of coverage criteria
(Figure: hierarchy of coverage criteria)
From: K. J. Hayhurst et al., A Practical Tutorial on Modified Condition/Decision Coverage, NASA/TM-2001-210876

Testing process
What are the typical phases of testing? How can complex systems be tested?

Testing and test design in the V-model
o Requirement analysis ↔ system validation (prepared by system validation design); operation and maintenance follow
o System specification ↔ system verification (prepared by system test design)
o Architecture design ↔ system integration (prepared by integration test design)
o Module design ↔ module verification (prepared by module test design)
o Module implementation at the bottom of the V

1. Module testing
Modules:
o Logically separated units
o Well-defined interfaces
o OO paradigm: classes (packages, components)
(Figure: module call hierarchy, in the ideal case a tree: A calls A1, A2, A3; A3 calls A31, A32, A33; A31 calls A311, A312, A313)

Lowest level testing: module testing
o The integration phase is more efficient if the modules are already tested
Modules can be tested separately:
o Handling complexity
o Debugging is easier
o Testing can proceed in parallel for the modules
Complementary techniques:
o Specification based and structure based testing

Isolated testing of modules
Modules are tested separately, in isolation
o A test executor and test stubs are required
o Integration is not supported
(Figure: a module, e.g. A31, under test: a test executor replaces its caller, and test stubs replace the modules it calls, A311, A312, A313)

Regression testing
Repeated execution of test cases:
When the module is changed:
o Iterative software development,
o modified specification,
o corrections, ...
When the environment changes:
o Changes in the caller/called modules,
o changes in platform services, ...
Goals:
o Repeatable, automated test execution
o Identification of the functions to be re-tested

2. Integration testing
Testing the interactions of modules
Motivation:
o The system-level interaction of modules may be incorrect even though all modules are correct
Methods:
o Functional testing: testing scenarios (sometimes the scenarios are part of the specification)
o (Structure based testing at module level)
Approaches:
o Big bang testing: integration of all modules at once
o Incremental testing: stepwise integration of modules

Big bang testing
Integration of all modules and testing through the external interfaces of the integrated system
o External test executors
o Based on the functional specification of the system
o To be applied only in case of small systems
o Debugging is difficult!
(Figure: testers attached to the external interfaces of the integrated modules A, B, C, D)

Top-down integration testing
Modules are tested from the caller modules
o Stubs replace the lower-level modules that are called
o Requirement-oriented testing
o Module modification affects the testing of the lower levels
(Figure: the already tested module A acts as test executor for the modules under test below it; test stubs replace the lowest-level modules)

Bottom-up integration testing
Modules use already tested modules
o A test executor is needed
o Testing is performed in parallel with integration
o Module modification affects the testing of the upper levels
(Figure: a test executor drives the module under test, which calls already tested lower-level modules)

Integration with the runtime environment
Motivation: it is hard to construct stubs for the runtime environment
o Platform services, RT-OS, task scheduler, ...
Strategy:
1. Top-down integration of the application modules down to the level of the runtime environment
2. Bottom-up testing of the runtime environment:
   isolation testing of functions (if necessary),
   big bang testing with the lowest level of the application module hierarchy
3. Integration of the application with the runtime environment, finishing the top-down integration

3. System testing
Testing on the basis of the system level specification
Characteristics:
o Performed after hardware-software integration
o Testing the functional specification + extra-functional properties as well
Testing aspects:
o Data integrity
o User profile (workload)
o Checking application conditions of the system (resource usage, saturation)
o Testing fault handling

Types of system tests
Performance testing: real workload, response times
Configuration testing: hardware and software settings
Concurrency testing: increasing the number of users; checking deadlock, livelock
Stress testing: checking saturation effects
Reliability testing: checking the effects of faults
Failover testing: checking the redundancy, checking failover/failback

4. Validation testing
Goal: testing in the real environment
o User requirements are taken into account
o Non-specified expectations come to light
o Reactions to unexpected inputs/conditions are checked
o Events of low probability may appear
Timing aspects:
o Constraints and conditions of the real environment
o Real-time testing and monitoring is needed
Environment simulation:
o If given situations cannot be tested in the real environment (e.g., protection systems)
o Simulators shall be validated somehow

Relation to the development process
1. Module testing
o Isolation testing
2. Integration testing
o Big bang testing
o Top-down testing
o Bottom-up testing
o Integration with the runtime environment
3. System testing
o Software-hardware integration testing
4. Validation testing
o Testing user requirements
o Environment simulation

Summary
Testing techniques
o Specification based (functional, black-box) testing:
  equivalence partitioning, boundary value analysis, cause-effect analysis
o Structure based (white-box) testing:
  coverage metrics and criteria
Testing process
o Module testing
o Integration testing: top-down and bottom-up integration testing
o System testing
o Validation testing