Getting those Bugs Out -- Software Testing

Slide 1: Getting those Bugs Out -- Software Testing

1. Issues in software testing and reliability
2. Test sets, test selection criteria, and ideal test sets
3. Defect Testing
   3.1 Black Box Testing
   3.2 White Box Testing
4. System Test, Acceptance Test

Slide 2: Difficulties of Testing Software

Input domain size: the input space is enormous (typically infinite). The "fish in the sea" problem: we can never tell whether faults remain in the system, only what faults have been found so far.
Software complexity: software has numerous internal states and paths, far too many to test them all.
High reliability requirements: it is very hard to test systems that have extremely stringent quality requirements.
Time constraints: when do you stop testing?
Requirements clarity: sometimes it is quite difficult to tell whether or not the system is behaving correctly.
Variable user perception: users have different ideas of what they want the software to do.

THE PROBLEM: how do you pick test sets?

Slide 3: Recap of Lifecycle Testing

Each development phase is paired with a testing phase:
    Requirements    <-->  System Testing
    Design          <-->  Integration Testing
    Implementation  <-->  Unit Testing

The testing relationship can mean two things:
1. Faults introduced in this step are detected by the corresponding testing phase.
2. The testing phase checks that the product conforms to the corresponding lifecycle document.

This lecture mostly focuses on unit testing, although the same principles apply to the other levels of testing.

Slide 4: Two Main Kinds of Testing

Statistical testing: choose a very large test set from the input population, in a manner that mimics normal use, and statistically estimate the remaining faults.

Defect testing: choose a very specific test set according to a particular criterion that is designed to catch common kinds of errors.
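
As a rough illustration of the statistical idea (not taken from the slides), the sketch below draws inputs from an assumed operational profile and records how often a hypothetical function under test disagrees with a trusted oracle. The profile, the function under test, and the oracle are all invented placeholders.

    // statistical_sketch.cpp -- illustrative only; profile and oracle are assumptions
    #include <cmath>
    #include <iostream>
    #include <random>

    // Hypothetical function under test: intended to compute x^(3/2) for x >= 0.
    double underTest(double x) { return x * x / std::sqrt(x); }

    // Trusted oracle used only to judge the outputs.
    double oracle(double x) { return std::pow(x, 1.5); }

    int main() {
        // Assumed operational profile: most real inputs are small positive values.
        std::mt19937 gen(42);
        std::exponential_distribution<double> profile(0.5);

        const int runs = 100000;
        int failures = 0;
        for (int i = 0; i < runs; ++i) {
            double x = profile(gen);
            // Count a failure when the result drifts outside a relative tolerance.
            if (std::fabs(underTest(x) - oracle(x)) > 1e-6 * (1.0 + oracle(x)))
                ++failures;
        }
        // The observed failure rate estimates the failure probability under this profile.
        std::cout << "estimated failure rate: "
                  << static_cast<double>(failures) / runs << "\n";
        return 0;
    }

The estimate is only meaningful for the assumed usage profile; a different profile would weight the input space differently and give a different failure rate.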

Slide 5: A Cautionary Tale

/* returns x^(3/2) */
float exp3over2pos(float inp) { /* Version 0.1 */
    return (inp*inp/sqrt(inp));
}
Test inputs: 3, 4, -5   !!!

float exp3over2pos(float inp) { /* Version 0.2 */
    if (inp < 0)
        return (inp*inp/sqrt(-inp));
    else
        return (inp*inp/sqrt(-inp));
}
Test inputs: -5, -3, -2

float exp3over2pos(float inp) { /* Version 0.3 */
    if (0 == inp)
        throw ZeroDivideException;
    else if (inp < 0)
        return (inp*inp/sqrt(-inp));
    else
        return (inp*inp/sqrt(inp));
}
Test inputs: 0

Note handed to the system tester: "This function returns x^(3/2) for float x."

Slide 6: Morals of This Story

Regression testing: after making any change to the program, re-test on every input that previously worked with the old version of the program (a minimal harness in this spirit is sketched below).

Picking test sets is HARD!

Black box testing: using the specification as a guide, identify different categories of inputs, and select one or more inputs from each of those categories.

White box testing: using the structure of the program as a guide, identify categories of inputs that exercise different parts of the program's structure, and select one or more inputs from each category.

No testing method is guaranteed to find faults, failures, errors, etc.
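
A minimal regression-style harness for the cautionary tale might look like the sketch below. The expected values, the tolerance, and the choice of a plain table of cases are assumptions for illustration; the function body is Version 0.3 from the previous slide.

    // regression_sketch.cpp -- a minimal regression harness; expected values
    // and tolerance are assumptions, not part of the slides.
    #include <cmath>
    #include <iostream>
    #include <stdexcept>

    float exp3over2pos(float inp) {               // Version 0.3 from the tale
        if (inp == 0) throw std::runtime_error("zero divide");
        else if (inp < 0) return inp * inp / std::sqrt(-inp);
        else return inp * inp / std::sqrt(inp);
    }

    struct Case { float input; float expected; };

    int main() {
        // Every input that worked with an earlier version stays in the suite,
        // so a later "fix" cannot silently break it again.
        const Case suite[] = {
            {3.0f, 5.19615f},   // passed against Version 0.1
            {4.0f, 8.0f},       // passed against Version 0.1
            {2.0f, 2.82843f},   // assumed additional positive case
        };
        int failed = 0;
        for (const Case& c : suite) {
            float got = exp3over2pos(c.input);
            if (std::fabs(got - c.expected) > 1e-3f) {
                std::cout << "FAIL: input " << c.input << " expected "
                          << c.expected << " got " << got << "\n";
                ++failed;
            }
        }
        std::cout << (failed ? "regression suite FAILED\n" : "regression suite passed\n");
        return failed;
    }

Run against Version 0.2, which computes sqrt(-inp) even for positive inputs, this suite would have failed immediately, which is exactly the point of keeping the old passing inputs around.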

Slide 7: Some Terminology

Failure: incorrect system behaviour -- not what the customer wanted.
Fault: the part of the program that caused the failure to happen.
Input domain: the set of all possible inputs to the program P, denoted by D.
Test set: a set T such that T ⊆ D.
Test selection criterion TC: a set of test sets, TC ⊆ 2^D.
T satisfies TC if T ∈ TC.
Ideal test set T: one that reveals a fault whenever the program contains one.
TC1 is finer than TC2 if, for every program P and every test set T in TC1, there is a test set t in TC2 such that t ⊆ T.
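
As a concrete illustration of these definitions (using the exp3over2pos example from earlier, not anything stated on this slide):

    D  = all float values that can be passed to exp3over2pos
    T1 = {3, 4, -5}        a test set, since T1 ⊆ D
    TC = "at least one negative, one zero, and one positive input",
         i.e. the set of all subsets of D containing such inputs, so TC ⊆ 2^D
    T2 = {-5, 0, 3}        satisfies TC, since T2 ∈ TC; T1 does not (it has no zero)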

Slide 8: Functional or Black Box Testing

Test selection criterion: the program's specification (irrespective of the implementation).

Specification of a binary search routine:
    X is a sorted array of keys of size n > 1.
    k is an input key.
    If X contains the key, return the smallest j such that X[j] = k; else throw a NotFound exception.

How to identify input categories based on the specification:
1. Inputs which conform to the pre-conditions.
2. Inputs which don't.
3. Inputs which give rise to each type of answer.
How would you do this for binary search? (One possible breakdown is sketched below.)

Benefits of this approach:
- The test creation process looks carefully at the specification.
- Testing stays true to the specification.
- It can be done by someone unfamiliar with the implementation of the system.

Disadvantages of this approach:
- May not test for faults due to internal implementation problems.
- May not test combinations of categories.
- Usually informal; hard to verify that the criterion is satisfied.
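
One possible category-based test set for the binary search specification is sketched below. The binarySearch signature, the NotFound type, and the concrete arrays and keys are invented for illustration; the placeholder implementation only exists so the sketch compiles and could be swapped for any routine meeting the spec.

    // blackbox_sketch.cpp -- category-per-test illustration of the spec above
    #include <cstddef>
    #include <iostream>
    #include <stdexcept>
    #include <vector>

    struct NotFound : std::runtime_error {
        NotFound() : std::runtime_error("key not found") {}
    };

    // Placeholder implementation (assumption): returns the smallest j with X[j] == k.
    std::size_t binarySearch(const std::vector<int>& X, int k) {
        std::size_t lo = 0, hi = X.size();
        while (lo < hi) {
            std::size_t mid = lo + (hi - lo) / 2;
            if (X[mid] < k) lo = mid + 1; else hi = mid;
        }
        if (lo == X.size() || X[lo] != k) throw NotFound();
        return lo;
    }

    int main() {
        std::vector<int> sorted{1, 3, 3, 7, 9};

        // Category: conforms to pre-conditions, key present (smallest index matters).
        std::cout << (binarySearch(sorted, 3) == 1 ? "ok" : "FAIL") << "\n";

        // Category: conforms to pre-conditions, key absent -> NotFound expected.
        try { binarySearch(sorted, 5); std::cout << "FAIL\n"; }
        catch (const NotFound&) { std::cout << "ok\n"; }

        // Category: violates pre-conditions (unsorted array); the spec leaves the
        // behaviour open, so the test only records what happens.
        std::vector<int> unsorted{9, 1, 7};
        try { binarySearch(unsorted, 7); std::cout << "returned an index\n"; }
        catch (const NotFound&) { std::cout << "threw NotFound\n"; }
        return 0;
    }

Note that the categories come straight from the specification; nothing in the tests depends on how binarySearch is actually implemented.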

Slide 9: White Box Testing

Test selection criterion: the program's implementation (irrespective of the specification), based on some abstraction/approximation of it.

1. Statement coverage criterion: a test set T such that executing program P on T executes every statement in P.
2. Test coverage measure: for any test set T and any criterion, how good is T (% of coverage)?

float myfun(float x) { /* I don't know what this program does */
    float y;
    if (x == 0) throw xzeroex;
    else if (x >= 1) throw xoneex;
    if (x < 0) y = -x;
    return (y/sqrt(x*(1-x)));
}

Some sample test sets: {1, -2}, {0, -2}, {0, -2, 3}, {0, -1, 2}
Do they satisfy the criterion? If not, what is the coverage?
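
One way to trace the example (not from the slides), counting the two throws, the assignment y = -x, and the return as the four statement "bodies" to be reached; a different statement granularity changes the percentages but not the conclusions:

    x = 1        reaches throw xoneex
    x = 0        reaches throw xzeroex
    x = -2, -1   reach y = -x and the return
    x = 2, 3     reach throw xoneex

    {1, -2}      misses throw xzeroex  -> 3 of 4 bodies, criterion not satisfied
    {0, -2}      misses throw xoneex   -> 3 of 4 bodies, criterion not satisfied
    {0, -2, 3}   reaches all four      -> statement coverage satisfied
    {0, -1, 2}   reaches all four      -> statement coverage satisfied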

Slide 10: More White Box Testing

Edge coverage criterion: every edge in the program's control flow graph is covered.

float myfun(float x) {
    float y;
    if (x == 0) throw xzeroex;
    else if (x >= 1) throw xoneex;
    if (x < 0) y = -x;
    return (y/sqrt(x*(1-x)));
}

Some sample test sets: {1, -2}, {0, -2}, {0, -2, 3}, {0, -1, 2}, {0, -1, 2, -2}, {0, -2, 3, 0.8}, {0, -1, 2, 0.7}

The edge coverage criterion is finer than the statement coverage criterion.
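
Again one possible reading (not from the slides), assuming the control flow graph has a separate false edge out of "if (x < 0)" that skips the assignment and goes straight to the return:

    Only inputs with 0 < x < 1 take that false edge (they are nonzero, below 1,
    and not negative), and on that path y is used uninitialized.

    {0, -2, 3}, {0, -1, 2}, {0, -1, 2, -2}   cover every other edge but miss it
                                             -> statement coverage without edge coverage
    {0, -2, 3, 0.8} and {0, -1, 2, 0.7}      add such an input -> edge coverage,
                                             and they expose the uninitialized-y path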

Slide 11: Proof That Edge Coverage Is Finer Than Statement Coverage

Claim: for every program without dead code, every test set that achieves edge coverage also achieves statement coverage (so, taking t = T in the definition of "finer" from Slide 7, edge coverage is finer than statement coverage).

Suppose not. Then there is some program P for which a test set T achieves edge coverage but fails to achieve statement coverage, i.e., some statement S in P is not executed by T. Assume P has no dead code. Then S is reachable, so the node containing S is entered by some edge E. Since T achieves edge coverage, T executes E, and therefore T executes S. This contradicts the assumption that T does not execute S. Reductio ad absurdum.

For programs without dead code, edge coverage is finer than statement coverage.

Slide 12: Status of White Box Testing

Non-ideal: test sets that satisfy white box coverage criteria are not guaranteed to find faults.
Can be automated: various tools are available that measure the level of test coverage.
Well studied: several different criteria of varying fineness have been proposed, and their comparisons are well sorted out.
Widely adopted: many organizations use white box coverage measures to gauge testing effectiveness.
Experimentally borne out: Ostrand et al. and Malaiya et al. report that 85%-95% edge coverage leads to significantly fewer faults in the field.