Software technology 7. Testing (2) BSc Course Dr. Katalin Balla

Contents: testing techniques; static testing techniques; dynamic testing; black-box testing; white-box testing; testing in the agile environment

Testing techniques Testing techniques help in choosing the right way of testing and the right test cases. Basic approaches: studying the internal structure of the software without running the program (static testing); studying software behaviour while running it (dynamic testing).

Testing techniques The chosen technique helps in designing the tests and identifying the test cases.

Identifying test cases for functional testing [diagram: test cases identified by method A and by method B, shown against the specification and the program] Question: which method is better?

Identifying test cases for structural testing [diagram: test cases identified by method A and by method B, shown against the specification and the program] Question: which method is better?

Identifying test cases [diagram: program behaviours, sets S and P] Functional (black-box) testing: establishes confidence. Structural (white-box) testing: seeks faults.

Testing techniques The right testing method in each case results from adequately combining static and dynamic (functional and structural) testing. It is not easy to mix testing techniques adequately; professional knowledge is required for it.

Static testing The way of testing in which we do not run the software. Static testing methods: code review, inspection, walkthrough. All are based on human observation. It can be applied to any work product, and it is very efficient.

Static testing Basic idea: when testing the software by running it, we cannot cover the entire software, but inspecting the entire software is possible. The scope is not only to find bugs, but also to prevent their further occurrences (e.g. by process improvement).

Static testing Static analysis can be manual (inspection, walkthrough, static review) or automated (symbolic execution, syntax analyzer).

Static testing Static analysis techniques work basically in 3 ways: analysing the internal structure of the program; checking program completeness and consistency based on predefined rules; comparing the program to its specification or its documentation.

Static code analysis Analysis of the source code in order to understand how the software operates. Criteria for correctness are defined. There are several analysis techniques, some supported by tools: dangerous-looking elements are searched for; formal methods describe program semantics by mathematical means.

Static analysis Static analysis applied in practice: it helps, like a debugger, to identify run-time errors. The result of static analysis is not unambiguous: in fact, no method is able to predict whether there will be bugs while running the software.

Walkthrough Hints: Divide the code into small parts; each part should be connected to one goal. The meaning and length of the parts depend on the program; one part should execute something to which a goal can be associated. Identify / understand the meaning of each variable. Look for common bugs, e.g. the loop counter: index < or <= ? for (index = 0; index <= MAX_COUNT; index++) { array[index] = j; } Identify the inputs of the walkthrough, choose inputs for the code / parts of the code, and walk through the code. Think the way a computer works! Source: Find the Bug: A Book of Incorrect Programs, by Adam Barr. Addison-Wesley Professional, October 06, 2004. ISBN: 0-321-22391-8

Inspection (*) Structured peer review: "... a formal evaluation technique in which software requirements, design, or code are examined in detail by a group other than the author to detect faults, violations of development standards, and other problems" (IEEE Std. 729). Initially developed at IBM by M.E. Fagan; enhanced, e.g., by: Gilb and Graham, Software Inspection, Addison-Wesley, 1993; Knight and Myers, An improved inspection technique, Communications of the ACM, Nov. 1993; Genuchten et al., Supporting inspections with an electronic meeting system, JMIS, vol. 14, no. 3, 1998. Objective: not just to find defects, but also to find defects earlier in the life cycle and to remove their causes from the process. (*: Rob Kusters, Jos Trienekens: Software Requirements Management. Course material. TUE.)

Inspection Fagan's basic model for inspection was originally developed for IBM. The important point is that it is structured. It has been refined by many people, in many ways.

Inspection An important element of Fagan inspection is defining the roles: Moderator: schedules the inspection, determines its length, monitors / manages the meeting. Reader: "guides" the inspection by reading aloud, line by line. Author: provides an overview, answers questions, performs rework. Recorder: documents the result of the inspection. Inspectors: prepare and participate in the inspection; often specialised inspectors focus on a single aspect.

Inspection - experiences Might reduce testing costs by up to 20-30% of development cost. Unit testing and integration testing might not be needed; only the system test is executed, e.g. inspection of system design against requirements, inspection of code against system design, etc. (few have the courage for this). The team can learn to avoid the errors found in the inspection. Fagan (IBM report): 80-90% of the bugs can be found by inspection; effort can be decreased by 25%.

Testing techniques Dynamic testing supposes running the system / module. It is a trial run of the system, in a test environment, using test data. It can be: based on functional requirements: functional / black-box testing, where test cases are identified based on the specification; based on the internal structure of the software: structural / white-box testing, where test cases are identified based on the code.

Black-box testing Typical test design techniques: boundary value analysis, equivalence partitioning, decision tables, state transition testing, use case testing.

Boundary value analysis A black-box test design technique in which test cases are designed based on boundary values.

Boundary value testing Y = F(x1, x2). When the function F is implemented as a program, the input variables x1 and x2 will have some (possibly unstated) boundaries: a ≤ x1 ≤ b, c ≤ x2 ≤ d.

Input domain of a function of two variables [figure: the rectangle a ≤ x1 ≤ b, c ≤ x2 ≤ d is the set of legitimate inputs for function F]

Basic ideas of boundary value testing (1) Focus on the boundary of the input space to identify test cases. Reasoning: experience shows that faults often appear at the boundary of the input domain. E.g. a loop condition may test for < when it should test for ≤, and counters are often off by one. Test cases are generated by using input variable values at: their minimum (min), just above the minimum (min+), a nominal value (nom), just below the maximum (max-), the maximum value (max).

Basic ideas of boundary value testing (2) Single fault assumption: failures are only rarely the result of the simultaneous occurrence of two (or more) faults. Boundary value analysis test cases are obtained by holding the values of all but one variable at their nominal values, and letting that variable assume its extreme values.

Generating test cases using boundary value analysis [figure: boundary value analysis test cases for a function of two variables] The test cases have the form <x1 nominal, x2 extreme value> and <x1 extreme value, x2 nominal>.

Generalizing boundary value analysis In two ways: By increasing the number of variables: if we have a function of n variables, we hold all but one at their nominal values and let the remaining variable assume the min, min+, nom, max-, max values. Thus, for a function of n variables, boundary value analysis will generate 4n+1 test cases. By kinds of ranges: this depends on the nature of the variables themselves (in some cases boundaries are given, in other cases we have to set them).
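
To make the 4n+1 rule concrete, the following minimal sketch (Python; the variable bounds and the mid-range choice of the nominal value are illustrative assumptions, not part of the lecture) generates boundary value analysis test cases:

    from typing import Dict, List, Tuple

    def bva_test_cases(bounds: Dict[str, Tuple[int, int]]) -> List[Dict[str, int]]:
        # Hold every variable at its nominal value; let one variable at a time
        # take min, min+, max- and max. This yields 4n+1 cases for n variables.
        nominal = {v: (lo + hi) // 2 for v, (lo, hi) in bounds.items()}
        cases = [dict(nominal)]                      # the all-nominal case
        for var, (lo, hi) in bounds.items():
            for value in (lo, lo + 1, hi - 1, hi):   # min, min+, max-, max
                case = dict(nominal)
                case[var] = value
                cases.append(case)
        return cases

    # Two variables with a <= x1 <= b and c <= x2 <= d: 4*2 + 1 = 9 test cases
    print(bva_test_cases({"x1": (1, 100), "x2": (1, 200)}))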

Generalizing boundary value analysis Generalizing ranges, e.g. months: [1,12]. If the variable has discrete, bounded values, it is easy to determine min, min+, nom, max-, max. When no explicit bounds are present, the tester has to create artificial bounds (e.g. length of triangle sides: [1,200] or [1, the largest representable integer]). Boundary value analysis does not make much sense for Boolean variables (the extremes are True and False, but what about the others?); such cases need tests based on decision tables.

Robustness testing Extending boundary value testing: besides the 5 former values we also test the values just above the maximum (max+) and just below the minimum (min-). [figure: robustness testing test cases for a function of two variables]

Worst case testing We reject the single-fault assumption (that failures are rarely caused by more than one fault). We are interested in what happens when more than one variable has an extreme value. For each variable, we start with the five-element set (min, min+, nom, max-, max), and we take the Cartesian product of these sets to generate test cases. (There will be 5^n test cases.)
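
Under the same assumptions as the previous sketch, worst case testing simply takes the Cartesian product of the five-element value sets (a hedged illustration, not the lecture's own code):

    from itertools import product

    def worst_case_test_cases(bounds):
        # Every combination of the five boundary values of every variable:
        # 5**n test cases for n variables; nominal is assumed to be mid-range.
        five = {v: (lo, lo + 1, (lo + hi) // 2, hi - 1, hi)
                for v, (lo, hi) in bounds.items()}
        names = list(five)
        return [dict(zip(names, combo))
                for combo in product(*(five[n] for n in names))]

    print(len(worst_case_test_cases({"x1": (1, 100), "x2": (1, 200)})))  # 5**2 = 25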

Robust worst case testing [figure: robust worst case test cases for a function of two variables]

Limitations of boundary value testing It does not take into account the type of the function or the meaning of the variables. Test cases generated by boundary value analysis are rudimentary: they are based on little information about the program. It can be efficiently applied when the program is a function of several independent variables that represent bounded physical quantities.

Boundary value testing - summary The most rudimentary form of testing. It assumes independent variables and physical quantities, and relies on the single fault assumption. It gives good results if used together with robustness testing and worst case testing. Robustness testing can be well used for testing internal variables. Well usable for error message testing.

Equivalence partitioning A black-box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle, test cases are designed to cover each partition at least once.

Defining equivalence classes Why do we use it? We would like to ensure that testing is complete, and we would like to eliminate redundant test cases. Equivalence classes form a partition of a set, where partition refers to a collection of mutually disjoint subsets whose union is the entire set. Completeness of testing can be ensured by choosing one input value from each equivalence class; disjointness ensures a form of non-redundancy.

Equivalence partitions Test cases are generated by choosing one input value from each equivalence class. [figure: a set partitioned into equivalence classes A1-A6]

Using equivalence classes in testing Key: the equivalence classes (the equivalence relation) have to be chosen wisely! E.g. triangle problem: we can define equivalence classes based on the output (there is no use in testing, e.g., the 1,1,1, 5,5,5 and 100,100,100 cases separately). We guess equivalence classes; there are many methods described in the literature for defining them.
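
As an illustration of output-based equivalence classes for the triangle problem, here is a minimal sketch (Python; the classifier and the chosen representatives are assumptions made for illustration). One representative per class is enough:

    def triangle_type(a: int, b: int, c: int) -> str:
        # Each distinct output is one equivalence class of the input domain.
        if not (a < b + c and b < a + c and c < a + b):
            return "not a triangle"
        if a == b == c:
            return "equilateral"
        if a == b or b == c or a == c:
            return "isosceles"
        return "scalene"

    # One representative test case per output equivalence class:
    representatives = {
        "equilateral":    (5, 5, 5),     # 1,1,1 or 100,100,100 would add nothing new
        "isosceles":      (5, 5, 8),
        "scalene":        (3, 4, 5),
        "not a triangle": (1, 2, 10),
    }
    for expected, sides in representatives.items():
        assert triangle_type(*sides) == expected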

Testing based on decision tables A black-box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table. The most rigorous of all functional testing methods: decision tables enforce logical rigor. Decision tables have been used since the early 1960s to represent and analyze complex logical relationships. They are ideal for describing situations in which a number of combinations of actions are taken under varying sets of conditions.

Decision table - example (c: condition, a: action)

              Rule 1  Rule 2  Rule 3  Rule 4  Rule 5  Rule 6
  c1 (cond)     T       T       T       F       F       F
  c2            T       T       F       T       T       F
  c3            T       F       -       T       F       -

  Actions a1-a4 are marked with an X under the rules in which they are taken
  (a1 under three rules; a2, a3 and a4 under two rules each).

Developing decision tables Conditions: inputs (sometimes conditions end up referring to equivalence classes of inputs). Actions: outputs (sometimes actions refer to major functional processing portions of the item tested). The rules can be interpreted as test cases. Decision tables can mechanically be forced to be complete, therefore we know we have a comprehensive set of test cases.
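
A minimal sketch of reading rules as test cases (Python; the three-condition table below is an invented example, not the table from the slide). Don't-care entries are expanded so that every concrete input combination is covered:

    from itertools import product

    # Each rule maps a combination of condition values to the expected action(s).
    # None stands for a don't-care entry.
    rules = [
        ({"c1": True,  "c2": True,  "c3": True},  {"a1"}),
        ({"c1": True,  "c2": True,  "c3": False}, {"a2"}),
        ({"c1": True,  "c2": False, "c3": None},  {"a3"}),
        ({"c1": False, "c2": None,  "c3": None},  {"a4"}),
    ]

    def rule_to_test_cases(conditions, expected):
        # Expand don't-care entries to both truth values.
        free = [c for c, v in conditions.items() if v is None]
        for values in product([True, False], repeat=len(free)):
            case = dict(conditions)
            case.update(zip(free, values))
            yield case, expected

    for conditions, expected in rules:
        for case, actions in rule_to_test_cases(conditions, expected):
            print(case, "->", actions)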

Developing decision tables If redundant entries (rules) have the same action associated, there is no problem. In some cases redundant entries (the same rule) have different associated actions; this type of decision table is inconsistent, as it is impossible to decide which rule to apply. Such cases have to be identified and eliminated by testers (look for don't-care entries).

Using decision tables Recommended if: if-then-else logic exists; there are logical relationships between inputs; there are calculations involving subsets of the input variables; a cause-and-effect relationship exists between inputs and outputs; cyclomatic complexity is high. Decision tables are not easily used when many variables exist; iteration helps!

State transition testing A system may exhibit different responses depending on its current condition and its previous history. The system can be represented by a state transition table or diagram, which is usable in testing.
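
A minimal sketch of deriving tests from a state transition table (Python; the two-state login example, its states and events are invented for illustration):

    # State transition table: (state, event) -> next state
    transitions = {
        ("logged_out", "login_ok"):   "logged_in",
        ("logged_out", "login_fail"): "logged_out",
        ("logged_in",  "logout"):     "logged_out",
        ("logged_in",  "timeout"):    "logged_out",
    }

    def run(events, start="logged_out"):
        # Replay a sequence of events; an invalid transition raises a KeyError.
        state = start
        for event in events:
            state = transitions[(state, event)]
        return state

    # Each valid transition is exercised by at least one test sequence:
    assert run(["login_fail"]) == "logged_out"
    assert run(["login_ok", "logout"]) == "logged_out"
    assert run(["login_ok", "timeout"]) == "logged_out"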

Use case testing Tests can be derived from use cases.

Other testing techniques Basically functional; almost all are experience-based: special value testing, random testing, exploratory testing, error guessing, fault attack, checklist-based testing.

Special value testing Widely used; the most intuitive and least uniform technique. Also called ad-hoc testing. It might be subjective; however, it can provide valuable information. The tester identifies problematic parts by using own experience. NextDate example: test cases for February 28, 29 and leap years.

Random testing Idea: inputs are generated using a random number generator (rather than min, min+, nom, max-, max). Question: how many randomly generated test cases are enough? Answer: define test coverage metrics. E.g. triangle problem: we generate test cases until all possible triangle types appear among them.
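
A minimal sketch of this idea (Python; the classifier, the side range and the seed are illustrative assumptions): random triples are generated until every triangle type has been observed.

    import random

    def triangle_type(a, b, c):
        # Same output classes as in the equivalence partitioning example.
        if not (a < b + c and b < a + c and c < a + b):
            return "not a triangle"
        return {1: "equilateral", 2: "isosceles", 3: "scalene"}[len({a, b, c})]

    def random_tests_until_covered(max_side=200, seed=1):
        # Keep generating random side triples until every type has appeared.
        rng = random.Random(seed)
        goal = {"equilateral", "isosceles", "scalene", "not a triangle"}
        cases, seen = [], set()
        while seen != goal:
            case = tuple(rng.randint(1, max_side) for _ in range(3))
            cases.append(case)
            seen.add(triangle_type(*case))
        return cases

    print(len(random_tests_until_covered()), "random test cases were needed")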

Experience based testing Error guessing Error guessing is a technique used to anticipate the occurrence of defects based on the tester's knowledge, including: how the application has worked in the past; what types of mistakes the developers tend to make; defects that have been found in other applications. A structured approach to the error guessing technique is to generate a list of possible defects and design tests that attack those defects; this approach is called fault attack. These defect and failure lists can be built based on experience, defect and failure data, and common knowledge about why software fails.

Experience based testing Exploratory testing Includes the nearly concurrent activities of designing tests, executing tests, logging test results, evaluating the results, and learning about the test object. The tester uses a test charter containing test objectives to guide the testing and performs the testing within a defined time-box. Exploratory testing is most useful when there are few or inadequate specifications, severe time pressure, or in order to augment or complement other, more formal, testing techniques. It can serve as a verification of the test process by helping to ensure that the most serious defects are found.

Experience based testing Checklist-based testing In checklist-based testing the tester creates a list of tasks or scenarios to test and then designs tests to exercise them. Such checklists can be built based on experience, defect and failure data, knowledge about what is important for the user, and an understanding of why and how software fails.

Functional testing - summary Program: a mathematical function that maps its inputs to outputs. Boundary value testing: boundary value analysis, robustness testing, worst case testing, robust worst case testing. Equivalence class based testing: defining equivalence classes to reduce the number of test cases (weak normal, weak robust, strong normal, strong robust). Decision table based testing: takes into account the logical dependences existing among the variables.

Effort of functional testing The functional methods studied vary both in the number of test cases generated and in the effort needed to develop these test cases. [chart: the number of test cases decreases from boundary value through equivalence classes to decision tables as test case sophistication increases]

Effort of functional testing [chart: the effort to develop test cases increases from boundary value through equivalence classes to decision tables as test case sophistication increases]

Testing functional and non-functional requirements Functional requirements of a system specify something (functions) that the system should do. They may be described in work products such as a requirements specification, use cases, or functional specifications, or they may be undocumented. The functions are what the system does. Functional tests are based on functions and features (described in documents or understood by the testers) and their interoperability with specific systems, and may be performed at all test levels (e.g., tests for components may be based on a component specification).

Testing functional and non-functional requirements Non-functional requirements of a system specify criteria that can be used to verify the operation of a system, rather than specific behaviors. The term non-functional testing describes the tests required to measure characteristics of systems and software that can be quantified on a varying scale, e.g. the response times of a system using performance testing. These tests can be referenced to a quality model such as the one defined in ISO/IEC 25010. It is the testing of how the system works. Non-functional testing considers the external behavior of the software and, in most cases, uses black-box test design techniques to accomplish that, and may be performed at all test levels. Non-functional testing includes, but is not limited to, performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing and portability testing.

Structural testing Structural testing (also known as white-box testing) derives tests from knowledge of the system's internal structure or implementation. Internal structure may include knowledge of code, architecture, work flows and data flows within the system. Structural testing helps measure the thoroughness of testing through assessment of coverage. Coverage is the extent to which a structure has been exercised by a test suite, expressed as a percentage of the items covered. If coverage is not 100%, then more tests may be designed to exercise the items that were missed, to increase coverage.

Structural testing At component level, coverage is based on the percentage of code that has been tested by a test suite and is called code coverage. At this level, tools can be used to measure the code coverage of statements or decisions within the code. Structural testing may also be based on the architecture of the system, such as a calling hierarchy. Structural testing approaches can be applied at system, system integration or acceptance testing levels (e.g., to business models or menu structures).

Structural testing It is based on the source code, relying on the absolute basics. It uses precise definitions and mathematical analysis, and has been in use since the 1970s. It makes absolutely precise measurements possible.

Structural testing Statement testing and coverage Statement testing exercises the executable statements in the code. Coverage is determined by the number of statements executed by the tests divided by the total number of executable statements in the code under test. Decision testing and coverage Decision testing exercises the decisions in the code and tests the code that is executed based on the decision outcome. To do this, the tests follow the two control flows that occur from a decision point (i.e., the path for the True outcome and the path for the False outcome). Coverage is determined by the number of decision outcomes executed by the tests divided by the total number of decision outcomes in the code under test.

The value of statement and decision testing 100% statement coverage means executing all executable statements in the code; this ensures there is no untested code in the system. 100% decision coverage executes all control flow branches, which includes testing the True outcome of an If statement and also the False outcome, even when there is no explicit Else statement. In the case of a loop construct, statement coverage requires that all statements within the loop are executed, but decision coverage requires both that the loop is executed and that it is bypassed. Decision coverage helps to find situations where there is a dependency on code being executed when it could be bypassed based on a decision outcome. 100% decision coverage guarantees 100% statement coverage.
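
A small illustration of the difference (Python; the function is invented): one test achieves 100% statement coverage here, but 100% decision coverage also needs a test in which the loop is bypassed.

    def normalize(values, cap):
        # No explicit else branch, and the loop may run zero times.
        result = []
        for v in values:          # decision: loop entered vs. bypassed
            if v > cap:           # decision: True vs. implicit False outcome
                v = cap
            result.append(v)
        return result

    # 100% statement coverage with a single test case:
    assert normalize([5, 1], cap=3) == [3, 1]

    # 100% decision coverage additionally requires the zero-iteration case:
    assert normalize([], cap=3) == []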

Path testing A program graph is assigned to the program: if i and j are nodes in the graph, there is an edge from i to j iff the statement or statement fragment assigned to node j can be executed directly after the statement or statement fragment assigned to node i. We call these graphs Control Flow Graphs (CFG).

Example

     1  Program triangle
     2  Dim a, b, c As Integer
     3  Dim IsATriangle As Boolean
        ' Step 1: get input
     4  Output("Enter 3 variables")
     5  Input(a, b, c)
     6  Output("Side A:", a)
     7  Output("Side B:", b)
     8  Output("Side C:", c)
        ' Step 2: is it a triangle?
     9  If (a < b+c) And (b < a+c) And (c < a+b)
    10    Then IsATriangle = True
    11    Else IsATriangle = False
    12  EndIf
        ' Step 3: determine triangle type
    13  If IsATriangle
    14    Then If (a = b) And (b = c)
    15      Then Output("Equilateral")
    16      Else If (a ≠ b) And (b ≠ c) And (a ≠ c)
    17        Then Output("Scalene")
    18        Else Output("Isosceles")
    19      EndIf
    20    EndIf
    21  Else Output("Not a triangle")
    22  EndIf
    23  End triangle

[figure: program graph with nodes 4-23 corresponding to the numbered statements]

DD-Path (decision-to-decision path) Miller, 1977: a sequence of statements with no branches; if a statement is erroneous, the following ones will not be executed correctly. DD-path: a path having different starting and end nodes, whose internal nodes have indeg = 1 and outdeg = 1; the initial node is 2-connected to every other node of the path, and there are no 1-connected or 3-connected nodes. A path of length 0 (a single node, no edges) is also a DD-path.

DD-Path (decision-to-decision path) A DD-path is a chain in a program graph such that: Case 1: it consists of a single node with indeg = 0. Case 2: it consists of a single node with outdeg = 0. Case 3: it consists of a single node with indeg ≥ 2 or outdeg ≥ 2. Case 4: it consists of a single node with indeg = 1 and outdeg = 1. Case 5: it is a maximal chain of length ≥ 1.

DD-Path graph The DD-path graph of a program is a directed graph in which the nodes are the DD-paths of the program graph and the edges represent the control flow among the successive DD-paths. The DD-path graph is a condensed graph.

DD-Path (decision-to-decision path) DD-paths of the triangle program:

  Program graph nodes   DD-path   Definition case
  4                     first     1
  5-8                   A         5
  9                     B         3
  10                    C         4
  11                    D         4
  12                    E         3
  13                    F         3
  14                    H         3
  15                    I         4
  16                    J         3
  17                    K         4
  18                    L         4
  19                    M         3
  20                    N         3
  21                    G         4
  22                    O         3
  23                    last      2

[figures: the program graph of the triangle program and the corresponding DD-path graph]

Change-related testing Whenever a change occurs in a system: Regression testing: we show that everything that was functioning in the previous build is still functioning. Experience: every fifth modification injects a new bug. Progression testing: we assume that integration testing has been performed successfully, and we test only the new functions.

Maintenance testing Planned maintenance: perfective maintenance, to add or improve non-functional (or functional) requirements; preventive maintenance, to prevent future problems; adaptive maintenance, to adjust the software to a new environment; corrective maintenance, to fix known defects which have not yet caused a failure. Unplanned maintenance: corrective maintenance, i.e. emergency fixes of defects resulting in a failure.

Test management The planning, estimating, monitoring and control of test activities, typically carried out by a test manager. It is important to analyse and understand test-related data. See: the fundamental testing process (previous lecture). See: measurement and analysis (lecture 10).

Benefits of test management One can understand, for instance, the efficiency of the testing performed.

Efficiency of testing Defect detection percentage (DDP): DDP = (number of errors found in this test / total number of errors, including the errors to be found later) x 100. "This test" can be: a test phase; a test related to one function or subsystem; all tests related to a system.

Efficiency of testing Estimating the number of errors still in the system: we know that in this phase DDP is usually, e.g., 66%. From (found in this phase / (found in this phase + X)) x 100 = 66 we get X = (found in this phase x 100) / 66 - found in this phase. With 20 errors found: X1 = (20 x 100) / 66 - 20 ≈ 10; at 50% DDP: X2 = (20 x 100) / 50 - 20 = 20; at 80% DDP: X3 = (20 x 100) / 80 - 20 = 5. (Source: Dorothy Graham: Measuring the value of testing. Escom, 2 April 2001, London, www.grove.co.uk)
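
The same estimate as a small Python sketch (the 66%, 50% and 80% DDP values and the 20 found defects are just the slide's example numbers):

    def estimated_remaining_defects(found: int, ddp_percent: float) -> float:
        # Invert DDP = found / (found + remaining) * 100 to estimate what is left.
        return found * 100.0 / ddp_percent - found

    print(estimated_remaining_defects(20, 66))  # about 10
    print(estimated_remaining_defects(20, 50))  # 20
    print(estimated_remaining_defects(20, 80))  # 5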

Testing in the agile environment In Agile development, small iterations of design, build and test happen on a continuous basis, supported by on-going planning. So test activities also happen on an iterative, continuous basis within this development approach.

Agile testing Testing practice for a project using Agile software development methodologies, incorporating techniques and methods such as extreme programming (XP), treating development as the customer of testing and emphasizing the test-first design paradigm.

Testing in the agile environment Agile testing involves testing as early as possible in the software development life cycle. It demands high customer involvement and testing the code as soon as it becomes available. The code should be stable enough to take it to system testing. Extensive regression testing can be done to make sure that the bugs are fixed and tested. Communication between the teams makes agile testing a success!

Some Agile testing methods Agile testing quadrants, test driven development, continuous integration.

The Agile testing quadrants [figure: the four Agile testing quadrants] Source: https://www.guru99.com/agile-testing-a-beginner-s-guide.html

The Agile testing quadrants Agile Quadrant I: internal code quality is the main focus; it consists of test cases which are technology-driven and are implemented to support the team. It includes: 1. unit tests; 2. component tests. Agile Quadrant II: it contains test cases that are business-driven and are implemented to support the team. This quadrant focuses on the requirements. The kinds of tests performed here are: 1. testing of examples of possible scenarios and workflows; 2. testing of user experience, such as prototypes; 3. pair testing.

The Agile testing quadrants Agile Quadrant III: this quadrant provides feedback to quadrants one and two. The test cases can be used as the basis for automation testing. In this quadrant, many rounds of iteration reviews are carried out, which builds confidence in the product. The kinds of testing done here are: 1. usability testing; 2. exploratory testing; 3. pair testing with customers; 4. collaborative testing; 5. user acceptance testing. Agile Quadrant IV: this quadrant concentrates on non-functional requirements such as performance, security, stability, etc. With the help of this quadrant, the application is made to deliver the non-functional qualities and expected value: 1. non-functional tests such as stress and performance testing; 2. security testing with respect to authentication and hacking; 3. infrastructure testing; 4. data migration testing; 5. scalability testing; 6. load testing.

Test Driven Development TDD is generally used in extreme programming and agile development. The programmers write the tests first; the tests are unsuccessful at first. The code is then written gradually to make the tests pass, and the tests are supplemented with newer elements.
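
A minimal TDD-style sketch (Python's built-in unittest; the add_vat function and the 27% default rate are invented for illustration): the test is written first and fails, then just enough code is written to make it pass.

    import unittest

    # Step 2 (written after the test below had failed): just enough code to pass.
    def add_vat(net: float, rate: float = 0.27) -> float:
        return round(net * (1 + rate), 2)

    # Step 1: the test is written first and initially fails (add_vat does not exist yet).
    class AddVatTest(unittest.TestCase):
        def test_adds_default_rate(self):
            self.assertEqual(add_vat(100), 127.0)

        def test_adds_custom_rate(self):
            self.assertEqual(add_vat(200, rate=0.05), 210.0)

    if __name__ == "__main__":
        unittest.main()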

Continuous integration Delivery of a product increment requires reliable, working, integrated software at the end of every sprint. Continuous integration addresses this challenge by merging all changes made to the software and integrating all changed components regularly, at least once a day. Configuration management, compilation, software build, deployment, and testing are wrapped into a single, automated, repeatable process. Since developers integrate their work constantly, build constantly, and test constantly, defects in code are detected more quickly.

Continuous integration Following the developers' coding, debugging, and check-in of code into a shared source code repository, a continuous integration process consists of the following automated activities: static code analysis: executing static code analysis and reporting results; compile: compiling and linking the code, generating the executable files; unit test: executing the unit tests, checking code coverage and reporting test results; deploy: installing the build into a test environment; integration test: executing the integration tests and reporting results; report (dashboard): posting the status of all these activities to a publicly visible location or e-mailing the status to the team. A toy illustration of chaining these stages follows below.
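
A hedged Python sketch of a toy pipeline runner; the stage commands are placeholders, not the configuration of any real CI tool.

    import subprocess
    import sys

    # Placeholder commands; a real pipeline would call the project's own
    # static analyser, compiler, test runners and deployment scripts.
    STAGES = [
        ("static analysis",  ["echo", "running static code analysis"]),
        ("compile",          ["echo", "compiling and linking"]),
        ("unit test",        ["echo", "running unit tests with coverage"]),
        ("deploy",           ["echo", "installing the build into the test environment"]),
        ("integration test", ["echo", "running integration tests"]),
    ]

    def run_pipeline() -> bool:
        # Run the stages in order and stop at the first failure.
        for name, cmd in STAGES:
            result = subprocess.run(cmd)
            print(f"[{name}] exit code {result.returncode}")
            if result.returncode != 0:
                return False
        return True

    if __name__ == "__main__":
        ok = run_pipeline()
        print("dashboard status:", "green" if ok else "red")   # the report stage
        sys.exit(0 if ok else 1)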

Continuous integration An automated build and test process takes place on a daily basis and detects integration errors early and quickly. Continuous integration allows Agile testers to run automated tests regularly, in some cases as part of the continuous integration process itself, and send quick feedback to the team on the quality of the code. These test results are visible to all team members, especially when automated reports are integrated into the process.

Agile tester Works with the team right from the beginning and stays with the team during the entire project. Helps the customer define acceptance criteria. Automates and implements test cases. Testers are involved in release planning and especially add value in the following activities: defining testable user stories, including acceptance criteria; participating in project and quality risk analyses; estimating the testing effort associated with the user stories; defining the necessary test levels; planning the testing for the release.

What we talked about Testing techniques; static testing techniques; dynamic testing; black-box testing; white-box testing; testing in the agile environment.