The Importance of Test


Software Testing A mistake in coding is called an error; an error found by a tester is called a defect; a defect accepted by the development team is called a bug; if the product does not meet the requirements, it is a failure.

The Importance of Test There are hundreds of stories about failures of computer systems that have been attributed to errors in software. There are many reasons why systems fail, but the issue that stands out the most is the lack of adequate testing. Example: Pepsi's $32 billion error (Philippines, 1992). Due to a software error, 800,000 bottle caps were produced with the winning number 349 instead of one; this was equivalent to $32 billion in prize money instead of the intended $40,000.

Test Planning Test planning should be done throughout the development cycle, especially early in the development cycle. As soon as the customer requirements analysis has been completed, the test team should start writing black-box test cases against that requirements document.

Plan for Testing
- Define units vs. non-units for testing.
- Determine what types of testing will be performed.
- Determine extent: do not just test until time expires; prioritize so that the important tests are definitely performed.
- Document: is the individual's personal document set included? How and when to incorporate all types of testing? How and when to incorporate it in formal documents? How and when to use tools and test utilities?
- Determine input sources: is an individual engineer responsible for some (units)? How and when are they inspected by Quality Assurance? How and when are they designed and performed by third parties?
- Estimate resources: use historical data if available.
- Identify metrics to be collected: define, gather, and use, e.g., time, defect count, type, and source.

Stopping Criteria for Testing
- When the tester has not been able to find another defect in 5 (10? 30? 100?) minutes of testing.
- When all nominal, boundary, and out-of-bounds test examples show no defect.
- When a given checklist of test types has been completed.
- After completing a series of targeted coverage criteria (e.g., branch coverage for unit testing).
- When testing runs out of its scheduled time.

How to select input values? Scenario I
- Random values: for each input parameter we randomly select the values.
- Tester experience: for each input we use our experience to select relevant values to test.
- Domain knowledge: we use requirements information or domain knowledge to identify relevant values for inputs.

How to select input values? Scenario II
- Equivalence classes (for black-box testing): we subdivide the input domain into a small number of sub-domains. The equivalence classes are created assuming that the program exhibits the same behavior on all elements of a class, so a few values from each class can be used for our testing.
- Boundary values (for black-box testing): a test selection technique that targets faults in applications at the boundaries of equivalence classes. Experience indicates that programmers make mistakes in processing values at and near the boundaries of equivalence classes.
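As a sketch, the two techniques above can be applied to a hypothetical method that accepts integers in the range 1..100; the range and the method are illustrative assumptions, not from the slides:

```java
public class InputSelection {
    // Hypothetical method under test: accepts values 1..100 inclusive.
    public static boolean isValid(int x) {
        return x >= 1 && x <= 100;
    }

    // Equivalence classes: below the range (invalid), inside it (valid),
    // above it (invalid). One representative value is enough per class.
    public static int[] classRepresentatives() {
        return new int[] { -5, 50, 200 };
    }

    // Boundary values: test at and on either side of each class boundary,
    // where off-by-one mistakes tend to hide.
    public static int[] boundaryValues() {
        return new int[] { 0, 1, 2, 99, 100, 101 };
    }
}
```

Running isValid over classRepresentatives() exercises each sub-domain once; running it over boundaryValues() targets the edges at 1 and 100.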

Testing Goals Based on Test Process: Testing Levels (Beizer)
- Level 0: There is no difference between testing and debugging.
- Level 1: The purpose of testing is to show correctness (that the software works).
- Level 2: The purpose of testing is to show that the software doesn't work (the purpose is to show failures).
- Level 3: Testing shows the presence of failures, not their absence. If we use software, we expose ourselves to some risk. The risk may be small and unimportant, or it may be great and the consequences catastrophic, but risk is always there. The purpose of testing is to reduce the risk of using the software; both testers and developers work together to reduce risk.
- Level 4: The purpose of testing is to improve the ability of developers to produce high-quality software. Level 4 testing shows that testers and developers are on the same team.

Debugging and Testing Debugging is done in the development phase by the developer; in the debugging phase, identified bugs are fixed. Testing is done in the testing phase by the tester; testing includes locating and identifying bugs.

Software Bug A software bug is an error, flaw, mistake, failure, or fault in a computer program or system. Bug is the tester's terminology. A software bug produces an incorrect or unexpected result, or causes the program to behave in unintended ways.

Software Fault A static (physical) defect*, imperfection, or flaw in the software. Examples: a short between wires, a break in a transistor, an infinite program loop. A fault means that there is a problem within the code of a program which causes it to behave incorrectly. *defect: a mismatch with the requirements

Software Error An error is an incorrect internal state that is the manifestation of some fault; in other words, an error is a deviation from correctness or accuracy. Example: suppose a line is physically shorted to 0 (there is a fault). As long as the value on the line is supposed to be 0, there is no error. Errors are usually associated with incorrect values in the system state.

Software Failure A failure means that the program has not performed as expected and the actual performance has not met the minimum standards for success: external, incorrect behavior with respect to the requirements or another description of the expected behavior. Example: suppose a circuit controls a lamp (0 = turn off, 1 = turn on) and the output is physically shorted to 0 (there is a fault). As long as the user wants the lamp off, there is no failure.

Example: Failure & Fault & Error Consider a medical doctor making a diagnosis for a patient. The patient enters the doctor's office with a list of failures (that is, symptoms). The doctor then must discover the fault, or root cause, of the symptoms. To aid in the diagnosis, a doctor may order tests that look for anomalous internal conditions. In our terminology, these anomalous internal conditions correspond to errors.

Cause-and-Effect Relationship Faults can result in errors; errors can lead to system failures. Errors are the effect of faults; failures are the effect of errors. A bug in a program is a fault; a possible incorrect value caused by this bug is an error; a possible crash of the operating system is a failure.

Program Example

    class numzero {
        public static int numzero (int[] x) {
            // if x == null throw NullPointerException
            // else return the number of occurrences of 0 in x
            int count = 0;
            for (int i = 1; i < x.length; i++) {
                if (x[i] == 0) {
                    count++;
                }
            }
            return count;
        }
    }

The Fault, Error, and Failure in the Example The fault is that the program starts looking for zeroes at index 1 instead of index 0. For example, numzero([2, 7, 0]) correctly returns 1, while numzero([0, 7, 2]) incorrectly returns 0. In both of these cases the fault is executed, and both cases result in an error (as explained on the next slide). But the first case returns the correct value; only the second case results in a failure.
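The two cases can be reproduced with a small driver; the method below is copied from the slide with the fault left intact:

```java
public class NumzeroDemo {
    // Faulty method from the slide: the loop starts at index 1
    // instead of 0, so a zero in the first position is missed.
    public static int numzero(int[] x) {
        int count = 0;
        for (int i = 1; i < x.length; i++) {
            if (x[i] == 0) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(numzero(new int[] {2, 7, 0})); // prints 1 (correct)
        System.out.println(numzero(new int[] {0, 7, 2})); // prints 0 (a failure: should be 1)
    }
}
```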

Error States in the Example To understand the error states, we need to identify the state of the program. The state for numzero consists of values for the variables x, count, i, and the program counter (PC). For the first case, the state at the if statement on the first iteration is (x = [2, 7, 0], count = 0, i = 1, PC = if). This state is in error because the value of i should be zero on the first iteration. But because the value of count is correct, the error state does not propagate to the output, and the software does not fail. In other words, a state is in error if it is not the expected state, even if all of the values in the state are acceptable.

Error States in the Example In the second case the corresponding (error) state is (x = [0, 7, 2], count = 0, i = 1, PC = if). In this case, the error propagates to the variable count and is present in the return value of the method. Hence a failure results.

Distinguish Testing from Debugging The definitions of fault and failure allow us to distinguish testing from debugging. The difference is that debugging is conducted by the programmer, who fixes the errors during the debugging phase. A tester never fixes the errors, but rather finds them and returns them to the programmer.

Testing versus Debugging Testing is carried out by a team of testers in order to find the defects* in the software. Testers run their tests on the piece of software, and if they encounter any defect, they report it to the development team. Testers also have to report at what point the defect occurred and what happened due to the occurrence of that defect. All this information is used by the development team to DEBUG the defect. After finding the bug, the developer modifies that portion of code and then rechecks whether the defect has been removed. After fixing the bug, developers send the software back to the testers. *actual results don't match expected results

What is Software Quality? According to the IEEE, software quality is: the degree to which a system, component, or process meets specified requirements; and the degree to which a system, component, or process meets customer or user needs or expectations.

Software Quality Factors

Software Quality Assurance Verification: are we building the product right? Performed at the end of a phase to ensure that the requirements established during the previous phase have been met. Validation: are we building the right product? Performed at the end of the development process to ensure compliance with product requirements.

Verification & Validation Verification: The process of determining whether the products of a given phase of the software development process fulfill the requirements established during the previous phase. Validation: The process of evaluating software at the end of software development to ensure compliance with intended usage.

Difference Between Verification & Validation - I Verification is a preventive mechanism to detect possible failures before testing begins. It involves reviews, meetings, and evaluating documents, plans, code, inspections, specifications, etc. Validation occurs after verification, and it is the actual testing to find defects against the functionality or the specifications.

The Difference between Verification & Validation - II Verification is usually a more technical activity that uses knowledge about the individual software artifacts, requirements, and specifications. Validation usually depends on domain knowledge, that is, knowledge of the application for which the software is written. For example, validation of software for an airplane requires knowledge from aerospace engineers and pilots.

Testing Levels

Unit Testing Activities Each testing activity is performed during the process of building an application: unit testing of individual units; interface testing and integration testing as units are combined into modules and builds; system testing and usability testing of the application on a test bed; installation testing and acceptance testing of the application on the final platform; and regression testing throughout.

Test Phases Acceptance testing checks whether the overall system is functioning as required. Unit testing is basically the testing of a single function, procedure, or class. Integration testing checks that units tested in isolation work properly when put together. System testing emphasizes ensuring that the whole system can cope with real data, monitoring system performance, and testing the system's error handling and recovery routines. Regression testing checks that the system preserves its functionality after maintenance and/or evolution tasks.

Software Testing Types Black box testing: you don't need to know the internal design in detail or have a good knowledge of the code for this test. It is mainly based on functionality, specifications, and requirements. White box testing: this test is based on detailed knowledge of the internal design and code. Tests are performed on specific code statements and coding styles.

Black- & White-box Testing

    Approach     Input determined by...   Result
    Black box    requirements             actual output compared with required output
    White box    design elements          validation of expected behavior

White-Box Testing White-box tests are based on design and implementation. If a car is built with a new design for its automatic transmission, we would be wise to use this knowledge when testing it: white-box testing is required.

Levels of Software Testing

Regression Testing: What to Retest Suppose that C is a body of already-tested code in an application A, and that A has been altered with new or changed code N.
- If C is known to depend on N, perform regression testing on C.
- If C is reliably known to be completely independent of N, there is no need to regression test C.
- Otherwise, regression test C.

Regression Testing Opacity: black- and white-box testing. Specification: any changed documentation, high-level design. Throughout all testing cycles, regression test cases are run. Regression testing is the selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements.

Unit Testing I Opacity: white-box testing. Specification: low-level design and/or code structure. Unit testing is the testing of individual hardware or software units. Using white-box testing techniques, testers (usually the developers creating the code) verify that the code does what it is intended to do at a very low structural level. The tester writes some test code that calls a method with certain parameters and ensures that the return value of the method is as expected. Unit testing is generally done within a class or a component.

Unit Testing II Both white-box and black-box methods are utilized during unit testing. White-box unit tests focus on the internal code structure, testing each program statement, every decision point, and each independent path. Black-box methods focus on testing the unit without using its internal structure: equivalence partitioning and boundary value analysis.

Unit Testing versus Post-Unit Testing Software applications consist of numerous parts; therefore the soundness of the whole depends on the soundness of the parts. Because of this dependence, we test software parts thoroughly before assembling them. This process is called unit testing.

Types of Post-Unit Testing
- Interface testing: validates functions exposed by modules.
- Integration: combinations of modules.
- System: the whole application.
- Usability: user satisfaction.
- Regression: changes did not create defects in existing code.
- Acceptance: customer agreement that the contract is satisfied.
- Installation: works as specified once installed on the required platform.
- Robustness: ability to handle anomalies.
- Performance: fast enough; uses an acceptable amount of memory.

Functional and System Testing I Opacity: black-box testing. Specification: high-level design, requirements specification. Using black-box testing techniques, testers examine the high-level design and the customer requirements specification to plan the test cases and ensure that the code does what it is intended to do.

Functional and System Testing II Functional testing involves ensuring that the functionality specified in the requirements specification works. System testing involves putting the new program in many different environments to ensure that it works in typical customer environments. Nonfunctional requirements are also tested: stress testing, performance testing, usability testing.

Stress Testing Stress testing is conducted to evaluate a system or component at or beyond the limits of its specification or requirements. If the team is developing software to run cash registers, a non-functional requirement might state that the server can handle up to 30 cash registers looking up prices simultaneously. Stress testing might occur in a room of 30 actual cash registers running automated test transactions repeatedly for 12 hours. There also might be a few more cash registers in the test lab to see if the system can exceed its stated requirements.

Performance Testing Performance testing is conducted to evaluate the compliance of a system or component with specified performance requirements. A performance requirement might state that a price lookup must complete in less than 1 second. Performance testing evaluates whether the system can look up prices in less than 1 second (even when 30 cash registers are running simultaneously).

Usability Testing Usability testing is conducted to evaluate the extent to which a user can learn to operate, prepare inputs for, and interpret outputs of a system or component.

Acceptance Testing Opacity: black-box testing. Specification: requirements specification. After functional and system testing, the product is delivered to the customer, who runs black-box acceptance tests based on their expectations of the functionality. Acceptance testing is formal testing conducted to determine whether or not a system satisfies its acceptance criteria, enabling the customer to decide whether or not to accept the system.

Acceptance Testing versus Unit Testing

Acceptance Tests
- Written by the customer and analyst.
- Written using an acceptance testing framework (also a unit testing framework) (extreme programming).
- When acceptance tests pass, stop coding: the job is done.
- The motivation of acceptance testing is demonstrating working functionality.
- Used to verify that the implementation is complete and correct.
- Used for integration, system, and regression testing.
- Used to indicate progress in the development phase (usually as a %).
- Used as a contract.
- Used for documentation (high level).
- Written before development and executed after.
- Starting point: user stories, user needs, use cases, textual requirements.

Unit Tests
- Written by developers.
- Written using a unit testing framework (extreme programming).
- When unit tests pass, write another test that fails.
- The motivation of unit testing is finding faults.
- Used to find faults in individual modules or units (individual programs, functions, procedures, web pages, menus, classes, ...) of source code.
- Used for documentation (low level).
- Written and executed during development.
- Starting point: a new capability (adding a new module/function or class/method).

Beta Testing I Opacity: black-box testing. Specification: none. The advantages of running beta tests are: identification of unexpected errors, because the beta testers use the software in unexpected ways; a wider population searching for errors in a variety of environments (different operating systems with a variety of service releases and with a multitude of other applications running); and low cost, because the beta testers generally get free software but are not compensated.

Beta Testing II The disadvantages of beta testing are: a lack of systematic testing, because each user uses the product in any manner they choose; low-quality error reports, because users may not actually report errors or may report errors without enough detail; and the considerable effort necessary to examine error reports, particularly when there are many beta testers.

Integration Testing Opacity: black- and white-box testing. Specification: low- and high-level design. Integration testing is testing in which software components, hardware components, or both are combined and tested to evaluate the interaction between them. Using both black-box and white-box testing techniques, the tester (usually the software developer) verifies that units work together when they are integrated into a larger code base. Even if the components work individually, it doesn't mean that they all work together when assembled or integrated.

Black Box Testing Black box testing is also called functional testing. Black box testing ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.

Black-Box Testing Black-box testing does not take into account the manner in which the application was designed or implemented. Example: person "a" rents "The Matrix" on May 5; person "b" rents "Gone with the Wind" on May 6; person "a" returns "The Matrix" on May 10. This is analogous to building an automobile and then testing it by driving it under various conditions.

Performing Black Box Testing Black box testing finds errors in the external behavior of the code: incorrect or missing functionality; interface errors; errors in data structures used by interfaces; behavior or performance errors; and initialization and termination errors. Through this testing, we can determine whether the functions appear to work according to specifications.

Poor Specification of a Test Case

Preferred Specification of a Test Case

Requirements When a player lands on the Go to Jail cell, the player goes directly to jail, does not pass Go, and does not collect $200. On the next turn, the player must pay $50 to get out of jail and does not roll the dice or advance. If the player does not have enough money, he or she is out of the game.

Things to Test There are many things to test in this short requirement above: Does the player get sent to jail after landing on Go to Jail? Is the player prevented from collecting $200 when Go is between the current space and jail? Is $50 correctly decremented if the player has more than $50? Is the player out of the game if he or she has less than $50?
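The money-related checks can be sketched as assertions against a minimal model of the rule; the JailRules class below is a hypothetical stand-in, since the slides do not give an implementation:

```java
public class JailRules {
    public static final int JAIL_FEE = 50;

    // Hypothetical model of the rule: returns the player's money after
    // paying to leave jail, or -1 if the player cannot pay and is out.
    public static int payToLeaveJail(int money) {
        if (money < JAIL_FEE) {
            return -1; // not enough money: out of the game
        }
        return money - JAIL_FEE;
    }
}
```

A test plan would then assert that payToLeaveJail(100) returns 50 (the fee is decremented) and that payToLeaveJail(49) returns -1 (the player is out).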

Test Plan #1 for the Jail Requirement

Equivalence Partitioning To keep down testing costs, we don't want to write several test cases that test the same aspect of our program. A good test case uncovers a different class of errors from those uncovered by prior test cases. Equivalence partitioning is a strategy that can be used to reduce the number of test cases that need to be developed: it divides the input domain of a program into classes.

Equivalence Classes for Player Money: less than $50; $50 or more.

Test Plan #2 for the Jail Requirement

If input conditions specify a range of values, create one valid and one or two invalid equivalence classes; in the example above, these are less than $50 (invalid) and $50 or more (valid). If input conditions specify a member of a set, create one valid and one invalid equivalence class. If an input condition is a Boolean, define one valid and one invalid class.
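The three guidelines can be sketched as representative inputs, one per equivalence class; the concrete values (the token set, for instance) are illustrative assumptions:

```java
import java.util.List;
import java.util.Map;

public class GuidelineRepresentatives {
    // One representative input per equivalence class for each guideline.
    public static Map<String, List<Object>> pick() {
        return Map.of(
            // Range (player money): one invalid class, one valid class.
            "range: player money", List.of(25, 75),
            // Set membership (assumed set of game tokens): valid and invalid.
            "set: game token", List.of("car", "spaceship"),
            // Boolean condition: one valid and one invalid class.
            "boolean: is in jail", List.of(true, false)
        );
    }
}
```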

Boundary Value Analysis According to Boris Beizer, "Bugs lurk in corners and congregate at boundaries." We need to focus testing at these boundaries. This is called boundary value analysis (BVA), and it guides you to create test cases at the edges of the equivalence classes.

Boundary Value Analysis A boundary value is a data value that corresponds to a minimum or maximum input, internal, or output value specified for a system or component. In the example, the boundary of the class is at 50, so we should create test cases where Player 1 has $49, $50, and $51. These test cases help to find common off-by-one errors, such as using >= when you mean to use >.

Boundary Value Analysis (BVA) Test cases should be created for the boundaries between the equivalence classes: less than $50, and $50 or more.

Creating BVA Test Cases If input conditions have a range from a to b (such as a = 100 to b = 300), create test cases:
- immediately below a (99)
- at a (100)
- immediately above a (101)
- immediately below b (299)
- at b (300)
- immediately above b (301)
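The six-value recipe above can be written as a small helper; the name forRange is an illustrative choice, not from the slides:

```java
public class BoundaryValues {
    // Generates the six BVA inputs for an inclusive range [a, b]:
    // just below a, at a, just above a, just below b, at b, just above b.
    public static int[] forRange(int a, int b) {
        return new int[] { a - 1, a, a + 1, b - 1, b, b + 1 };
    }
}
```

forRange(100, 300) yields 99, 100, 101, 299, 300, 301, matching the list above.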

General Testing Levels

Traditional Testing Levels
- Acceptance testing: is the software acceptable to the user?
- System testing: test the overall functionality of the system.
- Integration testing: test how modules interact with each other.
- Module testing (developer testing): test each class, file, module, or component.
- Unit testing (developer testing): test each unit (method) individually.

Object-Oriented Testing Levels
- Inter-class testing: test multiple classes together.
- Intra-class testing: test an entire class as sequences of calls.
- Inter-method testing: test pairs of methods in the same class.
- Intra-method testing: test each method individually.

Coverage Criteria Even small programs have too many inputs to test them all. Consider private static int computeaverage(int A, int B, int C): on a 32-bit machine, each variable has over 4 billion possible values, giving over 80 octillion possible tests. The input space might as well be infinite. Testers search a huge input space, trying to find the fewest inputs that will find the most problems. Coverage criteria give structured, practical ways to search the input space thoroughly, with not much overlap between the tests.
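The size of that input space is easy to confirm; this sketch just does the arithmetic for n independent 32-bit parameters:

```java
import java.math.BigInteger;

public class InputSpaceSize {
    // Each 32-bit int has 2^32 possible values; n independent
    // parameters give (2^32)^n possible input combinations.
    public static BigInteger combinations(int n) {
        return BigInteger.TWO.pow(32).pow(n);
    }
}
```

combinations(3) is 2^96, a 29-digit number of roughly 7.9 x 10^28, i.e. about 80 octillion possible tests.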

Graph Coverage for Source Code
- Graph: usually the control flow graph (CFG).
- Node coverage: execute every statement.
- Edge coverage: execute every branch.
- Loops: looping structures such as for loops, while loops, etc.
- Data flow coverage: augments the control flow graph; defs are statements that assign values to variables, and uses are statements that use variables.

Unit Test Methods
White-box methods:
- Statement coverage: test cases cause every line of code to be executed.
- Branch coverage: test cases cause every decision point to execute.
- Path coverage: test cases cause every independent code path to be executed.
Black-box methods:
- Equivalence partitioning: divide input values into equivalent groups.
- Boundary value analysis: test at boundary conditions.
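The white-box criteria can be made concrete on a tiny method with one decision point; the method itself is an assumed example:

```java
public class CoverageExample {
    // Assumed example method: one if statement, hence one decision point.
    public static int absoluteValue(int x) {
        if (x < 0) {
            x = -x;
        }
        return x;
    }
    // Statement coverage: the single input x = -1 executes every line.
    // Branch coverage: requires both outcomes of the decision,
    // e.g. x = -1 (true branch) and x = 1 (false branch).
    // Path coverage: here coincides with branch coverage, since the
    // method has exactly two independent paths.
}
```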