Darshan Institute of Engineering & Technology Unit : 9


1) Explain software testing strategy for conventional software architecture. Draw the spiral diagram showing testing strategies with phases of software development.

Software Testing: Once source code has been generated, software must be tested to uncover (and correct) as many errors as possible before delivery to the customer.

Testing Objectives
The following rules can serve as testing objectives:
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as-yet-undiscovered error.
3. A successful test is one that uncovers an as-yet-undiscovered error.
If testing is conducted successfully, it will uncover errors in the software. As a secondary benefit, testing demonstrates that the software functions appear to be working according to specification and that behavioural and performance requirements appear to have been met.

Testing Principles
Before applying methods to design effective test cases, a software engineer must understand the basic principles that guide software testing.
1. All tests should be traceable to customer requirements.
2. Tests should be planned long before testing begins.
3. The Pareto principle applies to software testing.
4. Testing should begin "in the small" and progress toward testing "in the large".
5. Exhaustive testing is not possible.
6. To be most effective, testing should be conducted by an independent third party.

Testability
In ideal circumstances, a software engineer designs a computer program, a system, or a product with testability in mind. This enables the individuals charged with testing to design effective test cases more easily.
Operability. "The better it works, the more efficiently it can be tested."
Controllability. "The better we can control the software, the more the testing can be automated and optimized."

Decomposability. "By controlling the scope of testing, we can more quickly isolate problems and perform smarter retesting."
Simplicity. "The less there is to test, the more quickly we can test it."
Stability. "The fewer the changes, the fewer the disruptions to testing."
Understandability. "The more information we have, the smarter we will test."

2) Using an example, explain the basis path testing method.

Basis path testing is a white-box testing technique first proposed by Tom McCabe. The basis path method enables the test case designer to derive a logical complexity measure of a procedural design and use this measure as a guide for defining a basis set of execution paths.

a) Flow Graph Notation
Before the basis path method can be introduced, a simple notation for the representation of control flow, called a flow graph (or program graph), must be introduced. The flow graph depicts logical control flow using the notation illustrated in the figure. To illustrate the use of a flow graph, we consider the procedural design representation in Figure B, where a flowchart is used to depict program control structure.

(Figure: flow graph notation and conditional flow)

Figure B maps the flowchart into a corresponding flow graph (assuming that no compound conditions are contained in the decision diamonds of the flowchart). Referring to Figure B, each circle, called a flow graph node, represents one or more procedural statements.

A sequence of process boxes and a decision diamond can map into a single node. The arrows on the flow graph, called edges or links, represent flow of control and are analogous to flowchart arrows. An edge must terminate at a node, even if the node does not represent any procedural statements (e.g., see the symbol for the if-then-else construct). Areas bounded by edges and nodes are called regions. When counting regions, we include the area outside the graph as a region.

b) Cyclomatic Complexity
An independent path is any path through the program that introduces at least one new set of processing statements or a new condition. Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program. It defines the number of independent paths in the basis set of a program and provides us with an upper bound for the number of tests that must be conducted to ensure all statements have been executed at least once. It can be computed in three ways:
1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. Cyclomatic complexity V(G) can be defined as V(G) = E - N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes.
3. Cyclomatic complexity V(G) can also be defined as V(G) = P + 1, where P is the number of predicate nodes contained in the graph.

c) Deriving Test Cases
The basis path testing method can be applied to a procedural design or to source code, so it is presented here as a series of steps. Note that the procedure average, although an extremely simple algorithm, contains compound conditions and loops. The following steps can be applied to derive the basis set:
1. Using the design or code as a foundation, draw a corresponding flow graph.
2. Determine the cyclomatic complexity of the resultant flow graph.
3. Determine a basis set of linearly independent paths.
4. Prepare test cases that will force execution of each path in the basis set.
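To make the arithmetic concrete, the following short Python sketch (the routine count_positives and its node numbering are illustrative assumptions, not taken from the text) encodes a small flow graph as an edge list and computes V(G) both as E - N + 2 and as P + 1:

    def count_positives(values):              # node 1: entry and initialisation
        count = 0
        i = 0
        while i < len(values):                # node 2: loop predicate
            if values[i] > 0:                 # node 3: decision predicate
                count += 1                    # node 4
            i += 1                            # node 5: loop increment / join
        return count                          # node 6: exit

    # Edges of the flow graph for the node numbering above.
    edges = [(1, 2), (2, 3), (3, 4), (3, 5), (4, 5), (5, 2), (2, 6)]
    nodes = {n for edge in edges for n in edge}
    predicate_nodes = {2, 3}                  # nodes with more than one outgoing edge

    print(len(edges) - len(nodes) + 2)        # V(G) = E - N + 2 = 7 - 6 + 2 = 3
    print(len(predicate_nodes) + 1)           # V(G) = P + 1 = 3, so three basis paths

For this sketch the three basis paths would be 1-2-6, 1-2-3-5-2-... and 1-2-3-4-5-2-..., one per unit of cyclomatic complexity.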

3) Give test case design.

Test cases are used to determine the presence of faults in the program. Sometimes, even if there is a fault in the program, the correct output is obtained for some inputs; hence it is necessary to exercise those sets of inputs for which the fault (if any) is exposed. Executing test cases costs money because:
i) machine time is required to execute the test cases, and
ii) human effort is involved in executing the test cases.
Hence, in project testing, the number of test cases should be kept as small as possible. The testing activity should pursue two goals:
i) maximize the number of errors detected, and
ii) minimize the number of test cases.
The selection of test cases should be such that a faulty module or program segment is exercised by at least one test case. Selection of test cases is determined by some criterion, called the test selection criterion. The test selection criterion T can therefore be defined as the set of conditions that must be satisfied by the set of test cases. A testing criterion is based on two fundamental properties: reliability and validity. A test criterion is reliable if all the sets of test cases that satisfy it detect the same set of errors. A test criterion is valid if, for any error in the program, there is some satisfying set of test cases that causes the error to be revealed. There is an important theorem about testing criteria: if a testing criterion is valid and reliable, and a set of test cases satisfying the criterion succeeds, then the program contains no errors. Generating test cases to satisfy a criterion is a complex task.

4) Explain pairwise testing and state based testing.

Pairwise Testing
There are many parameters that determine the behaviour of a software product. For some values the product behaves correctly; for other values it does not. Testing of such a product must be done using different values in different combinations. That is, if there are n parameters, each having m different values, then test cases would have to be generated for all m^n combinations. Testing all these combinations is complex, hence a new approach of testing, called pairwise testing, is used. In this type of testing every pair of values must be exercised and tested; the objective of pairwise testing is to have a set of test cases that covers all the pairs. For example, suppose we have three parameters with the value sets (x1, x2, x3), (y1, y2, y3) and (z1, z2, z3). There are 3 pairs of parameters and 9 value combinations per pair, so the total number of pairs to be covered is 3 x 9 = 27, whereas exhaustive testing would require 3 x 3 x 3 = 27 test cases.
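As an illustration (the nine test cases below follow one standard L9 orthogonal-array arrangement and are an assumed suite, not taken from the text), the following Python sketch checks that nine test cases are enough to cover all 27 pairs:

    from itertools import combinations, product

    params = [["x1", "x2", "x3"], ["y1", "y2", "y3"], ["z1", "z2", "z3"]]
    suite = [
        ("x1", "y1", "z1"), ("x1", "y2", "z2"), ("x1", "y3", "z3"),
        ("x2", "y1", "z2"), ("x2", "y2", "z3"), ("x2", "y3", "z1"),
        ("x3", "y1", "z3"), ("x3", "y2", "z1"), ("x3", "y3", "z2"),
    ]

    # Every pair of values from every pair of parameters must appear in some test case.
    uncovered = [
        (a, b)
        for i, j in combinations(range(len(params)), 2)    # every pair of parameters
        for a, b in product(params[i], params[j])          # every pair of their values
        if not any(t[i] == a and t[j] == b for t in suite)
    ]
    print(len(suite), "test cases; uncovered pairs:", uncovered)   # 9 test cases; []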

State Based Testing
Sometimes the system behaves differently based on the input and conditions. The different behaviours of the system can be represented by states, and any software that saves state can be modelled as a state machine. Every state model has four components:
1. States: the condition of an entity, which has some impact on how input is processed.
2. Transitions: represent how one state changes to another on the occurrence of some event.
3. Events: the inputs to the system.
4. Actions: the outputs produced for the events.
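A minimal Python sketch of such a state model (the document-workflow states, events and test sequence below are invented for illustration) shows how a transition table can be used to check that a set of test sequences exercises every transition at least once:

    transitions = {
        ("Draft", "submit"): "Review",        # (state, event) -> next state
        ("Review", "approve"): "Published",
        ("Review", "reject"): "Draft",
        ("Published", "retire"): "Archived",
    }

    def run(start, events):
        """Drive the machine from `start` and record each (state, event) pair taken."""
        state, taken = start, []
        for event in events:
            taken.append((state, event))
            state = transitions[(state, event)]
        return taken

    tests = [
        ("Draft", ["submit", "reject", "submit", "approve", "retire"]),
    ]
    covered = {step for start, events in tests for step in run(start, events)}
    print("untested transitions:", set(transitions) - covered)   # expected: set()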

5) What is software testing? What is the role of software tester? Compare: black box testing and white box testing.

Software Testing: Once source code has been generated, software must be tested to uncover (and correct) as many errors as possible before delivery to the customer.

Role of Software Tester
The basic goal of testing is to uncover as many errors as possible; hence the intent of testing is to show where the program does not work. A tester should find out the possibilities wherein the program does not work, so testing should be carried out with the intention of finding as many errors as possible. Testing is a destructive process in which the tester has to treat the program as an adversary and find the presence of errors. One of the reasons why organizations do not select the developer as the tester lies in human psychology: it is quite natural that the person who creates something does not easily dare to find something wrong in it. The person who has developed the code would design test cases that show the code works correctly; he will not select the test cases that show errors in his own creation. Hence an independent team is recruited for testing. Another reason for having the program tested by an independent person is that sometimes the programmer does not understand the specification clearly; testing the code by an independent group will then find the bug easily. This approach towards testing reveals as many errors as possible.

Black Box Testing: It is done by a separate testing team. The testers validate the developed application by executing test cases, finding bugs and sending error reports. Generally, black box testing can begin early in software development, i.e. in the requirement gathering phase itself. The black box testing strategy can be used for applications of almost any size, small or large.

White Box Testing: It is done by developers and typically involves the code. In this testing the developers concentrate on the internal structure of the program and justify whether the program is correct or not. For the white box testing approach one has to wait until the design is complete. White box testing is effective only for small pieces of code, whereas black box testing can handle large applications as well. In white box testing we cannot test the performance of the application.

6) Explain White box and Black box testing. Discuss all the testing strategies that are available.

White Box Testing
White-box testing, sometimes called glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using white-box testing methods, the software engineer can derive test cases that
(1) guarantee that all independent paths within a module have been exercised at least once,
(2) exercise all logical decisions on their true and false sides,
(3) execute all loops at their boundaries and within their operational bounds, and
(4) exercise internal data structures to ensure their validity.
A reasonable question might be posed at this juncture: "Why spend time and energy worrying about (and testing) logical minutiae when we might better expend effort ensuring that program requirements have been met?" Stated another way, why don't we spend all of our energy on black-box tests? The answer lies in the nature of software defects:
Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed. Errors tend to creep into our work when we design and implement functions, conditions, or controls that are out of the mainstream. Everyday processing tends to be well understood (and well scrutinized), while "special case" processing tends to fall into the cracks.
We often believe that a logical path is not likely to be executed when, in fact, it may be executed on a regular basis. The logical flow of a program is sometimes counterintuitive, meaning that our unconscious assumptions about flow of control and data may lead us to make design errors that are uncovered only once path testing commences.
Typographical errors are random. When a program is translated into programming language source code, it is likely that some typing errors will occur. Many will be uncovered by syntax and type checking mechanisms, but others may go undetected until testing begins. It is as likely that a typo will exist on an obscure logical path as on a mainstream path.
Each of these reasons provides an argument for conducting white-box tests. Black-box testing, no matter how thorough, may miss the kinds of errors noted here. White-box testing is far more likely to uncover them.

Black-box Testing
Black-box testing, also called behavioural testing, focuses on the functional requirements of the software. That is, black-box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. Black-box testing is not an alternative to white-box techniques; rather, it is a complementary approach that is likely to uncover a different class of errors than white-box methods. Black-box testing attempts to find errors in the following categories:
(1) incorrect or missing functions,
(2) interface errors,
(3) errors in data structures or external data base access,
(4) behaviour or performance errors, and
(5) initialization and termination errors.

a) Graph-Based Testing Methods
The software engineer begins by creating a graph: a collection of nodes that represent objects; links that represent the relationships between objects; node weights that describe the properties of a node (e.g., a specific data value or state behaviour); and link weights that describe some characteristic of a link. The symbolic representation of a graph is shown in Figure 17.9A. Nodes are represented as circles connected by links that take a number of different forms. A directed link (represented by an arrow) indicates that a relationship moves in only one direction. A bidirectional link, also called a symmetric link, implies that the relationship applies in both directions. Parallel links are used when a number of different relationships are established between graph nodes.

b) Equivalence Partitioning
Equivalence partitioning is a black-box testing method that divides the input domain of a program into classes of data from which test cases can be derived. An ideal test case single-handedly uncovers a class of errors (e.g., incorrect processing of all character data) that might otherwise require many cases to be executed before the general error is observed. Equivalence partitioning strives to define a test case that uncovers classes of errors, thereby reducing the total number of test cases that must be developed. Equivalence classes may be defined according to the following guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
4. If an input condition is Boolean, one valid and one invalid class are defined.

c) Boundary Value Analysis
Boundary value analysis is a test case design technique that complements equivalence partitioning. Rather than selecting any element of an equivalence class, BVA leads to the selection of test cases at the "edges" of the class. Rather than focusing solely on input conditions, BVA derives test cases from the output domain as well. Guidelines for BVA are similar in many respects to those provided for equivalence partitioning:
1. If an input condition specifies a range bounded by values a and b, test cases should be designed with values a and b and just above and just below a and b.
2. If an input condition specifies a number of values, test cases should be developed that exercise the minimum and maximum numbers. Values just above and below the minimum and maximum are also tested.
3. Apply guidelines 1 and 2 to output conditions. For example, assume that a temperature vs. pressure table is required as output from an engineering analysis program. Test cases should be designed to create an output report that produces the maximum (and minimum) allowable number of table entries.
4. If internal program data structures have prescribed boundaries (e.g., an array has a defined limit of 100 entries), be certain to design a test case to exercise the data structure at its boundary.
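As a small illustration of both techniques (the requirement that an input lie in the range 1 to 100 is a hypothetical example, not from the text), the following Python sketch derives equivalence-class representatives and boundary values for a numeric range:

    def equivalence_classes(a, b):
        """Representative value for each equivalence class of a range [a, b]."""
        return {
            "valid (a <= x <= b)": (a + b) // 2,   # any interior value
            "invalid (x < a)": a - 10,
            "invalid (x > b)": b + 10,
        }

    def boundary_values(a, b):
        """Test values at and just above/below the boundaries of [a, b]."""
        return [a - 1, a, a + 1, b - 1, b, b + 1]

    print(equivalence_classes(1, 100))   # {'valid (a <= x <= b)': 50, 'invalid (x < a)': -9, 'invalid (x > b)': 110}
    print(boundary_values(1, 100))       # [0, 1, 2, 99, 100, 101]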

Comparison Testing
There are some situations (e.g., aircraft avionics, automobile braking systems) in which the reliability of software is absolutely critical. In such applications redundant hardware and software are often used to minimize the possibility of error. When redundant software is developed, separate software engineering teams develop independent versions of an application using the same specification. In such situations, each version can be tested with the same test data to ensure that all provide identical output. Then all versions are executed in parallel with real-time comparison of results to ensure consistency. Comparison testing is not foolproof. If the specification from which all versions have been developed is in error, all versions will likely reflect the error. In addition, if each of the independent versions produces identical but incorrect results, comparison testing will fail to detect the error.

Orthogonal Array Testing
There are many applications in which the input domain is relatively limited. That is, the number of input parameters is small and the values that each of the parameters may take are clearly bounded. When these numbers are very small (e.g., three input parameters taking on three discrete values each), it is possible to consider every input permutation and exhaustively test processing of the input domain. However, as the number of input values grows and the number of discrete values for each data item increases, exhaustive testing becomes impractical or impossible. Orthogonal array testing can be applied to problems in which the input domain is relatively small but too large to accommodate exhaustive testing. The orthogonal array testing method is particularly useful in finding errors associated with region faults, an error category associated with faulty logic within a software component.

7) Explain boundary value analysis or list the set of guidelines for BVA. Also explain merits and demerits of BVA.

For reasons that are not completely clear, a greater number of errors tends to occur at the boundaries of the input domain rather than in the "centre". It is for this reason that boundary value analysis (BVA) has been developed as a testing technique. Boundary value analysis leads to a selection of test cases that exercise bounding values. Boundary value analysis is a test case design technique that complements equivalence partitioning. Rather than selecting any element of an equivalence class, BVA leads to the selection of test cases at the "edges" of the class. Rather than focusing solely on input conditions, BVA derives test cases from the output domain as well. Guidelines for BVA are similar in many respects to those provided for equivalence partitioning:
1. If an input condition specifies a range bounded by values a and b, test cases should be designed with values a and b and just above and just below a and b.

2. If an input condition specifies a number of values, test cases should be developed that exercise the minimum and maximum numbers. Values just above and below the minimum and maximum are also tested.
3. Apply guidelines 1 and 2 to output conditions. For example, assume that a temperature vs. pressure table is required as output from an engineering analysis program. Test cases should be designed to create an output report that produces the maximum (and minimum) allowable number of table entries.
4. If internal program data structures have prescribed boundaries (e.g., an array has a defined limit of 100 entries), be certain to design a test case to exercise the data structure at its boundary.
Most software engineers intuitively perform BVA to some degree. By applying these guidelines, boundary testing will be more complete, thereby having a higher likelihood of error detection.

8) Give test case generation and tool support.

The basis path testing method can be applied to a procedural design or to source code, so it is presented here as a series of steps. Note that the procedure average, although an extremely simple algorithm, contains compound conditions and loops. The following steps can be applied to derive the basis set:
1. Using the design or code as a foundation, draw a corresponding flow graph. The flow graph is created using the usual symbols and construction rules: the statements of the PDL algorithm are numbered and mapped into corresponding flow graph nodes. (The resulting flow graph for procedure average is shown in the accompanying figure, not reproduced here.)
2. Determine the cyclomatic complexity of the resultant flow graph. The cyclomatic complexity, V(G), is determined by applying the algorithms described in Section 17.5.2. It should be noted that V(G) can be determined without developing a flow graph by counting all conditional statements in the PDL (for the procedure average, compound conditions count as two) and adding 1. For the flow graph of procedure average:
V(G) = 6 regions
V(G) = 17 edges - 13 nodes + 2 = 6
V(G) = 5 predicate nodes + 1 = 6
3. Determine a basis set of linearly independent paths. The value of V(G) provides the number of linearly independent paths through the program control structure. In the case of procedure average, we expect to specify six paths:

Path 1: 1-2-10-11-13
Path 2: 1-2-10-12-13
Path 3: 1-2-3-10-11-13
Path 4: 1-2-3-4-5-8-9-2-...
Path 5: 1-2-3-4-5-6-8-9-2-...
Path 6: 1-2-3-4-5-6-7-8-9-2-...
The ellipsis (...) following paths 4, 5, and 6 indicates that any path through the remainder of the control structure is acceptable. It is often worthwhile to identify predicate nodes as an aid in the derivation of test cases. In this case, nodes 2, 3, 5, 6, and 10 are predicate nodes.
4. Prepare test cases that will force execution of each path in the basis set. Data should be chosen so that conditions at the predicate nodes are appropriately set as each path is tested. Test cases that satisfy the basis set just described are:
Path 1 test case:
value(k) = valid input, where k < i for 2 ≤ i ≤ 100
value(i) = -999, where 2 ≤ i ≤ 100
Expected results: correct average based on k values and proper totals.
Note: Path 1 cannot be tested stand-alone but must be tested as part of the path 4, 5, and 6 tests. In the same way the other paths listed above can also be tested.
Each test case is executed and compared to expected results. Once all test cases have been completed, the tester can be sure that all statements in the program have been executed at least once.

Tool Support
To decide whether a test case satisfies the coverage criteria, some tool is needed; in fact, for all structural testing methods the use of a tool is a must. Generally such tools give feedback about what still needs to be tested in order to fulfil the criteria. A fully automated tool for selecting the test cases that satisfy the criteria is not available; hence we must select test cases (typically at random) and then use the tool's feedback to select the next possible test cases. This process is continued iteratively until the desired criteria are satisfied. There are many tools available for statement and branch coverage; both commercial and freeware tools are available for different source languages. To get coverage data, the execution of the program during testing has to be monitored. The coverage data can be obtained by inserting some statements into the program; such insertion is called probing. Thus the purpose of probing is to collect the coverage data required for selection of test cases. There are three phases in obtaining the coverage data:
Phase 1: Prepare the program with probes.
Phase 2: Execute the program with the test cases.
Phase 3: Analyse the output from the probed data.
The probe insertion can be done automatically by a pre-processor. Complex data flow testing tools are also available.
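Returning to the basis set derived in steps 2 and 3 above, the quoted counts can be cross-checked with a short Python sketch. The edge set below is inferred from the six listed paths (an assumption, since the flow graph figure itself is not reproduced here); the counts come out to 17 edges, 13 nodes and V(G) = 6, matching the values used above:

    paths = [
        [1, 2, 10, 11, 13],
        [1, 2, 10, 12, 13],
        [1, 2, 3, 10, 11, 13],
        [1, 2, 3, 4, 5, 8, 9, 2],
        [1, 2, 3, 4, 5, 6, 8, 9, 2],
        [1, 2, 3, 4, 5, 6, 7, 8, 9, 2],
    ]

    # Collect every edge and node that appears on some basis path.
    edges = {(p[i], p[i + 1]) for p in paths for i in range(len(p) - 1)}
    nodes = {n for edge in edges for n in edge}

    print(len(edges), len(nodes))          # 17 edges, 13 nodes
    print(len(edges) - len(nodes) + 2)     # V(G) = E - N + 2 = 17 - 13 + 2 = 6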

9) Explain metrics coverage analysis and reliability.

While developing a software project, many work products such as the SRS, design documents and source code are created, and along with these work products many errors may be introduced. The project manager has to identify all these errors to deliver quality software. Error tracking is a process of assessing the status of the software project. The software team performs formal technical reviews to test the software being developed; in these reviews various errors are identified and corrected. Any errors that remain uncovered and are found in later tasks are called defects. The defect removal efficiency can be defined as
DRE = E / (E + D)
where DRE is the defect removal efficiency, E is the number of errors and D is the number of defects. The DRE represents the effectiveness of quality assurance activities. The DRE also helps the project manager to assess the progress of the software project as it is developed through its scheduled work tasks. During the error tracking activity the following metrics are computed:
1. Errors per requirements specification page, denoted Ereq
2. Errors per component at design level, denoted Edesign
3. Errors per component at code level, denoted Ecode
4. DRE for requirement analysis
5. DRE for architectural design
6. DRE for component level design
7. DRE for coding
The project manager calculates current values for Ereq, Edesign and Ecode.
1. These values are then compared with those of past projects.
2. If the current result differs by more than 20% from the average, then there may be cause for concern and an investigation needs to be made in this regard.
3. These error tracking metrics can also be used to better target review and testing resources.
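For example, with hypothetical figures, if a team removes E = 45 errors through reviews and testing before release and customers later report D = 5 defects, then DRE = 45 / (45 + 5) = 0.9, i.e. 90% of the problems were removed before delivery.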

Reliability
Software reliability is defined as the probability of failure-free operation of a computer program in a specified environment for a specified time. Software reliability can be measured directly and estimated.

Measures of Reliability and Availability
Normally there are two measures of software reliability.
1. MTBF: mean-time-between-failures is a simple measure of software reliability, which can be calculated as
MTBF = MTTF + MTTR
where MTTF is the mean-time-to-failure and MTTR is the mean-time-to-repair. Many software researchers feel that MTBF is a more useful measure of software reliability.
2. Availability: another measure of software reliability, defined as the probability that the program is operating according to requirements at a given point in time. It is measured as
Availability = (MTTF / (MTTF + MTTR)) x 100%
MTBF is equally sensitive to MTTF and MTTR, but availability is more sensitive to MTTR.
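For example, with a hypothetical MTTF of 480 hours and MTTR of 20 hours, MTBF = 480 + 20 = 500 hours and Availability = (480 / 500) x 100% = 96%; halving MTTR to 10 hours changes MTBF only slightly (to 490 hours) but raises availability to about 98%, which illustrates why availability is more sensitive to MTTR.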

10) Differentiate alpha testing and beta testing.

It is virtually impossible for a software developer to foresee how the customer will really use a program. Instructions for use may be misinterpreted; strange combinations of data may be regularly used; output that seemed clear to the tester may be unintelligible to a user in the field. When custom software is built for one customer, a series of acceptance tests are conducted to enable the customer to validate all requirements. Conducted by the end-user rather than software engineers, an acceptance test can range from an informal "test drive" to a planned and systematically executed series of tests. In fact, acceptance testing can be conducted over a period of weeks or months, thereby uncovering cumulative errors that might degrade the system over time. If software is developed as a product to be used by many customers, it is impractical to perform formal acceptance tests with each one. Most software product builders use a process called alpha and beta testing to uncover errors that only the end-user seems able to find.
The alpha test is conducted at the developer's site by a customer. The software is used in a natural setting with the developer "looking over the shoulder" of the user and recording errors and usage problems. Alpha tests are conducted in a controlled environment.
The beta test is conducted at one or more customer sites by the end-user of the software. Unlike alpha testing, the developer is generally not present. Therefore, the beta test is a "live" application of the software in an environment that cannot be controlled by the developer. The customer records all problems (real or imagined) that are encountered during beta testing and reports these to the developer at regular intervals. As a result of problems reported during beta tests, software engineers make modifications and then prepare for release of the software product to the entire customer base.