Using Code Coverage to Improve the Reliability of Embedded Software Whitepaper V2.0 2017-12

Table of Contents

1 Introduction
2 Levels of Code Coverage
2.1 Statement Coverage
2.2 Statement Coverage Limitations
2.3 Branch Coverage
2.4 Branch Coverage Limitations
3 How VectorCAST Supports Compliance with the ISO 26262 Standard
4 Coverage from Different Types of Testing
5 Challenges of Capturing Coverage in an Embedded Environment
6 Capturing Coverage Data in a Shared Environment
7 Conclusion

1 Introduction

Code coverage is a metric used to gauge the completeness of software testing by identifying which areas of an application's source code were exercised during a test. It provides a convenient way to ensure that applications are not released with untested code. Can code coverage be used to improve reliability? Yes: code coverage analysis is one of the simplest and most effective ways to ensure quality in a software application.

2 Levels of Code Coverage

Code coverage can be measured in a variety of ways. The most common approach is to measure one or more of the following: statement coverage, branch coverage, and Modified Condition/Decision Coverage (MC/DC). The following sections describe each of these coverage types in detail.

2.1 Statement Coverage

Statement Coverage measures which executable statements have been executed. It does not take loops or conditional logic into consideration, only the individual statements. Note that a statement is not the same as a line of code: in C, C++, Java, and Ada a statement is typically terminated by a semicolon, and a single statement can span multiple lines of code. While Statement Coverage provides a good measure of what has and has not been executed in the program, it does have some limitations.

2.2 Statement Coverage Limitations

Consider the code fragment in Figure 1.

    int* p = NULL;
    if (condition)
        p = &variable;
    *p = 123;

Figure 1: Statement coverage defect code example

A single test in which 'condition' is true achieves 100% statement coverage. However, that test misses the scenario where 'condition' is false, in which case the program dereferences a null pointer. Statement Coverage is a useful metric, but it is the entry level of code coverage; ideally the false case of 'condition' is also tested.
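The defect in Figure 1 can be reproduced as a small compilable sketch (the function and parameter names here are hypothetical, added for illustration). A single test that passes a true condition executes every statement, so a coverage tool reports 100% statement coverage, while the untested false path still dereferences a null pointer:

```c
#include <stddef.h>

/* Hypothetical wrapper around the Figure 1 fragment. */
int assign(int condition, int *variable)
{
    int *p = NULL;
    if (condition)
        p = variable;
    *p = 123;   /* dereferences NULL when 'condition' is false */
    return *p;
}
```

A test suite containing only a call such as `assign(1, &x)` touches every statement, so statement coverage alone gives no hint that `assign(0, &x)` would crash.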
2.3 Branch Coverage

Branch Coverage measures whether decision and branch points have been tested completely, for all possible outcomes. For example, an 'if' statement must take both its true and false outcomes to be considered covered; if only one of the paths is taken, this is classified as 'partial' coverage. Like Statement Coverage, however, Branch Coverage has subtleties we need to be conscious of, especially when working with languages that perform lazy evaluation. Lazy evaluation, known as short-circuit evaluation in C and C++, is the technique of not computing a piece of code when its value cannot affect the result.

2.4 Branch Coverage Limitations

A typical scenario where lazy evaluation occurs is a complex Boolean expression like the one in Figure 2.

    int* p = NULL;
    if (condition1 && (condition2 || function1(*p)))
        statement1;
    else
        statement2;

Figure 2: Branch coverage defect code example

Consider the scenario where 'condition1' is false. With lazy evaluation, neither 'condition2' nor 'function1(*p)' needs to be evaluated, yet the 'false' outcome of 'if (condition1 && (condition2 || function1(*p)))' is covered. Now consider the scenario where 'condition1' and 'condition2' are both true. Again, lazy evaluation means 'function1(*p)' is not evaluated, yet the 'true' outcome of the conditional is covered. With just these two scenarios it is possible to have 100% Branch Coverage and still have potential defects in the software: 'function1(*p)' is never exercised, so the null-pointer dereference it contains goes undetected.
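The gap can be demonstrated with a compilable sketch of the Figure 2 decision (the wrapper names are hypothetical, and a call counter is added to show that the sub-condition never runs). Two tests cover both outcomes of the decision, yet 'function1(*p)' is never evaluated:

```c
#include <stddef.h>

static int calls = 0;   /* counts how many times function1 is evaluated */

static int function1(int v) { calls++; return v != 0; }

/* Hypothetical wrapper around the Figure 2 decision. */
static int decision(int condition1, int condition2)
{
    int *p = NULL;
    if (condition1 && (condition2 || function1(*p)))
        return 1;   /* stands in for statement1 */
    else
        return 0;
}
```

Calling `decision(0, 0)` covers the false outcome (condition1 short-circuits the whole expression) and `decision(1, 1)` covers the true outcome (condition2 short-circuits the '||'), so branch coverage reaches 100% while the `*p` dereference inside the unevaluated operand is never executed. This is exactly the case MC/DC, discussed next, is designed to expose.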

3 How VectorCAST Supports Compliance with the ISO 26262 Standard

MC/DC can be thought of as a stronger form of branch coverage. It reports on the true and false outcomes of a complex conditional, as branch coverage does, but it also reports on the true and false outcomes of each sub-condition within the conditional. MC/DC was created at Boeing and is typically required for aviation software certified to DO-178B Level A. It addresses the issue raised by lazy evaluation by requiring a demonstration that every sub-condition can affect the outcome of the decision independently of the other sub-condition values.

Looking at the example in Figure 2, we would need to verify 'condition1' for its true and false values while holding 'condition2' and 'function1(*p)' fixed, then do the same for 'condition2' while holding 'condition1' and 'function1(*p)' fixed, and finally for 'function1(*p)' while holding 'condition1' and 'condition2' fixed. The verification of each sub-condition for its true and false values while the other sub-conditions are held fixed is known as an 'MC/DC pair'. An MC/DC truth table is typically used to identify pairs; an example can be seen in Table 1.

    Row   Ca   Cb   Cc   Rslt   Pa   Pb   Pc
    *1    T    T    T    T      5
    *2    T    T    F    T      6    4
     3    T    F    T    T      7         4
     4    T    F    F    F           2    3
     5    F    T    T    F      1
     6    F    T    F    F      2
     7    F    F    T    F      3

Table 1: Modified Condition / Decision Coverage Truth Table

    Actual expression:      ( condition1 && ( condition2 || function1 ( *p ) ) )
    Condition "a" (Ca):     condition1
    Condition "b" (Cb):     condition2
    Condition "c" (Cc):     function1 ( *p )
    Simplified expression:  ( a && ( b || c ) )

    Pa => no pair was satisfied    1/5  2/6  3/7
    Pb => no pair was satisfied    2/4
    Pc => no pair was satisfied    3/4

4 Coverage from Different Types of Testing

Software testing comes in many flavors.
For simplicity, we will use three terms in this paper:

> System / Functional Testing: testing the fully integrated application
> Integration Testing: testing integrated sub-systems
> Unit Testing: testing a few individual files or classes

Every project does some amount of system testing, in which the source code is stimulated with some of the same actions that end users will perform. One of the most common causes of applications being fielded with bugs is that unexpected, and therefore untested, combinations of inputs are encountered by the application in the field. Fewer projects do integration testing, and fewer still do unit testing. If you have done integration or unit testing, you are probably painfully aware of the amount of test code that has to be written to isolate a single file or group of files from the rest of the application. At the most stringent levels of unit and integration testing, it is not uncommon for the amount of test code written to exceed the amount of application code being tested. As a result, these levels of testing are generally applied to mission- and safety-critical applications in regulated markets such as aviation, medical devices, railway, process control, and, soon, automotive. Many of the applications written for these industries contain embedded software.
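The isolation test code described above can be pictured with a minimal driver-and-stub sketch (all names here are hypothetical, not taken from any particular project or tool). The stub replaces a hardware dependency so the unit can run on a host PC, and the driver stimulates the unit with chosen inputs:

```c
#include <assert.h>

/* Unit under test: depends on a hardware access we cannot perform on a host. */
int read_sensor(void);                          /* normally provided by the BSP */
int sensor_ok(void) { return read_sensor() < 100; }

/* Stub: a controllable stand-in for the hardware dependency. */
static int stub_sensor_value;
int read_sensor(void) { return stub_sensor_value; }

/* Driver: stimulates the unit with inputs chosen to cover both branches. */
void test_sensor_ok(void)
{
    stub_sensor_value = 50;
    assert(sensor_ok() == 1);                   /* reading in range */
    stub_sensor_value = 150;
    assert(sensor_ok() == 0);                   /* reading out of range */
}
```

Even in this tiny example the driver and stub are as large as the code under test, which illustrates why the test-code volume grows so quickly at the unit level.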

The structural testing process for regulated industries often revolves around testing the high-level and low-level requirements and analyzing the code coverage that results from this requirements-based testing. On many projects, high-level or functional requirements are tested first, and code coverage is used to capture and report the amount of coverage achieved. Unfortunately, it is almost impossible to reach 100% code coverage during system/functional testing; more commonly, this type of testing achieves 60%-70% coverage. The remaining 30%-40% is achieved using unit and integration testing techniques. Unit testing involves using test code in the form of drivers and stubs to isolate particular functions in the application and stimulate those functions with test cases. These low-level requirements-based tests provide much greater control over the code being tested; they augment the previously executed system tests and allow you to reach 100% coverage. For this reason, it is very convenient to be able to share coverage data from different types of testing.

5 Challenges of Capturing Coverage in an Embedded Environment

As the old saying goes, you can't get something for nothing; there is always a price to be paid. In the case of code coverage, the price is the addition of instrumentation to the source files to be tested. Instrumentation is the additional source code added to an application to allow the collection of coverage data as tests are executed. The overhead associated with instrumentation translates directly into increased source file and program size, and indirectly into increased execution time. Being able to forecast the impact of instrumenting source code for coverage can be critically important, especially when testing in a real-time embedded environment. It is not possible, however, to forecast the precise impact that instrumentation will have on a particular set of application files.
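To make the instrumentation overhead concrete, here is a hypothetical sketch of what statement-level instrumentation can look like; this illustrates the general idea, not VectorCAST's actual scheme. Each probe inserted by the tool sets one bit in a small array, which is why both code size and RAM use grow with the number of probes:

```c
#include <stdint.h>

#define NUM_PROBES 3                            /* probes inserted by the tool */
static uint8_t cov_bits[(NUM_PROBES + 7) / 8];  /* one bit of RAM per probe */

/* Probe: record that the statement tagged 'id' was reached. */
#define COVER(id)  (cov_bits[(id) / 8] |= (uint8_t)(1u << ((id) % 8)))

int probe_hit(int id) { return (cov_bits[id / 8] >> (id % 8)) & 1; }

/* Example of an instrumented function: each path sets its own probe. */
int clamp(int x)
{
    COVER(0);
    if (x > 100) {
        COVER(1);
        return 100;
    }
    COVER(2);
    return x;
}
```

Running only `clamp(150)` sets probes 0 and 1 but leaves probe 2 clear, so the untaken path shows up immediately in the recorded bits. The extra macros and the bit array are exactly the size and RAM cost discussed in this section.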
No algorithm exists for this purpose, and none is possible: too many variables are involved, and every application is unique in its complexities. It is possible, however, to derive a set of estimates from a representative example.

6 Capturing Coverage Data in a Shared Environment

One of the primary issues with managing code coverage in an embedded target environment is the absence of spare memory to accommodate the additional instrumentation code. Estimates made at Vector Software on different samples of code indicate that the source code can grow by up to an additional 10% (for a state-of-the-art coverage tool) for the various types of coverage mentioned. For most 32-bit targets this is not an issue; for 8-bit and 16-bit targets with limited memory, it most certainly can be. Therefore, different code instrumentation techniques are employed to reduce the size of the executable, and several different data capture techniques are used depending on the memory available. In-memory caching systems monitor which sections of code have been reached; this is essential technology for keeping the amount of RAM used by the instrumented executable to a minimum.

7 Conclusion

Do not mistake code coverage for "tested" or "bug free". Having 100% code coverage does not mean the application is reliable; it only means that the tests that were executed stimulated all of the code at the desired level of coverage. Code coverage is not a final activity but a continual one on the path toward a reliable system, and the earlier it is employed the better. It provides scope to add tests, improve tests, remove redundant tests, and cover more code, thereby producing higher-quality, more reliable systems. While there are some hurdles to overcome when using code coverage tools in an embedded environment, recent tool advances make the use of such tools practical in almost all instances.

Get More Information

Visit our website for:

> News
> Products
> Demo software
> Support
> Training classes
> Addresses

www.vector.com