Programming Embedded Systems


Programming Embedded Systems Lecture 8 Overview of software testing Wednesday Feb 8, 2012 Philipp Rümmer Uppsala University Philipp.Ruemmer@it.uu.se 1/53

Lecture outline Testing in general Unit testing Coverage criteria System testing Some slides borrowed from the course Testing, Debugging, Verification at Chalmers 2/53

Overview Ideas and techniques of testing have become essential knowledge for all software developers. Expect to use the concepts presented here many times in your career. Testing is not the only, but the primary method that industry uses to evaluate software under development. 3/53

Overview (2) Testing field is HUGE Many different approaches, many different opinions Creating an exhaustive overview is a futile endeavour This lecture will cover some of the most important/common notions, in particular w.r.t. embedded systems 4/53

Correctness Software is called correct if it complies with its specification Spec. might be implicit or explicit, formal or informal, declarative or imperative, etc. Often: spec. is a set of requirements and/or use cases Software that violates spec. contains bugs/defects 5/53

Validation vs. Verification The V&V of software engineering Validation: assessment whether program has intended behaviour (this can mean: check whether chosen requirements are correct or consistent) Verification: assessment whether program complies to specification/requirements 6/53

Validation vs. Verification (2) Testing is the most common approach to V&V Testing is a form of dynamic V&V Works through concrete execution of the software Alternatives: Static V&V: model checking, etc. Code inspection/reviews 7/53

Faults, errors, failures Fault: abnormal condition or defect in a component (software or hardware) Error: deviation from correct system state, caused by a fault or by a human mistake when using the system Failure: observably incorrect behaviour of the system, caused by a fault Many different definitions exist! 8/53

Fault/failure example

```c
for (;;) {
    if (GPIO_ReadInputDataBit(GPIOC, SwitchPin)) {
        ++count;
    } else if (count != 1) {
        GPIO_WriteBit(GPIOC, ON1Pin, Bit_RESET);
        GPIO_WriteBit(GPIOC, ON2Pin, Bit_RESET);
        count = 1;               /* Software fault: wrong RHS (should be 0) */
                                 /* Error: variable has wrong value */
    }
    if (count == 10)             /* 0.2 seconds */
        GPIO_WriteBit(GPIOC, ON1Pin, Bit_SET);
    else if (count == 100) {     /* 0.2 + 1.8 seconds */
        GPIO_WriteBit(GPIOC, ON1Pin, Bit_RESET);
        GPIO_WriteBit(GPIOC, ON2Pin, Bit_SET);
    }
    vTaskDelayUntil(&lastWakeUpTime, PollPeriod / portTICK_RATE_MS);
}
```

Failure: wrong value on pin, rockets are launched 9/53

Testing consists of... designing test inputs running tests analysing results Done by test engineer Each of the steps can be automated 10/53

Test engineers vs. developers In practice, often different people Various opinions on this... Can you think of advantages/disadvantages? 11/53

Opinion 1 (Glenford J. Myers,...) Programmer should avoid testing his/her own program (misunderstanding of specs carry over to testing) A programming organisation should not test its own programs Conflict of interest 12/53

Opinion 2 (Kent Beck, Erich Gamma,...) Test-driven development: Developers create tests before implementation Test cases are form of specification Re-run tests on all incremental changes 13/53

What is testing good for? Bit of philosophy... 14/53

Boris Beizer's levels of test process maturity Level 0: Testing = debugging Level 1: Purpose of testing: show that software works Level 2: Purpose of testing: show that software does not work Level 3: Purpose of testing: reduce risk due to software Level 4: Testing is mental discipline to develop high-quality software 15/53

Level 0: testing = debugging Naïve starting point: debug software by writing test cases and manually observing the outcome No notion of correctness Does not help much to develop software that is reliable or safe 16/53

Level 1: show that software works Correctness is (almost) impossible to achieve Danger: you are subconsciously steered towards tests that are unlikely to make the program fail What do we know if there are no failures? Good software? Or bad tests? No strict goal, no real stopping rule, no formal test technique 17/53

Level 2: show that software doesn't work Goal of testing is to find bugs Puts testers and developers into an adversarial relationship What if there are no failures? Practice in most software companies 18/53

Level 3: reduce risk Correct software is not achievable Evaluate potential risks incurred by the software, and minimise them by writing the software appropriately; testing is guided by risk analysis Testers + developers cooperate 19/53

Level 4: testing as mental discipline Learn from test outcomes to improve development process Quality management instead of just testing V&V guides overall development Compare with spell checker: purpose is not (only) to find mistakes, but to improve writing capabilities 20/53

Categories of testing 21/53

Overview Acceptance testing System testing Integration testing Unit testing Orthogonal dimension: regression testing 22/53

Unit testing Assess software units with respect to low-level unit design E.g., pre-/post-conditions formulated for individual functions As early as possible during development 23/53

Integration testing Assess software with respect to high-level/architectural design Focus on interaction between different modules Normally done after unit/module testing 24/53

System testing Assess software with respect to system-level specification Could be overall requirements, or more detailed specification Testing by observing externally visible behaviour (no internals) Black-box approach Usually rather late during development (but also applied to early versions of complete system) 25/53

Acceptance testing Assess software with respect to user requirements Can be done either by system provider or by customer (prior to transferring ownership) Late in development process 26/53

Testing w.r.t. V model 27/53

Regression testing Testing that is done after changes in the software. Purpose: gain confidence that the change(s) did not cause (new) failures. Standard part of the maintenance phase of software development. Ideally: every implemented feature is covered (preserved) by regression tests Often: regression tests are written as reaction to found bugs 28/53

Unit testing 29/53

Structure of a unit test case Step 1: Initialisation E.g., prepare inputs Step 2: Call functions of the implementation under test (IUT) Step 3: Decision (oracle) whether the test succeeds or fails Each step can be performed manually or automatically; automation is preferable for many reasons: tests can be re-run easily, and correct behaviour is described formally 30/53

Unit testing example Suppose we want to test this function:

```c
void sort(int *array) { /* ... */ }
```

An executable test case:

```c
int testSort(void) {
    int a[] = { 3, 1, 5 };                    /* Initialisation   */
    sort(a);                                  /* IUT invocation   */
    return (a[0] <= a[1] && a[1] <= a[2]);    /* Oracle: return true
                                                 if test succeeded */
}
```

31/53

Test oracles Oracles are often independent of particular test case Can sometimes be derived mechanically from unit specification Oracle is essential for being able to run tests automatically Common special case: Oracle just compares unit outputs with desired outputs 32/53

Common English patterns (e.g., used in the Elevator lab!)

English                                 | Logic    | C (on ints)                            | Lustre (later in course)
A and B; A but B                        | A & B    | A && B                                 | A and B
A if B; A when B; A whenever B          | B => A   | !B || A                                | B => A
if A, then B; A implies B; A forces B   | A => B   | !A || B                                | A => B
only if A, B; B only if A               | B => A   | !B || A                                | B => A
A precisely when B; A if and only if B  | A <=> B  | A == B, (A && B) || (!A && !B)         | A = B
either A or B (exclusive or)            | A (+) B  | A != B, A ^ B, (A && !B) || (!A && B)  | A xor B
A or B, or both (logical or)            | A v B    | A || B                                 | A or B

Note: "A or B" on its own is ambiguous; to clarify, write either "either A or B" (exclusive) or "A or B, or both" (inclusive). 33/53

Sets and suites Test set: set of test cases for a particular unit Test suite: union of test sets for a number of units 34/53

Automated, repeatable testing By using a tool you can automatically run a large collection of tests The testing code can be integrated into the actual code (side-effect: documentation) After debugging, the tests are rerun to check if failure is gone Whenever code is extended, all old test cases can be rerun to check that nothing is broken (regression testing) 35/53

Automated, repeatable testing (2) Supported by unit testing frameworks (also called test harnesses), e.g., the xUnit family: SUnit (Smalltalk), JUnit (Java), CppUnit (C++) One of the most common commercial frameworks: C++test (Parasoft), which can be integrated with MDK-ARM 36/53

Construction of test suites Black-box (specification-based) Derive test suites from external descriptions of the software, including specifications, requirements, design, and input space knowledge White-box (implementation-based) Derive test suites from the source code internals of the software, specifically including branches, individual conditions, and statements Many approaches in between (e.g., model-based) 37/53

Test suite construction is related to coverage criteria Tests are written with a particular goal in mind Exercise program code thoroughly (white-box); or Cover input space thoroughly (black-box) 38/53

Assessing quality of test suites: coverage criteria (used in various kinds of testing) 39/53

Common kinds of coverage criteria Control-flow graph coverage Logic coverage Structural coverage Input space partitioning Mutation coverage In embedded software, it is often required to demonstrate coverage E.g., DO-178B (avionics standard), level A requires some level of MC/DC coverage 40/53

Input space partitioning Based on input domain modelling (IDM): abstract description of input space Input space partitioned into blocks (sets of input values) Values of each block are equivalent w.r.t. some characteristic Coverage criteria: has each block been covered by test cases? 41/53

Interface-based IDM Characteristics derived from signature, datatypes E.g., for integer inputs: interesting blocks are zero, positive, negative, maximum number, etc. 42/53

Functionality-based IDM Characteristics derived from intended program functionality E.g., different expected program outputs 43/53

Example: triangle classification Consider the program

```c
typedef enum { Scalene, Isosceles, Equilateral, Invalid } TriType;

TriType determineType(int length1, int length2, int length3) { /* ... */ }
```

Which test inputs would you choose? 44/53

Control-flow graphs (CFGs)

```c
int mult(int a, int b) {
    int z = 0;
    while (a != 0) {
        if (a % 2 != 0) {
            z = z + b;
        }
        a = a / 2;
        b = b * 2;
    }
    return z;
}
```

Graphical representation of code: Nodes are control-flow locations + statements Edges denote transitions + guards 45/53

Common notions in CFGs Execution Path: a path through a CFG that starts at the entry point and is either infinite or ends at one of the exit points Path condition: a path condition PC for an execution path p within a piece of code c is a condition that causes c to execute p if PC holds in the prestate of c Feasible execution path: path for which a satisfiable path condition exists Examples! 46/53

Statement coverage (SC) (a kind of CFG coverage) A test suite TS achieves SC if for every node n in the CFG there is at least one test in TS causing an execution path via n Often quantified: e.g., 80% SC Can SC always be achieved? 47/53

Branch coverage (BC) A test suite TS achieves BC if for every edge e in the CFG there is at least one test in TS causing an execution path via e. BC subsumes SC: if a test suite achieves BC (for a strongly connected CFG), it also achieves SC 48/53

Path coverage (PC) A test suite TS achieves PC if for every execution path ep of the CFG there is at least one test in TS causing ep PC subsumes BC PC cannot be achieved in practice: the number of paths is far too large (consider the mult example), and paths might be infeasible 49/53

Decision coverage (DC) (a kind of logic coverage) Decisions D(p) in a program p: set of maximal boolean expressions in p E.g., conditions of if, while, etc. But also other boolean expressions: A = B && (x >= 0); The precise definition is the subject of many arguments: only consider decisions that the program branches on? (B && (x >= 0) is a decision; B and (x >= 0) alone are not) 50/53

Decision coverage (DC) (2) NB: multiple occurrences of the same expression are counted as different decisions! E.g.

```c
if (x >= 0) { /* ... */ }
// ...
if (x >= 0) { /* ... */ }
```

Two decisions 51/53

Decision coverage (DC) For a given decision d, DC is satisfied by a test suite TS if it contains at least one test where d evaluates to false, and one where d evaluates to true (might be the same test) A test suite TS achieves DC for a program p if it achieves DC for every decision d in D(p) 52/53

DC example Consider the decision ((a < b) || D) && (m >= n * o) Inputs to achieve DC? TS achieves DC if it triggers the executions a = 5, b = 10, D = true, m = 1, n = 1, o = 1 and a = 10, b = 5, D = false, m = 1, n = 1, o = 1 53/53