Software Verification and Validation

Software Verification and Validation
VIMIMA11 Design and Integration of Embedded Systems
Balázs Scherer, BME-MIT, 2017

Software failures
- Error: a human action that leads to an undesired behavior.
- Fault: the error's manifestation in the code, often called a bug. Executing this part of the code leads to a failure.
- Failure: anomalous behavior of the software.

Importance of verification

ISTQB: International Software Testing Qualifications Board

Fundamentals of testing
- Testing shows the presence of defects: testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software, but even if no defects are found, this is not a proof of correctness.
- Exhaustive testing is impossible: testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, we use risks and priorities to focus the testing effort.
- Early testing: testing activities should start as early as possible in the software or system development life cycle and should be focused on defined objectives.

Fundamentals of testing
- Defect clustering: a small number of modules contain most of the defects discovered during pre-release testing, or show the most operational failures.
- Pesticide paradox: if the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new bugs. To overcome this 'pesticide paradox', the test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system and potentially find more defects.
- Testing is context dependent: testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.

Fundamentals of testing
- Absence-of-errors fallacy: finding and fixing defects does not help if the system built is unusable and does not fulfill the users' needs and expectations.

Testing life cycle

Test planning and control
- Determine the scope and risks and identify the objectives of testing
  - What software, components, systems or other products are in scope for testing?
  - Are we testing primarily to uncover defects, to show that the software meets requirements, to demonstrate that the system is fit for purpose, or to measure the qualities and attributes of the software?
- Determine the test approach
  - Test techniques to use
  - Coverage to achieve
- Implement the test policy and/or the test strategy
- Determine the required test resources

Test planning and control
- Schedule test analysis and design tasks, test implementation, execution and evaluation
- Determine the exit criteria: testing does not go on until exhaustion; there must be well-defined exit criteria, such as a coverage metric to achieve
- Measure and analyze the results of reviews and testing
- Monitor and document progress, test coverage and exit criteria
- Provide information on testing
- Initiate corrective actions

Test analysis and design
- Review the test basis
  - Product risk analysis, requirements, architecture, design specifications, and interfaces
- Identify test conditions based on the analysis of the test items, their specifications and behavior: a high-level list of what we are interested in testing
- Design the tests: select the test methods based on the test conditions
- Evaluate the testability of the requirements and the system
  - The requirements should be written in a way that allows a tester to design tests; for example, if the performance of the software is important, it should be specified in a testable way
- Design the test environment set-up and identify any required infrastructure and tools

Test implementation and execution
- Take the test conditions and turn them into test cases
- Put together a high-level design for the tests
- Set up the test environment
- Develop and prioritize test cases
- Create test suites from the test cases for efficient test execution: a test suite is a logical collection of test cases that naturally work together; test suites often share data and a common high-level set of objectives
- Implement and verify the environment
- Execute the test suites and individual test cases
- Log the outcome of test execution
- Compare actual results to expected results
- Analyze differences and repeat testing


Evaluating exit criteria, reporting, and test closure
- Compare the test results to the exit criteria
- Assess whether more tests are needed or whether the specified exit criteria should be changed
- Write a test summary report
- Finalize and archive testware, such as scripts, the test environment, and any other test infrastructure, for later reuse
- Evaluate how the testing went and analyze the lessons learned for future releases and projects

Who should perform the tests? Can developers test their own software or system?
- Hard to find the problems in our own work (we may misunderstand the specification the same way again)
- Tend not to test everything (not experts in test methods; forget, for example, negative tests)
+ No one knows the software or system better
Why testing independence is required:
+ Independent testers have a different point of view and can see problems invisible to the developer
+ They know the testing methods
+ Required by standards
- Less know-how about the system

Verification and test techniques

Static verification methods
Do not require executable code, so they can be used in the early phases.
[V-model diagram: requirement analysis / logical design ↔ user acceptance test (system level); technical system design ↔ system integration & test; subsystem design ↔ subsystem integration and test (subsystem level); module design ↔ module test (module level); implementation at the bottom]


Importance of verification

Static verification: Reviews
- Informal review
  - No formal process
  - Pair review, or a technical lead reviewing code or design
  - Results may be documented
  - Usefulness depends on the reviewer
  - Inexpensive way to get some benefit
- Walkthrough
  - Meeting led by the author in front of the peer group
  - May take the form of scenarios or dry runs
  - Open-ended meeting, with optional preparation
  - May vary in practice from quite informal to very formal
  - Main purposes are learning, gaining understanding, and finding defects

Static verification: Reviews
- Technical review
  - Documented, defined defect-detection process
  - Peers and technical experts are present
  - Typically led by a trained moderator
  - Pre-meeting preparation
  - Checklists may be used
  - A review report is created
  - Main purposes are discussing, making decisions, solving technical problems, and checking conformance to specifications, plans, and standards
- Inspections
  - Led by a trained moderator
  - Defined roles, formal process with checklists
  - Specified entry and exit criteria
  - Formal follow-up process
  - The purpose is finding defects

Static verification: static analyzers
- Checking against coding rules
  - MISRA-C compliance analysis
- Code metrics
  - 20% of the code causes 80% of the problems
  - Cyclomatic complexity analysis and calculations
  - Comment ratio
- Data flow verification
  - Are there uses of uninitialized variables?
  - Are there variable underflows or overflows? Is there a division by zero?
- Control flow analysis
  - Are there unreachable code lines?

Cyclomatic complexity
Gives information about the complexity of the code. Useful for identifying problematic software parts and for estimating the required testing effort.
M = E - N + 2
where E is the number of edges and N the number of nodes of the control flow graph. Example: 8 - 7 + 2 = 3.
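
As an illustration (not from the slides), a minimal C function whose control flow graph has two decision nodes, so its cyclomatic complexity is M = 3:

    /* Counting the positive elements of an array: the loop condition and
       the if condition are two decisions, giving M = E - N + 2 = 3. */
    int count_positive(const int *v, int n)
    {
        int count = 0;
        for (int i = 0; i < n; i++) {   /* decision 1: loop condition */
            if (v[i] > 0) {             /* decision 2: if condition */
                count++;
            }
        }
        return count;
    }

A complexity of 3 also matches the rule of thumb that M equals the number of decisions plus one, and suggests at least three test cases for branch coverage.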

Static analysis tool example: PolySpace
- Software from MathWorks
- Can be used for the C/C++ and Ada languages
- Checks compliance with the MISRA-C rules
- Uses abstract interpretation to discover run-time problems:
  - Out-of-bounds array indexing
  - Errors in pointer usage
  - Use of uninitialized variables
  - Invalid arithmetic operations (division by zero, square root of a negative number)
  - Overflow or underflow of variables
  - Disallowed type conversions
  - Unreachable code, endless loops

PolySpace in operation
Static verification results are color coded:
- Green: reliable, good code
- Red: faulty under every circumstance
- Grey: unreachable code
- Orange: not proven to be either good or bad; needs to be checked by the programmer

Dynamic verification techniques
Executable code is needed for these methods: testing.
[V-model diagram: requirement analysis / logical design ↔ user acceptance test (system level); technical system design ↔ system integration & test; subsystem design ↔ subsystem integration and test (subsystem level); module design ↔ module test (module level); implementation at the bottom]


Dynamic methods

Test types
- Functional tests
  - Usually black-box tests
  - Based on specifications
- Non-functional tests
  - Maintainability, usability, reliability, efficiency
  - Performance, load, stress tests
- Structure-based tests
  - White-box tests
  - Coverage-based tests
- Software change tests
  - Confirmation test: testing after bug fixing, ensuring the bug fix is done well
  - Regression test: testing after a minor software change to ensure the change has not affected the main functionality

White-box, grey-box, and black-box tests
- White-box test
  - The source code is known and is used during the test
  - Typically structure-based testing
  - Used mainly at the bottom of the V-model
- Black-box tests
  - The internal behavior of the system is not known; the test focuses on the inputs and the outputs
  - Typically specification-based testing
  - Used throughout the verification & validation side of the V-model
- Grey-box tests
  - Focus on the inputs and outputs
  - The internal structure and source code are not known, but there is information about the critical internal states of the system
  - Typically used for embedded systems during integration

Specification-based techniques
The specification and requirements of the software or system under test are given. The goal is to create test data that can assure, in an efficient way, that the system or software meets its specification and requirements.
Techniques:
- Equivalence partitioning
- Boundary value analysis
- Decision tables
- State transition testing

Equivalence partitioning
Goal: identify equivalence classes
- Divide (i.e. partition) the set of test conditions into groups or sets that can be considered the same (i.e. the system should handle them equivalently), hence 'equivalence partitioning'
- We assume that all the conditions in one partition will be treated in the same way by the software
- The equivalence-partitioning technique then requires that we test only one condition from each partition
- Wrong identification of the partitions can cause problems
Typical partitions or classes:
- Value ranges
- Value sets
- Special clusters

Equivalence partitioning examples
- Simple value ranges: on a 0..255 scale, the values from 100 to 200 are valid, while the values below 100 and above 200 are invalid
- Complex variable sets
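
As a sketch of picking one test value from each partition (the function name and implementation are hypothetical, assuming the 0..255 range above with 100..200 valid):

    #include <assert.h>

    /* Hypothetical function under test: accepts values in [100, 200]. */
    static int is_valid_level(int level)
    {
        return level >= 100 && level <= 200;
    }

    int main(void)
    {
        assert(is_valid_level(50)  == 0);   /* invalid partition: below 100 */
        assert(is_valid_level(150) == 1);   /* valid partition: 100..200 */
        assert(is_valid_level(230) == 0);   /* invalid partition: above 200 */
        return 0;
    }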


Equivalence partitioning examples
The up-switching characteristic of an automatic transmission controller.

Boundary value analysis
Defects like to hide at the partition boundaries:
- Wrong decision conditions
- Wrong iteration exit or entry criteria
- Wrong data structure handling
Test both boundaries of a cluster or partition (on the 0..255 scale: invalid | valid | invalid, with boundaries at 100 and 200).
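
Continuing the hypothetical is_valid_level() sketch above, boundary value analysis adds the values on each side of both boundaries:

    #include <assert.h>

    /* Same hypothetical range check as in the previous sketch. */
    static int is_valid_level(int level)
    {
        return level >= 100 && level <= 200;
    }

    int main(void)
    {
        /* Test each side of both partition boundaries. */
        assert(is_valid_level(99)  == 0);   /* just below the lower boundary */
        assert(is_valid_level(100) == 1);   /* lower boundary itself */
        assert(is_valid_level(200) == 1);   /* upper boundary itself */
        assert(is_valid_level(201) == 0);   /* just above the upper boundary */
        return 0;
    }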

In a real case it can be far from trivial: the up-switching characteristic of an automatic transmission controller.

Why should we perform both equivalence and boundary tests?
Does a boundary value test cover the whole partition?

Why should we perform both equivalence and boundary tests?
Does a boundary value test cover the whole partition?
- Theoretically yes; it can work in an ideal world.
- If a boundary value test signals a defect: does the whole partition work wrong, or is just the boundary bad?
- Testing only extreme situations gives less confidence, so we must also test some values inside the equivalence partition.
- Sometimes it is extremely hard to find the boundaries.
- Sometimes the partitioning itself can be wrong.

Cause-effect analysis, decision tables
Analyzing the relationship of inputs to outputs:
- Cause: an input partition
- Effect: an output partition
- Boolean algebra is used for this, e.g. speed > 50 km/h, gas pedal < 30%
Boolean graph:
- AND, OR relationships
- Not-allowed combinations
Decision table:
- One column is one test case

Cause-effect analysis example: webshop discount analysis for a customer.
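
The slide only names the example, so the following decision table is a hypothetical illustration (conditions and discount rules invented), showing the 'one column is one test case' layout:

    Condition                 T1  T2  T3  T4
    Registered customer        T   T   F   F
    Order total over 100 EUR   T   F   T   F
    Action
    10% discount               X   -   -   -
    Free shipping              X   -   X   -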

State transition testing
Using the state diagram. Test goals:
- The states that the software may occupy
- The transitions from one state to another
- The events that cause a transition
- The actions that result from a transition
- Testing for invalid transitions
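
A minimal sketch of state transition testing (the two-state machine and its transition function are invented for illustration); it exercises the valid transitions and checks that an invalid transition causes no state change:

    #include <assert.h>

    typedef enum { IDLE, RUNNING } state_t;
    typedef enum { EV_START, EV_STOP } event_t;

    /* Hypothetical transition function: returns the new state;
       invalid transitions leave the state unchanged. */
    static state_t step(state_t s, event_t e)
    {
        if (s == IDLE && e == EV_START)   return RUNNING;
        if (s == RUNNING && e == EV_STOP) return IDLE;
        return s;
    }

    int main(void)
    {
        assert(step(IDLE, EV_START) == RUNNING);   /* valid transition */
        assert(step(RUNNING, EV_STOP) == IDLE);    /* valid transition */
        assert(step(IDLE, EV_STOP) == IDLE);       /* invalid: state must not change */
        return 0;
    }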

Use case testing
- Typically used at the final stage of integration, as system tests
- Test scenarios are generated from user stories

Structure-based tests

Structure-based methods
White-box methods:
- Require the execution of the code
- Give easy-to-understand measurement values
- Alone they are not a test
- They do not protect against malfunction
Methods:
- Statement coverage
- Decision coverage
- Condition coverage
- Path coverage

Terms
- Statement: one C-language statement
- Statement block: a continuous series of statements, without a branch or decision
- Condition: a simple logical condition (AND, OR ...)
- Decision: a decision based on one or more logical conditions
- Branch: one possible outcome of a decision
- Path: a series of statements between the module's input and output

What are structure-based methods good for?
A typical application is to determine the end-of-test condition: have we tested enough? The slide compares the specification against the tests with the minimum test cases A, B, C, D, with increasing coverage:

    Condition               A  B  C  D
    Valid user name         T  T  T  F
    Valid login             T  T  F  -
    Permitted access mode   T  F  -  -
    Error log readout       T  T  F  F
    Error log clearing      T  F  F  F
    Software update         T  F  F  F


Statement coverage
Definition:
    Statement coverage = (number of statements exercised / total number of statements) * 100%
Not a strong metric. (The slide contrasts an example with 100% statement coverage against one with 67% statement coverage.)
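
To see why statement coverage is weak, consider this sketch (function invented for illustration): the single input a = 5 executes every statement, i.e. 100% statement coverage, yet the false outcome of the decision is never exercised (only 50% decision coverage):

    /* With a = 5 all statements run, but the a <= 0 outcome is never tested. */
    int clamp_positive(int a)
    {
        int result = a;
        if (a > 0) {
            result = a * 2;
        }
        return result;
    }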

Decision coverage
Definition:
    Decision coverage = (number of decision outcomes exercised / total number of decision outcomes) * 100%
A rather strong metric. Example: for the decision ((a == b) || (c > 5)), a single test case gives 50% decision coverage.

Multiple condition coverage
Definition:
- Every possible condition combination is tested
- The number of test cases grows enormously
Example: ((a < 10) || (b > 5))
1. test: a = 5 (T), b = 0 (F)
2. test: a = 12 (F), b = 10 (T)
3. test: a = 15 (F), b = 2 (F)
4. test: a = 6 (T), b = 8 (T)
These four tests give 100% multiple condition coverage.

MC/DC coverage
Definition:
- Each entry and exit point is invoked
- Each decision takes every possible outcome
- Each condition in a decision takes every possible outcome
- Each condition in a decision is shown to independently affect the outcome of the decision
Example: ((a < 10) || (b > 5))
1. test: a = 5 (T), b = 0 (F)
2. test: a = 12 (F), b = 10 (T)
3. test: a = 15 (F), b = 2 (F)
These three tests give 100% MC/DC coverage.

Path coverage
Definition:
    Path coverage = (number of executed paths / total number of possible paths) * 100%
Too strong a metric. (The slide's example reaches 67% path coverage.)

Ad-hoc, trial and error
Non-systematic tests:
- Error guessing
  - Executed after the normal tests
  - There are no rules for error guessing
  - Typical conditions to try include division by zero, blank (or no) input, empty files and the wrong kind of data
- User testing
  - Acceptance tests made by the end users
  - Based on the usage scenarios

Dynamic test techniques in the V-model
Application of the methods above.
[V-model diagram: requirement analysis / logical design ↔ user acceptance test (system level); technical system design ↔ system integration & test; subsystem design ↔ subsystem integration and test (subsystem level); module design ↔ module test (module level); implementation at the bottom]

Software component testing
Connection to the design process: test cases come from the design path (the output of the specification-based tests) and from the implementation (the output of the implementation-related tests); the source code enters the component-testing step, which verifies the data model, the behavior model and the real-time model, and outputs the tested software components.

Module testing
- The goal is to find the problems as early as possible
  - Typically done by the developer
- Every module is handled separately
  - Low-complexity tests
  - Easy to localize the problem
- It is easier to integrate individually tested modules
  - A way to handle complexity
- Module tests can be automated
  - They have to be executed many times, therefore they should be fast and easy to execute

Typical problems of module testing
The module should be tested as a separate component, but it depends on other modules, so isolation is needed. (Diagram: module hierarchy K, K1, K2, K21, K22, K23; UUT: Unit Under Test.)

Typical problems of module testing
For isolation, a test executor drives the UUT, and the modules it depends on are replaced by test stubs.

Typical problems of module testing
In many cases the tests are not executed on the real hardware: emulation is used.

Requirements for unit tests in automotive

Tools for specification-based tests
xUnit, CUnit, Unity: simple libraries for automated unit tests. The unit under test (uut.c) and the test code (uuttests.c) are compiled and linked against cunit.lib into uuttest.exe, which produces the reports. Simple assertions are used: CU_ASSERT_TRUE(a), CU_ASSERT_EQUAL(a, b).
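
A minimal CUnit sketch using the assertions named above (the add() function under test is hypothetical; the registry and suite calls are the standard CUnit Basic interface):

    #include <CUnit/Basic.h>

    /* Hypothetical unit under test. */
    static int add(int a, int b) { return a + b; }

    static void test_add(void)
    {
        CU_ASSERT_EQUAL(add(2, 3), 5);
        CU_ASSERT_TRUE(add(-1, 1) == 0);
    }

    int main(void)
    {
        CU_initialize_registry();
        CU_pSuite suite = CU_add_suite("add_suite", NULL, NULL);
        CU_add_test(suite, "test of add()", test_add);
        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests();      /* prints the test report to the console */
        CU_cleanup_registry();
        return CU_get_error();
    }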

Tools for structure-based tests
Software-based tools:
- Code instrumentation
  - Overhead: time, data and program memory
  - We test the code with instrumentation; if we remove the instrumentation, then that is not the code we tested

Tools for structure-based tests
- Running the code in an emulator
  - No need for code instrumentation
  - Not exactly the same environment as the real one (peripheral handling, timing)
  - Uses the debugger
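
A sketch of what code instrumentation means in practice (manual counters invented for illustration; real coverage tools insert equivalent bookkeeping automatically, which is exactly the time and memory overhead mentioned above):

    /* One hit counter per basic block; reading them after the test run
       shows which blocks (and thus which branches) were executed. */
    static unsigned int block_hits[3];

    int clamp_positive(int a)
    {
        int result = a;
        block_hits[0]++;           /* entry block executed */
        if (a > 0) {
            block_hits[1]++;       /* true branch executed */
            result = a * 2;
        }
        block_hits[2]++;           /* exit block executed */
        return result;
    }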

Tools for structure-based tests
Hardware-based measurement:
- Monitoring the memory data and address lines
  - The on-chip memories of modern microcontrollers do not make this possible
- Internal, microcontroller-dependent support is needed
  - Trace modules, e.g. the ARM Embedded Trace Macrocell
  - Costly hardware interface

Instrumentation tool: GCOV
Part of the GCC toolchain; can be ported to embedded systems. It instruments the code and creates log files. Annotated listing (execution count per line; ###### marks lines never executed):

    #include <stdio.h>
    int main (void)
    {
         1:     int i;
        10:     for (i = 1; i < 10; i++) {
         9:         if (i % 3 == 0)
         3:             printf ("%d is divisible by 3\n", i);
         9:         if (i % 11 == 0)
    ######:             printf ("%d is divisible by 11\n", i);
         9:     }
         1:     return 0;
         1: }
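
To reproduce such a listing on a host, the program is compiled with GCC's coverage options (-fprofile-arcs -ftest-coverage, or the --coverage shorthand), the executable is run so that the execution-count files are written, and gcov is then invoked on the source file to generate the annotated output.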

Hardware measurement based tool: a hardware-based coverage tool.

Questions about condition coverage
Example code: if ((a == 10) && (b == 20)) ...
(Flowchart: the if statement branches, true to the then-branch instructions, false past them to the following instructions.)
How can we measure condition coverage?

Questions about condition coverage
Decomposition into basic blocks: the compound if statement is split into one if statement per condition (#1 and #2), each with its own true and false branches leading either toward the then branch or to the statements after the if.
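
A sketch of this decomposition in C (the helper function and wrapper are invented for illustration; coverage tools perform the equivalent splitting internally):

    #include <stdio.h>

    static void then_branch(void) { puts("both conditions true"); }

    static void check(int a, int b)
    {
        /* Original compound decision:
           if ((a == 10) && (b == 20)) then_branch();
           Decomposed into one if per condition, so each condition's
           true/false outcome becomes a separately measurable branch
           (short-circuit evaluation is preserved): */
        if (a == 10) {            /* condition #1 */
            if (b == 20) {        /* condition #2, reached only if #1 is true */
                then_branch();
            }
        }
    }

    int main(void) { check(10, 20); return 0; }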

Real-time model checking
Schedulability calculation: what information do we need? Is it enough to measure the response time?

Response time can be tricky: it is the execution time that is important.
[Two plots comparing response time (run time) and execution time (ms) over system time]

Execution time is needed for schedulability calculations, e.g. DMA: Deadline Monotonic Analysis. How can the execution time be measured?

RTOS trace hooks
FreeRTOS, for example, has such hooks:
- Event starting a task switch: traceTASK_SWITCHED_OUT() macro
- Saving the old task's context
- Task context switch: traceTASK_SWITCHED_IN() macro
- Starting the new task
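
A sketch of using these hooks for execution-time measurement (the macro names are the real FreeRTOS trace hooks, typically defined in FreeRTOSConfig.h; the timestamp source and logging helpers are hypothetical):

    /* In FreeRTOSConfig.h */
    extern unsigned long trace_timer_now(void);              /* hypothetical free-running HW timer */
    extern void log_execution_time(unsigned long elapsed);   /* hypothetical logger */
    extern unsigned long task_switched_in_at;

    /* Called just after a task is switched in: remember the timestamp. */
    #define traceTASK_SWITCHED_IN() \
        (task_switched_in_at = trace_timer_now())

    /* Called just before a task is switched out: the time elapsed since it
       was switched in is the execution-time slice it has just consumed. */
    #define traceTASK_SWITCHED_OUT() \
        log_execution_time(trace_timer_now() - task_switched_in_at)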

DWT: Data Watchpoint and Trace unit
- 4 comparators: data address / program counter
  - Hardware watchpoint: puts the processor into debug state
  - ETM trigger: starts trace packets
  - PC sample trigger
  - Data address sample trigger
- Counters
  - Clock cycle counter
  - Sleep cycle counter
  - Interrupt overhead counter
- PC sampling
- Interrupt trace
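
A sketch of measuring execution time with the DWT clock cycle counter on a Cortex-M device (the CoreDebug/DWT register and mask names are from CMSIS; the measured function is a placeholder):

    #include <stdint.h>
    /* The vendor's CMSIS device header defines DWT and CoreDebug;
       include it here in a real project. */

    uint32_t measure_cycles(void (*measured_function)(void))
    {
        CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;   /* enable the DWT unit */
        DWT->CYCCNT = 0;                                  /* reset the cycle counter */
        DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;             /* start counting */
        measured_function();
        return DWT->CYCCNT;                               /* elapsed CPU cycles */
    }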

Integration tests
Application of the methods above.
[V-model diagram: requirement analysis / logical design ↔ user acceptance test (system level); technical system design ↔ system integration & test; subsystem design ↔ subsystem integration and test (subsystem level); module design ↔ module test (module level); implementation at the bottom]

Software integration
Connection to the design process: from the specification of the software architecture, the specification and the implementation of the software components feed the integration step (integration of software components, creating the data and code version of the software, generating documentation, creating description files); the outputs are the integrated software, the software documentation and the description files.

Integration methods: big bang
- Integration of all the tested components at once
- Harder to find the problems
- Confirmation testing after bug fixing requires more effort
- Can be effective for small systems

Incremental integration
- Baseline 0: 1 tested component
- Baseline 1: 2 tested components
- Baseline 2: 3 tested components
Benefits:
- Easier to localize the problems
- Easier to recover after problems

Top-down integration
From the top to the bottom; test stubs are needed:
- Simple printf
- Time delays
- Simple calculation-based responses
- Responses from a table
Benefits:
- The higher-level functionality is tested first
Drawbacks:
- Test stubs are needed
- The details are tested late
- The result is just a simulation for a very long time
(Diagram: baseline 0 contains the top module K; the submodules K1 and K2 are replaced by test stubs B1 and B2, with K21, K22, K23 below them.)

Bottom-up integration
From the bottom to the top.
Benefits:
- Good for hardware-intensive systems
- We have real data and real timing from the start
Drawbacks:
- High-level functions are tested at the end
- Test drivers and stubs are needed
(Diagram: baseline 0 contains the leaf modules K21, K22; test drivers and stubs B1, B2, B3 stand in for the not-yet-integrated modules above them.)

Functional integration
Function-by-function integration.
Benefits:
- The highest abstraction level is tested many times
- Real responses early
- Really working functionality very early
Drawbacks:
- Many test stubs are needed
(Diagram: function-by-function baselines B0, B1, B2 over the module hierarchy K, K1, K2, K21, K22, with test stubs for the not-yet-integrated modules.)

Testing software systems
Connection to the design process: test cases come from the specification; the software system enters the system-testing step (testing the integrated software system, verification of the software states, verification of the real-time model); the outputs are the test results and the tested software system.

Typically used integration tests
Specification-based techniques:
- Equivalence partitioning
- Boundary value analysis
- Decision tables
- State transition testing
Non-functional tests:
- Performance and load tests
- Usability tests

Continuous integration
After each commit:
- Automatic sending for review
- Automatic build
- Automatic unit tests
Tools: Jenkins, Gerrit

Continuous integration

System integration testing
VIMIMA11 Design and Integration of Embedded Systems
Balázs Scherer

System integration testing
Connection to the design process: test cases come from the specification; the hardware system and the software system enter the system integration testing step (communication interfaces, distributed operation, real-time behavior, control loop verification, safety system verification); the outputs are the test results and the tested system.

System integration test requirements: ISO 26262

MIL, SIL, HIL and test cells in the V-model (source: ni.com)

Importance of early testing

Model-in-the-Loop test
The main functions can be tested very early (Simulink, LabVIEW).
(Diagram: on a PC, the environment model is coupled to the control system model.)

MIL (Model-in-the-Loop) test
Benefits:
- The main functions are tested very early
- Domain-specific languages are used: Simulink, LabVIEW
- Very fast testing is possible
Drawbacks:
- The details cannot be tested: real-time behavior, architecture dependency

Software-in-the-Loop tests
The software is generated from the model; code generation problems and data precision errors can be found.
(Diagram: on a PC, the environment model is coupled to the control software.)

Software-in-the-Loop tests
Benefits:
- Code generation can be verified
- Data precision can be verified
Drawbacks:
- The code does not run on the same processor as in the real system:
  - Real-time problems cannot be verified
  - Real peripheral problems cannot be verified

Processor-in-the-Loop test
A real processor is used, with real peripherals.
(Diagram: the environment model is coupled to the control software running on the target processor.)

Processor-in-the-Loop test
Benefits:
- Real-time behavior
- Real arithmetic and peripherals
Drawbacks:
- Many hardware functions are still simulated

Hardware-in-the-Loop test
The real hardware (ECU) is used.
(Diagram: the environment model is coupled to the control software running on the ECU.)

Hardware-in-the-Loop test
Benefits:
- Real-time behavior
- Real arithmetic and peripherals
- Repeatable tests
- Safety-critical situations can be examined without causing real problems
Drawbacks:
- The environment model has to be very accurate

Hardware-in-the-Loop test environments

A typical HIL test architecture
Interfacing:
- Digital I/O
- Analog I/O
- Communication lines
Extended information:
- Diagnostic communication
(Diagram: the Device Under Test connects through hardware I/O and diagnostic communication to a real-time environment simulator running the environment model; the tester side handles test configuration and control, test execution, test profiles and logs, with global variables shared between them.)

Widespread HIL test development environment: NI VeriStand
- Supports many hardware devices for traditional measurements
- Many possible model representation modes
- Real-time operation is possible
- Extendable with custom functions

Maturity phase, long-term tests
Long-term tests:
- Run in parallel on many devices (5-100 pcs) as accelerated lifetime-simulation tests
- N x 1000 hours of testing: continuous control and logging is needed
- Typically climate chambers, shock pads and shaker pads are included
- There is no need for complex environmental simulation
- Requires automated tests

Development environments for test automation
- Able to build a test sequence from test steps
  - Loops and if/case conditions can be used
- Parameters and limit values are configurable for the test steps
  - Typically read from a file
- Supports parallel testing
  - Typically many devices are tested together

NI TestStand
TestStand properties:
- Creating LabVIEW, C#, or Python based test steps
- Assigning limits and parameters to test steps
- Controlling test sequences
- Creating reports after the tests
(TestStand sequence structure: Setup, Main and Cleanup step groups; a Main step may be a sequence call to a subsequence, which again has its own Setup, Main and Cleanup groups.)

NI TestStand user interface

User acceptance tests
Connection to the design process: test cases come from the specification and the use cases; the integrated system enters the user acceptance test; the outputs are the test results and the tested system.

User acceptance testing
Involving the end user in the testing:
- The end user knows what is really needed
- Different point of view
User acceptance tests cover 80% of the functionality but only 20% of the code; system tests cover 80% of the code but only 20% of the functionality.
- Alpha test: at the developer's site, involving the end user
- Beta test: in the real environment

Manufacturing tests
Very similar to a simplified environment test:
- Executing a test sequence
- Parallel execution is needed: many devices have to be tested in a short period of time
- Verification of the manufacturing, not of the design
- Only the main functionality is tested

Manufacturing tests
- Bed-of-nails tester
  - In-circuit test points on the PCB
- Very simplified environmental tests, if any
  - Only the main functions are tested
  - Often a special test software is downloaded to the UUT
- Vision-based tests
  - Verifying the manufacturing: component placement and soldering