Embedding Software Quality II: Using Hitex's Partner Tools


Editorial

More than a year has passed since Hitex released part one of the brochure "Embedding Software Quality". Part one explains how an in-circuit emulator from Hitex can be used to achieve software quality, covering topics such as performance analysis, code coverage, and using the HiSCRIPT language for regression tests. Since then, customer interest in software quality has grown steadily. This is one main reason why Hitex has expanded its product portfolio with partner products aimed at achieving software quality. Another reason is that Hitex itself is dedicated to enhancing software quality, a motivation that also finds expression in the Hitex slogan "Embedding Software Quality". Part two of the brochure therefore focuses on Hitex's partner tools and points out how they can be used to achieve software quality.

The Author

Frank Büchner studied computer science at the Technical University of Karlsruhe. Since graduating, he has spent more than a dozen years working in the field of embedded systems. He currently works at Hitex in Karlsruhe, Germany, as a software product manager. Dr. Matthias Grochtmann contributed to the CTE article. Chris Hills contributed to the MISRA part of the DAC article. The RiskCAT article was co-authored by Dr. Günter Glöe and Ernst-Ulrich Mainka. The author would like to thank all the people who contributed in any form to this brochure.

Hitex Development Tools thanks the DKE (Deutsche Kommission Elektrotechnik Elektronik Informationstechnik im DIN und VDE) and the IEC (International Electrotechnical Commission) for permission to reproduce extracts from page 31 of International Standard IEC 61508. All such extracts are copyright of IEC, Geneva, Switzerland. All rights reserved. Further information is available from DKE and from the IEC. DKE and IEC have no responsibility for the placement and context in which the extracts and contents are reproduced by CATS; nor are DKE/IEC in any way responsible for the other content or accuracy therein.

Contents

1 Automate Testing with Tessy
   1.1 Issues of Software Testing: Practical Problems; Standards' Requirements; Tessy, the Ideal Solution
   1.2 Tessy's Basic Functionality: Problem Specification; Interface Analysis; Test Driver Generation; Test Data Acquisition; Running the Test; Test Evaluation; Debugging; Test Documentation; Handling Externals
   1.3 Regression Testing with Tessy: Test Case Adaptation; Further Use of Test Data
   1.4 Test Coverage with Tessy: Line Coverage; Branch Coverage
   1.5 Integration Testing with Tessy
   1.6 Conclusion
2 The Classification Tree Method and the CTE
   2.1 From Problem to Test: The Problem Definition; Test-Relevant Aspects
   2.2 Applying the Method: Forming Classes; A Systematic Approach; Critical Values; Error-Sensitive Test Cases; Risk Analysis; Classes for Test Values; Test Case Specification; Separating Specification from Data; Test Coverage
   2.3 More About the CTE
   2.4 The Classification Tree Method in the Development Process
   2.5 Conclusion
   2.6 References
3 Static Software Analysis Using DAC
   3.1 MISRA: The MISRA C Guidelines; The Rules; Example; Formal Deviations; Application of the MISRA C Guidelines; Impacts of Applying the MISRA C Guidelines; Future of MISRA C
   3.2 MISRA C and DAC
   3.3 Software Metrics: Simple Metrics; Sophisticated Complexity Metric; Additional Metrics in DAC; Displaying Metrics in DAC; Using Metrics
   3.4 Concluding Remarks: MISRA C; Software Metrics
   3.5 References and Further Reading
4 Using RiskCAT to Cope with IEC 61508
   4.1 Introduction: Applying IEC 61508; Tool Support
   4.2 Master IEC 61508 Using RiskCAT: Determining the Risk; Determining the Safety Integrity Level (SIL); Determining the Measures and Their Degree of Obligation; Selecting Required Measures
   4.3 Additional Features of RiskCAT: Attaching Notes to Measures; Documenting the Results; Project Store/Load; Link to the Standard's Text
   4.4 More About IEC 61508
   4.5 Conclusion
Glossary
References

1 Automate Testing with Tessy

Automated Testing as a Means to Achieve Software Quality

The crucial issue in achieving software quality is for the software to be zero-defect. This issue clearly dominates over others such as maintenance, portability and performance, since if bugs put the software in a state of disorder, the entire product would most likely lose any credibility for quality and value. Dynamic testing is the principal method used to prove zero-defectiveness in software.

1.1 Issues of Software Testing

1.1.1 Practical Problems

Although the need for dynamic software testing is denied by no one, until now there has been little tool support for this important task in the software development process. Therefore, dynamic software testing is mostly performed interactively. This means the test object (the function under test) is loaded into a debugger and executed. The input values for the test object are entered manually, using the debugger. Then the test object is executed and its behavior is watched. After execution, the results (i.e. the output values) are checked against the expected values. This raises some practical problems:

-> Manual dynamic software testing is time-consuming.
-> Besides being tedious and cumbersome, the process is error-prone. The entered test data is selected ad hoc, and how carefully the results are checked depends on the human tester and therefore varies.
-> Normally, neither the test data nor the test environment is saved anywhere, so it is not possible to repeat tests exactly.
-> If the software is changed, all previous tests should be repeated. This holds true even if the software was changed only a little, e.g. for a bug fix. Repeating all tests is also strongly recommended if a new compiler version is used to compile the software, or if the software is ported to another microcontroller architecture. If this has to be done manually, it often takes too much time to be practical, and therefore only the most important tests are repeated.

Granted, most of the problems listed above are addressed in most software development departments, because of the great importance of dynamic software testing. The general approach is to implement a proprietary test environment. Of course, the quality of this test environment depends on the effort put into it. Comprehensive test environments for embedded software therefore do exist, mainly in big companies which can afford them. These test environments can be adapted to new requirements (new test objects, compilers, etc.) and are well maintained. However, the common situation is that a test environment is tailored to a specific test object, the database and scripting tools involved are the favorite tools of the tester, and adapting the test environment during the development process to changing test objects (with steadily changing interfaces) normally takes the same effort as extending the test object itself. Adapting these home-grown test environments to new projects is normally so much effort that the current test environment is dropped and a new one is invented for the new project.

1.1.2 Standards' Requirements

Nowadays, more and more standards that address the quality of the software development process, or methods that rate the quality of this process, come into play. Examples of such standards are Bootstrap, the Capability Maturity Model (CMM), SPiCE (ISO/IEC 15504), various military standards like the British Defence Standard 00-55, etc. These standards were driven by the military and by industries where reliability under all circumstances is a must (e.g. avionics, aerospace, medicine). The availability of those standards leads to their application in other industries like automotive and telecom. There is a wide variety of standards (each industry or company tends to have its own), and they normally cover a great deal of issues, but when it comes to testing, all standards require more or less the same. This is not surprising, because most of the requirements could also come from common sense:

-> Necessity of planning the tests in advance. This includes the determination of the test cases (how many? which?) and the provision of the required input and output test data.
-> Necessity of documenting the tests. This should lead to full reproducibility of the tests (When was the test run? Which version of the software was tested? Which test data was used? What was the outcome? Which version of the test environment was used? etc.)
-> Necessity of unit and integration testing. This means intensive testing of single units (in C such a unit is a function) and then integrating these units step by step to eventually form the whole software system.

-> Necessity of determining the test coverage. This reveals which parts of the source code were exercised by the tests (and which were not).

Even if you are not forced to develop according to standards or standard methods, it is a good idea to address the issues mentioned above, at least partly.

1.1.3 Tessy, the Ideal Solution

Tessy is a commercially available tool that facilitates the automated dynamic testing of software for embedded systems. The functionality of Tessy addresses almost all of the problems and requirements mentioned above.

1.2 Tessy's Basic Functionality

Tessy can be brought into use as soon as a suitable compiler for the target hardware can compile the software module containing the C function to be tested. Tessy analyses the software module and lists all C functions it finds within it. The user can then select which function is to be tested.

1.2.1 Problem Specification

A very elementary problem specification forms the starting point of our tour through Tessy's basic functionality: A range of values is specified by a given start value and a given length. Is a value (the proband) within this range or not? Or, more mathematically:

range_start <= v1 < (range_start + range_length)

A C function implementing this problem serves as the test object; a possible implementation in C is shown in the sketch following section 1.2.3. Let's have a closer look at our test object, the C function is_value_in_range(). Using the value 5 for range_start and the value 2 for range_len, the following table gives the expected result depending on the input value v1. However, the implementation is (intentionally) erroneous: v1 == 7 results in "yes" instead of "no".

    Input value (v1)   Expected result
    4                  no
    5                  yes
    6                  yes
    7                  no
    8                  no

(Table: Input and expected result for the function is_value_in_range)

1.2.2 Interface Analysis

Tessy analyses the source code of the function to be tested and determines the number and types of the variables of the test object's interface, and whether they are input, output, or both. Tessy's internal Test Interface Editor (TIE) graphically displays this information about the direction of the data flow; if necessary, it can easily be modified or completed by the user.

(Figure: The Test Interface Editor displays Tessy's findings about the direction of the data flow for each variable of the function's interface)

1.2.3 Test Driver Generation

From the information gained from the interface analysis, Tessy then automatically generates additional source code, the so-called test driver. The test driver consists of the startup code for the microcontroller in question and code to call the function to be tested (the test object). Furthermore, the test driver may contain user-supplied source code or data (e.g. for characteristic curves). Everything together forms the test application, which is compiled and linked automatically by Tessy. The test driver also supplies the main() function of the test application.
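The brochure's original listing is not reproduced in this transcription. The following is a minimal sketch of what the test object could look like; the parameter names follow the text, while the result type and layout are assumptions. The intentional error matches the behavior described above and the debugging example in section 1.2.7, where a missing "=" is the culprit:

    typedef enum { no = 0, yes = 1 } range_result;

    /* Is v1 within [range_start, range_start + range_len) ? */
    range_result is_value_in_range(int v1, int range_start, int range_len)
    {
        range_result result = yes;
        if (v1 < range_start)
        {
            result = no;                       /* below the range */
        }
        if (v1 > range_start + range_len)      /* BUG: should be >= */
        {
            result = no;                       /* above the range */
        }
        return result;
    }

With range_start = 5 and range_len = 2, the faulty comparison lets v1 == 7 slip through as "yes"; adding the missing "=" (i.e. using >=) yields the expected behavior.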

1.2.4 Test Data Acquisition

A test case consists of the input values and the expected output values (results). These values can now be specified in Tessy using its built-in Test Data Editor (TDE). The TDE graphically displays the interface of the function to be tested and is able to expand complex interface elements such as structures to elementary data types. Both input and output values are automatically stored in a database. Since entering test data is normally a major task for the user, Tessy's TDE allows this to be accomplished with particular ease and efficiency: e.g. it is possible to cut/copy & paste values and even whole test cases, to pre-assign variables with values, and to conduct searches for variable names.

(Figure: Test data acquisition using the TDE)

The TDE also lets you select the relationship to be evaluated between a result and its expected value (e.g. equal to, greater than, less than and others) in order to determine whether a test was successful or not.

1.2.5 Running the Test

To run the test, Tessy loads the test object via the HiTOP debugger into the test system. Tessy then executes all test cases in sequence on the test system. For each test case executed, Tessy checks whether the actual result corresponds to the expected result. Test values are extracted from the database one by one, and they are not included in the test application. Therefore, the test application can be run even on 8-bit microcontrollers. Furthermore, the size of the test application is independent of the number of test cases. Hence, in principle, the number of test cases is unlimited.

However, it is not mandatory for the user to deduce an expected test result and manually enter it into Tessy before a test case can be run. Tessy is able to run a test case without having the expected test results specified. After such a test case is run, the user can easily take the actual results of this run as expected results for subsequent runs. This relieves the user of manually determining a result and then entering it. This is particularly useful if the result is hard to determine in advance, but easy to validate when presented: e.g. consider a sorting function whose input is a large amount of unsorted data and whose output is expected to be the same data, but sorted. It is much easier to simply check the sorting afterwards than to determine the result manually in advance and enter it.

1.2.6 Test Evaluation

Tessy shows with different colors whether a test case delivered the expected result (green) or not (red). Test cases not yet executed are marked yellow. Exactly where a test result value differs from its expected value is displayed in the Test Data Editor (TDE) and documented in the test report.

(Figure: Test evaluation and documentation)

1.2.7 Debugging

If executing a test case with a particular set of input values did not yield the expected result, the causes of this failure need to be determined. The tight integration of Tessy with HiTOP enables Tessy to re-run the test case after setting a breakpoint at the entry point of the function to be tested. Test case execution is then interrupted automatically and control is passed to the debugger.

(Figure: Debugging starts just at the beginning of the function under test, with the parameters that caused the unexpected result)

Because the function is called with the exact data that caused the unexpected result to occur, debugging can begin immediately. In the example above, two line steps in HiTOP quickly reveal that a missing "=" in line #28 caused the wrong result. As soon as the error is found, Tessy's internal editor can be used to correct the function's code, and all tests can easily be re-run with the newly modified test object.

1.2.8 Test Documentation

Tessy generates conclusive reports on test case results in selectable degrees of detail. Of course, these reports also indicate whether the execution of a test case yielded the expected result or not.

1.2.9 Handling Externals

What happens if the test object uses an external variable or an external function? In principle, there are two possibilities:

-> An implementation of the external variable or function already exists in another source module. If this module can be compiled, it can be linked to the test application. In this case, the external variable or function can be considered as if it were implemented in the module of the test object.
-> If this implementation doesn't exist yet, Tessy will take over.

External Variables

Let's handle the latter case first for an external variable for which no implementation exists. The variable adj is taken as an example of such an external variable.

(Figure: The interface of is_value_in_range() extended by the external variable adj)

Tessy recognizes that the interface of the test object was extended by an external variable adj and displays this variable in the Test Interface Editor (TIE) in the section "External variables". The context menu of the TIE allows the user to direct Tessy to define the external variable, i.e. to allocate memory for it. This will prevent the linker error message "undefined external" for adj. The necessary source code is included in the test driver automatically by Tessy.

(Figure: Tessy can allocate memory for external variables)

A variable that is defined by Tessy is marked by a red check mark in the TIE. Of course, Tessy lets you reverse your decision: a previously defined variable can be undefined, also via the context menu of the TIE.

External Functions

A similar situation arises if the test object calls another function for which no implementation exists yet. In the following, the test object calls the external function absolute(), which serves as an example (a sketch of such a call follows below).
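The extended source code itself is missing from this transcription. Here is a minimal sketch of how the test object might call the external function and use the external variable; the signature of absolute() and the way both enter the computation are assumptions:

    extern int absolute(int value);  /* external function, not yet implemented */
    extern int adj;                  /* external variable from the previous example */

    typedef enum { no = 0, yes = 1 } range_result;

    range_result is_value_in_range(int v1, int range_start, int range_len)
    {
        /* e.g. tolerate a negative length and apply an adjustment */
        int len = absolute(range_len) + adj;
        range_result result = yes;
        if (v1 < range_start)
        {
            result = no;
        }
        if (v1 >= range_start + len)
        {
            result = no;
        }
        return result;
    }

As described in the remainder of this section, Tessy can provide a stub function for the missing absolute(). A typical user-supplied stub body might be as simple as:

    int absolute(int value)
    {
        (void)value;    /* input ignored */
        return 1;       /* predefined value instead of a real calculation */
    }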

(Figure: The interface of is_value_in_range() extended by the external function absolute())

Again, Tessy recognizes that the interface of the test object was extended, this time by the call to the external function absolute(). This leads to the listing of the function absolute() in the section "External functions" of the Test Interface Editor (TIE). Again, Tessy is able to provide a replacement function (a so-called stub function) for the missing implementation of the external function. Tessy can be directed to do so by using the context menu of the TIE.

(Figure: Tessy can create stub functions for missing external functions. Please note the red check mark denoting that Tessy is allocating memory for the external variable adj)

However, the situation is not as easy as with external variables, because Tessy cannot guess what implementation the stub function should have. But Tessy allows the user to provide the source code for the body of the stub function. The stub function editor is opened via the context menu of the TIE.

(Figure: The stub function editor allows the user to provide source code for the stubs)

The source code provided by the user is automatically included by Tessy in the test driver. Tessy also takes care of the management of the source code for all the stub functions that may exist. The user does not have to care about file names or linking issues: the user can always access and change the source code of a certain stub function using Tessy. Please note that stub functions are only intended to serve as a temporary replacement for a function until the real implementation becomes available. Therefore, the source code often consists of only a few lines, leading to a reasonable behavior of the function (e.g. a stub function may return a predefined value instead of actually calculating one, as in the sketch above, or a stub function may just indicate that everything is o.k. but in fact perform no check).

1.3 Regression Testing with Tessy

Regression tests provide verification that modifications and enhancements made to a program do not lead to undesired effects. Regression tests are thus re-runs of successfully completed test cases. If some test cases did not yield a positive result and subsequent corrections of the source code have been implemented, it is necessary to re-run not only the failed test cases but all test cases, to ensure that the modifications have no undesired effects on test cases that were executed successfully before. Tessy provides a batch feature that enables extensive regression testing to take place without any user intervention.

1.3.1 Test Case Adaptation

A situation could arise where further development of the program has resulted in the function under test being modified so much that its test cases are no longer executable (e.g. due to the introduction of an additional parameter). Since Tessy has stored information on existing interfaces, it is able to recognize alterations made to an interface and refers the user to them. Tessy's internal Interface Data Assign editor (IDA) allows efficient assignment of newly introduced interface elements to the elements of an existing interface. The IDA performs this allocation automatically wherever possible. This feature allows continued use of values from old interface elements in new ones. This is possible thanks to the concept of separating interface information from test data.

(Figure: Allocation of interface elements)

1.3.2 Further Use of Test Data

The prerequisite for ensuring test cases are compatible with the latest version of the software is that test data, including data for modified interfaces, is reusable in the most efficient and extensive way possible. The re-use of test data allows tests to be re-run at any time; re-running tests implies regression testing, and regression testing is an essential procedure in assuring software quality. Regression testing is often necessary when software is revised in terms of code size, performance, maintenance and reusability, while at the same time guaranteeing that existing functionality remains unchanged.

1.4 Test Coverage with Tessy

In general, code coverage analysis reveals which portions of a program were executed. There are different kinds of coverage analysis, which are of more or less worth to the user. Coverage analysis is related to the white-box approach of software testing, because the analysis refers to the internal structure of the test object (e.g. the lines of a C function or the decisions in a C function). Code coverage is a measure of test coverage. A code coverage below 100% indicates that there is still something left to test.

1.4.1 Line Coverage

A very simple coverage measure is line coverage. This coverage measure indicates whether a line of a test object was executed or not. Line coverage does not reveal how often a line was executed. Because of the simplicity of the line coverage measure, even achieving 100% line coverage guarantees neither zero-defect code nor sufficient testing. A simple example reveals this quickly (see the sketch at the end of this subsection): even if all lines were marked as executed (one or more times), it is still possible that no test run was made in which the condition cond evaluated to 0. Such a run would prevent line #4 from being executed (i.e. the non-existent else branch would have been taken), which in turn would have caused a division-by-zero error in line #6.

To sum up, line coverage is a very weak measure. However, if you have not reached 100% line coverage, it is strongly recommended to find out the cause. You may find dead code, which cannot be executed under any circumstances; such code can be deleted. Furthermore, you may find code which was not exercised by any of the test cases, hence you have either not enough or not the right test cases. In any case, you have found out that the test coverage is not sufficient, and that should prevent you from finishing testing. Although line coverage is a weak measure, it is better than no measure at all, and achieving 90% line coverage is always better than achieving only 80%.
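The example itself is missing from this transcription. The following is a minimal sketch consistent with the line references in the text; the exact code is an assumption, but the line positions match:

    /* line #1 */  int f(int cond) {
    /* line #2 */      int divisor = 0;
    /* line #3 */      if (cond)
    /* line #4 */          divisor = 2;   /* executed only if cond != 0  */
    /* line #5 */
    /* line #6 */      return 10 / divisor;  /* divides by zero if cond == 0 */
    /* line #7 */  }

If every test case uses cond != 0, all (executable) lines are marked as executed and 100% line coverage is reported, yet the division-by-zero for cond == 0 goes undetected.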
Statement coverage is similar to line coverage, but covers assembler statements instead of C language lines. An alternative name for line/statement coverage is C0 code coverage.

1.4.2 Branch Coverage

The coverage analysis built into Tessy does more than simple line coverage. It not only reveals whether a line was executed

or not, but also how often a line was executed. This information is sufficient to state whether a branch of the code was taken or not. This kind of coverage is also known as C1 code coverage. Tessy's coverage analysis is activated by a simple click of the mouse. Tessy allows the user to distinguish whether only the test object is of interest for the code coverage analysis or whether the functions called by the test object should additionally be included in the analysis.

(Figure: Activating Tessy's C1 code coverage analysis)

Tessy then automatically instruments the test object and executes the tests as usual. After that, the results of the coverage analysis are presented in a special report window.

(Figure: C1 coverage results)

This report shows clearly that, for example, the first if-statement was executed 3 times, but its then part was executed only one time; hence the condition of the if evaluated to false two times, which in turn means that the non-existing else part of the if-statement was exercised two times. By clicking on a test case in the upper left corner of the report window, the report window shows the information for that test case only.

1.5 Integration Testing with Tessy

Tessy is labeled as a tool for doing unit tests, where a unit is considered to be a C function. Although not labeled explicitly, Tessy is also perfectly well suited to doing integration tests for embedded software. Whereas unit tests concentrate on the test of the functionality of a single unit (i.e. a C function), the integration test concentrates on the test of the interfaces between the units. In an ideal world, integration testing starts with the unit test of a function f1 that does not call any other function. However, f1 could interact with peripherals. When the unit test for f1 is finished, f1 is considered to work correctly under all circumstances, even in case of illegal parameters, etc. Then the integration test could proceed by picking another function f2 for unit testing. f2 may call f1, but no other function which has not already passed its unit test. In case f2 calls f1, the (already tested) code of f1 should be linked to the test application for the unit test of f2. This means the unit test of f2 uses the actual implementation of f1, which automatically exercises the interface between f1 and f2. The test cases for the unit test of f2 should be tailored so that they mainly test the functionality of f2. It is not necessary for them to test (again) the functionality of f1. Theoretically, sufficient tests for f2 will automatically lead to a sufficient test of the interface of f2 to f1, at least to the extent that this interface is used by f2. In practice, such a strict ordering of units according to the call hierarchy may not be possible, because two functions may call each other, or because the order in which the functions are implemented differs from the call hierarchy. That's why Tessy provides stub functions.

1.6 Conclusion

Tessy is a software tool that addresses unit testing, integration testing and regression testing. Tessy also lets you determine the test coverage. Furthermore, a part of Tessy, the Classification Tree Editor, addresses test planning and test case specification. This is the topic of the following article.

2 The Classification Tree Method and the CTE

From Problem Definition to Test Case Specification Using the Classification Tree Method and the Classification Tree Editor (CTE)

Testing is a compulsory step in the software development process. However, the planning of such testing often raises the same questions:

-> How many tests should be run?
-> What test data should be used?
-> How can error-sensitive tests be created?
-> How can redundant tests be avoided?
-> Have any test cases been overlooked?
-> When is it safe to end testing?

Anyone who has been confronted with such issues will be glad to know that the Classification Tree Method offers a systematic procedure to create test case specifications based on a problem definition.

2.1 From Problem to Test

2.1.1 The Problem Definition

The Classification Tree Method is applied to the definition of a (functional) problem. Informally expressed, the solution to such a problem requires a function to be executed, so that one can determine whether the function yields the expected result or not. Data processing software normally solves functional problems, since input data is processed according to an algorithm (i.e. the function) to become output data (i.e. the solution). Here's an example of a functional problem definition: An initial value and a length define a range of values. Determine whether a given value is within the defined range or not. Only integer numbers are to be considered.

2.1.2 Test-Relevant Aspects

The first step in using the Classification Tree Method is to consider all possible test cases and to identify all relevant test aspects. These aspects are then classified, i.e. the set of all possible values of an aspect is divided (completely and disjointly) into classes. So in our example, the input data range consists of all possible ranges of values that can be formed from integer numbers, combined with all possible test values, i.e. with all integer numbers. The initial value and the length of the range can be regarded as test-relevant aspects. This is convenient since, according to the problem definition, a range of values is defined by an initial value and a length. It is convenient to classify whether the test value is within the range of values or not using a position aspect. So the three initial aspects to be used for classification are start value, length and position, and they thus form the basis of the so-called Classification Tree. A particular problem definition can have completely different classifications that are each relevant and usable. The figure shows a simple Classification Tree for the problem definition described above: three branches emerge from the root (i.e. the node "is_val_in_range"), which lead to the three base classifications (rectangular nodes) "range_start", "range_length" and "position".

2.2 Applying the Method

To design the Classification Tree and for further application of the Classification Tree Method, it is reasonable to use a tool that supports drawing the Classification Tree and specifying test cases. The Classification Tree Editor (CTE) has been created especially for this purpose. It comes complete with its own graphical editor that is intended specifically for drawing Classification Trees.

2.2.1 Forming Classes

Classes are now formed for the base classifications, where all possible values of an aspect are classified both completely and disjointly.
Since the initial value covers all integer numbers, it would be reasonable to form a class for positive values, a class for negative values and another class for the value zero. This class formation is useful since it ensures that a test case with a negative initial value is not overlooked. Classes are shown in the Classification Tree as frameless nodes. The branches which represent the classes emerge from the classification nodes (in this case: "range_start").

(Figure: A simple Classification Tree for the problem definition described above)

2.2.2 A Systematic Approach

The same classes that were formed for the "range_start" classification can also be formed for "range_length". This means that a class for negative values is also created for the length. This is reasonable since the problem definition does not prohibit lengths from being negative, even if negative lengths themselves are practically unnecessary or even senseless. So, the systematic approach to specifying test cases has revealed that the

problem definition is insufficient. A clarification of the problem definition is probably the best way to remedy this issue. Note that a test case involving a negative length would most likely have been overlooked if this systematic approach with the Classification Tree Method had not been adopted. Since negative values can occur in both the initial value and the length, a negative initial value combined with a negative length would be a valid input. A test case for such a combination would most likely be missing in a set of spontaneously selected test cases. Note that the problem definition described here is a relatively simple one. Imagine how many important test cases could be overlooked if problems with dozens of aspects or input parameters are to be tested. Two classes are formed for test values, so that in terms of position they are categorized as being either inside or outside the range.

(Figure: The Classification Tree, further developed)

2.2.3 Critical Values

It is generally recognized that during testing, critical values (also known as boundary values) of input parameters promise the most success if the objective is to generate malfunctions. This fact should also be taken into account when using the Classification Tree Method. Hence in our example, a further classification can be introduced for the size ("size") of start values that are positive. However, the classes and the quantity of classes introduced depend on the problem definition and, last but not least, on the judgement of the person who creates the classification tree. The criterion "where is a problem assumed to exist?" can serve as a guide here. The problem definition itself usually provides clues for critical values. If the problem definition already mentions case distinctions (e.g. "if the pressure exceeds 10 units, open the safety valve"), then this points to obvious critical values. The current problem definition yields no clues whatsoever on critical values, so the black-box approach cannot be taken any further: if a value of 5 for range_start produces the correct result, there is no reason to assume that a value of 6 would produce an incorrect one.

2.2.4 Error-Sensitive Test Cases

Considering the assumed implementation (i.e. using the white-box approach), an interesting question arises if we consider an initial value that is very large. What happens if a range begins with an initial value that is the largest possible positive value and, furthermore, the range has a positive length? Would some kind of wrap-around then take place, and would a very small test value then be incorrectly considered as being in range? Or does the program simply crash? Based on these considerations, positive initial values can be placed either in a class with the largest possible positive value or in another class for all other values. A third possible class would be a class with the smallest possible positive integer value, i.e. the number one. In our example, the smallest positive whole number is classified as a normal value, and there is no valid reason for it to have its own class. This is an arbitrary decision made by the creator of the classification tree.
The example described explains the idea behind the Classification Tree Method: detailed consideration of the problem definition and a systematic approach lead to the determination of error-sensitive test case specifications and avoid the specification of redundant test cases.

2.2.5 Risk Analysis

A risk analysis of the problem will usually also help to find test-relevant aspects and thus to form test case specifications. If a malfunction has particularly grave consequences when it occurs under certain conditions, then it is important that tests are carried out under these same conditions. The main objective here is not for these test cases to expose malfunctions with a high probability, but rather to ensure that no malfunctions occur under these particular conditions.

2.2.6 Classes for Test Values

As already mentioned, two classes are created to categorize test values: position inside the range and position outside the range. If a test value lies outside the range, it is useful for the test to further classify whether the test value is below or above the range. The consideration of critical values leads to a further classification of test values that are located in the immediate vicinity of the range limits. (A rough sketch of the tree as developed so far follows below.)
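Since the tree figures are not reproduced in this transcription, here is a rough plain-text rendering of the classification tree as far as the text above describes it; leaf classes not named in the text are indicated by an ellipsis rather than invented:

    is_val_in_range
    |
    +-- range_start
    |   +-- negative
    |   +-- zero
    |   +-- positive
    |       +-- size: largest possible value
    |       +-- size: normal value
    |
    +-- range_length
    |   +-- negative
    |   +-- zero
    |   +-- positive
    |
    +-- position
        +-- inside the range
        +-- outside the range
        |   +-- below the range
        |   +-- above the range
        +-- ... (classes for values in the immediate vicinity of the range limits)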

2.2.7 Test Case Specification

Once the Classification Tree has been created, the next task is to specify suitable test cases. A test case specification results from the combination of classes, depicted as leaves in the classification tree. Classes that belong to the same classification are of course not combinable, since by definition classes are disjoint and no representative can be found that would belong to several classes of one classification. In our example, it is not possible for a test value to be simultaneously inside and outside the range. Expressed another way: a leaf class is selected for each of the initial test-relevant aspects (i.e. for each base classification). In the CTE, a test case specification comprises a line that is made up of test case descriptions and markers in the combination table. The combination table is located in the CTE beneath the Classification Tree. The desired classes are selected by simply setting the markers in the combination table. The CTE supports this selection process by automatically ensuring that only combinable classes are selected for a test case.

(Figure: Classification Tree and some test case specifications)

2.2.8 Separating Specification from Data

A Classification Tree specifies test cases, but it does not specify test data. Although we have determined that a test case with a normal positive initial value can exist, we have not determined which concrete value is actually to be used later on in the test.

(Figure: The complete classification tree)

Expressed another way, the creation of test case specifications using the Classification Tree Method does not include a means of arriving at concrete test values from the terms in the Classification Tree ("normal" and "positive"). This may appear strange in the given case, since zero has an obvious concrete value. However, for general problem definitions, concrete values are not so obvious. For example, if a problem definition leads to a class for green triangles, what concrete value should be used for "green"? However, technical issues are not the deciding factor. The abstraction of classes from concrete test data is a deliberate methodical means of making test ideas explicit. Thus the implementation of a test case specification into concrete test data is a separate procedure that can, for example, be performed by the Test Data Editor of the test tool Tessy [1]. Due to the separation of test case specification and test data selection, it is not absolutely necessary for the developer of the software to create the test case specifications.

2.2.9 Test Coverage

The number of test case specifications, and thus the scope of a test, remains in principle for the user to decide. However, based on the Classification Tree, it is possible to determine some values that provide clues to a reasonable number of test cases. The first value is the number of test cases if each leaf class is included at least once in a test case specification. This number is known as the minimum criterion. Since leaf classes of the same base classification cannot be combined, the minimum criterion is the largest number of leaf classes that belong to a single base classification. In our example, the largest number of leaf classes, namely seven, belongs to the base classification "position". Seven is thus the value of the minimum criterion. The maximum criterion is the number of test cases that results when all permitted combinations of leaf classes are considered. In our example, the maximum criterion amounts to 105 (i.e. 5 * 3 * 7).

A reasonable number of test case specifications obviously lies somewhere between the minimum and maximum criterion. As a rule of thumb, the total number of leaf classes gives an estimate of the number of test cases required to achieve sufficient test coverage. The objective of the Classification Tree Method is to determine a sufficient but minimal number of test case specifications. So generally speaking, it is not necessary to specify a test case for each possible combination. In fact, the Classification Tree Method should enable the user to write well-designed specifications, thus reducing the number of tests. The Classification Tree provides the necessary overview for this. In practical applications, this reduction of test cases is essential, since the maximum criterion can easily run into very high numbers. Furthermore, the expenditure required for running automatic tests to their full extent is excessive in comparison to the benefit they provide. The wish of some users to run tests not only with one representative of a class, but with all possible representatives (in combination with all possible representatives of all other combinable classes), fails due to the resulting astronomical number of test cases; this becomes apparent even in our very basic example. Note that a large number of test cases does not automatically guarantee sufficient test coverage. This depends more on being able to produce test cases that are error-sensitive and on being able to avoid those that are redundant.

2.3 More About the CTE

The main objective of the CTE is to comfortably support the use of the Classification Tree Method. On the one hand, this includes drawing and editing a Classification Tree: sub-trees can give an improved overview, and descriptions and commentary can be added to improve documentation. Furthermore, automatic layout of the tree always results in a clear representation after modifications, elements of the tree can be copied and repositioned, and parts of the classification tree can be stored in libraries and reused later in other classification trees. On the other hand, the CTE also aids the management of test case specifications. Test case specifications can be provided with commentary and can be combined into test sequences, which is necessary for the description of dynamic processes. The CTE can also verify the Classification Tree and the test case specifications. This would, for example, reveal incomplete tree sections or unused classes. Furthermore, it is possible to compile a statistical evaluation, e.g. of the number of different tree elements. This leads to an estimation of the necessary test expenditure. The compiled information can be exported in various file formats, which aids the transfer of test case specifications to other tools as well as documentation. All in all, the CTE provides all the functionality required to make efficient use of the Classification Tree Method. A concrete example of an extensive application of the Classification Tree Method and the CTE is the test case specification for the conformance testing of operating systems according to the OSEK/VDX specification [2]. The CTE is an integral part of Tessy [1], a tool to automate the testing of embedded software. Of course, it is possible to export test case specifications from the CTE to Tessy. Since the CTE's application area is not limited to the testing of embedded software, it is also available as a separate product.
CTE and Tessy both originate from DaimlerChrysler's software technology research laboratory.

2.4 The Classification Tree Method in the Development Process

Test case specifications according to the Classification Tree Method using the CTE can, and should, be created independently of the implementation. This would ideally take place before the implementation stage and should be performed by someone other than the software developer. This is not only desirable due to the higher probability of finding errors, but it also allows both tasks to be performed in parallel, leading to earlier completion. The Classification Tree and the test case specifications remain easy to understand thanks to graphical representation and commentary, and they can also be subjected to review procedures.

Information generated by the CTE according to the Classification Tree Method documents the test coverage and thus contributes to the conformity of development processes to various quality standards (Bootstrap, SPiCE, CMM). Due to the systematic approach and the compulsion to consider all test aspects when applying the Classification Tree Method, there is a high probability that the problem definition will be correctly represented in the test case specifications. This probability can be raised by conducting review procedures. However, because of human participation in this transfer process, there is never complete assurance. The size of a Classification Tree can be a measure of a problem's complexity. The number of test cases deemed essential by the Classification Tree Method is, on the one hand, a good measure of the required test expenditure and can, on the other hand, also serve as an estimate of the required implementation expenditure. However, COCOMO and function points are methods that are better suited to expenditure estimation.

2.5 Conclusion

The Classification Tree Method has, even in the case of our simple example, allowed us to derive test cases that would more than likely have been overlooked if test case specification had been performed spontaneously. The CTE provides an overview of the specified test cases and thus allows redundant test cases to come to light and the presence of error-sensitive test cases to be verified. Furthermore, the documentation of specified test cases aids quality in the software development process.

2.6 References

[1]
[2]

3 Static Software Analysis Using DAC

The Purpose of Static Software Analysis

Static software analysis generates information from the source code of a program; the program is neither compiled nor executed. The information gained from static software analysis may serve several distinct purposes: to comprehend the source code, to reveal weak points in the source code, to check compliance with coding standards like MISRA, and to apply software metrics (various numeric information about the source code). Besides its other functions, a great deal of the functionality of DAC, the Development Assistant for C, relates to static software analysis, which forms the basis for code comprehension, detection of questionable constructs, and software metrics. In various graphical and textual forms, the source code comprehension part provides answers to questions like:

-> From where in the source code is a certain function called?
-> Where is a certain variable defined and what is its type?
-> Which global data types exist?

The detection of weak points or questionable constructs reveals things like:

-> Assignments in conditions
-> Unused variables
-> Etc.

A coding standard, which is in fact a catalog of questionable constructs to avoid, is given by the MISRA guidelines. Software metrics give descriptive numbers for various aspects of the software, such as the:

-> Size of the program
-> Complexity of the program
-> Maintainability
-> Software quality of the program

This article delves deeper into the subjects of MISRA and software metrics.

3.1 MISRA

MISRA is actually an acronym for The Motor Industry Software Reliability Association, a consortium steered by MIRA Ltd, a research/consulting company offering services for the automotive industry. MIRA Ltd is based in the UK and was formerly known as The Motor Industry Research Association. However, what is most often associated with MIRA or MISRA is a publication named "Guidelines For The Use Of The C Language In Vehicle Based Software", published in April 1998 by the MISRA consortium. This document is also often referred to as the MISRA C Guidelines or simply MISRA C. Please note that there is also an earlier publication (1994) of MIRA, called "MISRA Guidelines for Vehicle Based Software". This document is independent of a specific programming language, whereas the later document concentrates on C. A Technical Clarification Document, published in 2000, clarifies questions raised on the interpretation of the MISRA C Guidelines from 1998.

3.1.1 The MISRA C Guidelines

The MISRA C Guidelines are available in hard-copy printed form only, from MISRA or Hitex. The MISRA C Guidelines can be viewed as consisting of two major parts: the first, introductory part states the objectives of the guidelines, the rationale behind them, the scope of the guidelines, how to apply them and the context they assume. The second major part consists of the rules themselves. In addition, some appendices and references, etc. are provided.

3.1.2 The Rules

There are 127 rules in the guidelines, of which 93 rules are classified as required; the remaining 34 rules are classified as advisory. Apart from formal deviations (see below), for a product that is claimed to conform to the MISRA C Guidelines, all required rules have to be obeyed. The rules are listed in an order that bundles the rules belonging to a topic, e.g. all rules related to operators are listed consecutively.
The rules form a safe(r) subset of the C language by forbidding constructs which are likely to be programming errors, cause misunderstandings, or depend on compiler behaviour. The overall objective of the rules is to prevent run-time errors. Whereas some of the rules look like just common sense, and you wonder why they are listed in such a guideline, other rules are harder to comprehend. Their understanding requires some understanding of the ISO C 9899:1990 (C90) standard. Please note: the MISRA C Guidelines reference the 1990 version of the standard (C90) and not the 1999 version (C99). Studying the rules will certainly raise your expertise as a C programmer. You will probably eventually realize that you were not aware of a problem and just good luck has prevented you from running into it. So, the MISRA C Guidelines can be

considered as some kind of C teaching book. In practice, however, simply applying the MISRA C Guidelines may prevent you from having the problems without spending too much thought on possible traps and pitfalls of the C language.

3.1.3 Example

Let us consider the following C function as an example (the listing is reconstructed in the sketch after this section). When analysed by DAC, this function shows violations of (unexpectedly many) rules of the MISRA C Guidelines:

(Figure: The MISRA rules violated by the example source code)

Background: ISO C states that when evaluating the equality operator, both operands are expanded to at least integer data size. Therefore, the value of the character variable ccc will be promoted to an integer value. Because characters normally consist of fewer bits than integers (characters are normally made up of 8 bits, i.e. a byte, and integers normally consist of at least two bytes, depending on the microcontroller architecture), ccc has to be expanded. Because the ISO C standard does not specify whether the basic type char is treated as signed or unsigned, it cannot be predicted whether the promotion is done by sign extension or not. Therefore, the promotion can result in a value of 0x00FF or 0xFFFF (presuming an integer is made up of two bytes). Of course, the result of the evaluation of the if-condition depends on the result of the promotion.

Rule 13

This rule prohibits the use of the basic C types like char, int, etc. The objective is to have a predictable size for a data type, even if the source code is ported to another microcontroller architecture, where the size may differ. So this rule is targeted mainly at portability, but also with respect to understandability and maintainability it is always a good idea to know what size the original programmer meant a variable to have. The violation of rule 13 (and of rule 14) is remedied by a type with explicit size and signedness (see the sketch after this section).

Rule 14

Just by analysing the source code, it is unpredictable whether the function f1() will return 0 or 1, i.e. whether the if-condition will evaluate to true or false. Correcting the violation of rule 14 in line 3 remedies the problem: when signed or unsigned is specified, implementation-defined behaviour of the compiler is avoided, which makes the outcome of the condition predictable. Rule 14, by the way, is a required rule, and you are strongly recommended to obey it.

Rule 18

The objective of this rule is (a) to have well-known and portable data sizes, which relates rule 18 to rule 13, and (b) to avoid mixing signed and unsigned arithmetic, which relates rule 18 to rule 14.

Rule 59

The objective of this rule is to avoid any misinterpretation by the human reader of what the compiler will or will not consider as being part of a block. It is tempting to add code to a (non-braced) block by just adding the code and indenting it correctly. Later, it becomes very hard for a human reader to detect this as the cause of an error. Rule 59 is a required rule.

Rule 82

This rule requires a single return statement at the (textual) end of the function. This rule is advisory.
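The example listings are missing from this transcription. The following sketch is consistent with the rule discussion above; the function name f1() is taken from the text, while the values, typedef names and the assumed 16-bit int are assumptions:

    /* The example as discussed: violates rules 13, 14, 59 and 82
       (and, per the discussion above, rule 18). */
    int f1(void)                /* rule 13: basic type int */
    {
        char ccc = 0xFF;        /* line 3; rules 13/14: plain char, neither
                                   signed nor unsigned specified */
        if (ccc == 0xFF)        /* result depends on whether the compiler
                                   promotes ccc to 0x00FF or 0xFFFF */
            return 1;           /* rule 59: then-clause not braced;
                                   rule 82: more than one return */
        return 0;
    }

    /* A remedied version (shown as an alternative, not to be
       compiled together with the function above): */
    typedef unsigned char UINT8;    /* explicit size and signedness */
    typedef signed int    INT16;    /* 16-bit architecture assumed  */

    INT16 f1_remedied(void)
    {
        INT16 result = 0;
        UINT8 ccc = 0xFFu;
        if (ccc == 0xFFu)           /* now predictably true */
        {                           /* rule 59: braces */
            result = 1;
        }
        return result;              /* rule 82: single return at the end */
    }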

The remedied version in the sketch above removes all the violations of the rules.

3.1.4 Formal Deviations

Unfortunately, not all rules are as easy to follow as the rules violated in the function f1() above. An example of this is rule 1, which states that the code must be ISO 9899 compliant. Because embedded software always interacts with hardware and normally also deals with interrupts, and these issues are not covered by ISO 9899:1990, it is impossible, for example, to write a program that declares an interrupt service routine and is compliant with MISRA rule 1. When writing embedded software, you must use a compiler targeted at the embedded system, and a characteristic of such a compiler is that it features non-standard extensions, most of them compiler-specific, necessary for dealing with the particularities of embedded programs, like accessing hardware or declaring interrupts. To cope with this, the MISRA C Guidelines define a formal procedure, called the deviation procedure, which allows some (or even all?) of the rules to be excluded from having to be applied while still claiming MISRA C compliance. For advisory rules, formal deviations are not required; violation of advisory rules does not prevent software from being MISRA C compliant. The guidelines require that the deviations are clearly identified, that a reason is given for each deviation, and that the scope of the deviations is clearly stated. The scope of a deviation can be company-wide, for a class of circumstances (these deviations are called "standing deviations"), for a project, or even for a single file. Furthermore, for every rule that has to be met, it must be stated how the checking of this rule will be enforced. The related document is the so-called compliance matrix, indicating whether a rule is checked manually or by a tool.

3.1.5 Application of the MISRA C Guidelines

The MISRA C Guidelines categorize the rules into 18 aspects of the C programming language, e.g. environment, operators, expressions, functions, identifiers, etc. There is no grouping of the rules according to different purposes like portability or maintainability. The Guidelines do not consider some rules more important than others, nor do they consider some rules easier to obey than others. The rules were also not specifically designed to be automatically testable by static checking tools.

Study the Rules

The process of introducing the MISRA C Guidelines in an organization should therefore start with an in-depth consideration of each rule, classifying it under different aspects. This initial consideration should result in several sets of rules, e.g. one containing the rules that have to be obeyed in every case, another set containing the rules that are subject to a formal deviation, yet another set containing the rules for a stricter usage of the guidelines, etc. This process should be conducted by a senior programmer with an intimate knowledge of the traps and pitfalls of the C programming language.

Assure Tool Support

Although it may be possible to check all the MISRA C rules manually, in practice tool support is necessary. Therefore, the next step should evaluate which rules can be checked by which static software checking tool, because many, but not all, of the rules can be checked statically, and different tools may not check the same set of rules. Certainly an advanced compiler will do part of the job; e.g. most compilers will complain about an assignment used in an expression like "if (a = b)", which is addressed by rule 35.
However, most of the rules that can be checked statically are beyond the scope of a compiler and require specialized tools. Unfortunately, at the moment there is no certification procedure for tools with respect to the MISRA C

Guidelines. In practice this means that there is no guarantee that two different tools will find exactly the same rule violations in any given piece of source code. Therefore, tool selection needs careful consideration. The MISRA C Guidelines suggest documenting the findings about which tool covers which rule in the so-called compliance matrix. This should also include the settings required (e.g. the warning level), where appropriate.

Adjust the Development Process

The process of introducing the MISRA C Guidelines is concluded by introducing the rules to the software developers. The latter requires teaching, which may not be as easy as it seems at first sight, because the human factor has to be taken into account and a lot of convincing probably has to be done, though the acceptance of the MISRA C Guidelines should be encouraged by their use of plain English (and not legal or standards English). However, these problems are beyond the scope of this article. Eventually, the tools to be used for the MISRA C check have to be integrated into the development process so that the checking process is automated. Also, the quality system has to be updated to reflect the checking process. This includes documenting which tools are to be used.

A Word about Legacy Code

It is probably not useful to rewrite legacy code just to make it compliant with the MISRA C Guidelines. This should only be considered when a greater part of the legacy code has to be rewritten anyway. In addition, any code that is common to several projects might also be considered for a rewrite for compliance with the MISRA C Guidelines. A new software project starting from scratch is certainly optimal for introducing the MISRA C Guidelines. However, projects seldom start from scratch, and therefore the decision when to start using the MISRA C Guidelines, and which sources should be subject to them, needs careful consideration.

3.1.6 Impacts of Applying the MISRA C Guidelines

Certainly, source code complying with the MISRA C Guidelines will contain fewer programming mistakes than normal code. While this cannot be conclusively proved for the MISRA C Guidelines, they are considered good practice, and there are statistics showing that where this sort of regime is used, code quality improves and the number of bugs per 100 lines of code decreases. Furthermore, an experienced C programmer will certainly admit that he or she once (being not yet so experienced) was trapped by one or the other programming error that is addressed by the MISRA C Guidelines. But are there also disadvantages to applying the MISRA C Guidelines?

Time Required for the Coding Process

After the MISRA C Guidelines are introduced in the development process, and the developers are used to the rules, they tend to write MISRA C compliant code the first time, and the coding process should not take any longer than before. Because both the number of errors and the time required to correct them decrease, there should be an overall gain. Problems appear where a rule stops you doing something you urgently need to do. Such a problem may initiate the deviation procedure, which may require some time. However, you can assume deviations will be required less often the longer the MISRA C Guidelines are in effect.

Size and Speed of the Program

Certainly some of the rules will not affect the code size or the speed of the program in the slightest, e.g. rule 59, which requires additional curly braces. However, some of the rules regrettably affect the size and/or speed of the program.
Some examples:

-> Rule 109 forbids the use of unions. To avoid unions, you have to use separate memory locations for the items once intended to share a union, and separate memory locations obviously consume more data memory than a union does.

-> Rule 82 requires a single point of exit for a function, which may require some additional source code and - if not optimized away by the compiler - some additional machine instructions, increasing both code size and execution time.

-> Rule 111 allows a bitfield to be of type unsigned int or signed int only. Because some compilers allow the (non-standard) use of char as a bitfield type, and char normally requires less data space than int, applying the MISRA C Guidelines can waste some data memory here.

However, it should be clearly stated that most of the rules are considered to have no effect on program speed or size, nor on the time needed to implement the source code. The benefit with respect to program quality is normally much higher than the disadvantages described.
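As a hedged sketch of what rules 59 and 82 ask for (the function and its name are hypothetical, not an example from the Guidelines themselves):

    /* Rule 82: single point of exit; rule 59: braces around
       every controlled statement. */
    int clamp_to_zero(int x)
    {
        int result;          /* one result variable ...            */

        if (x < 0)
        {                    /* braces even for a single statement */
            result = 0;
        }
        else
        {
            result = x;
        }

        return result;       /* ... and exactly one return         */
    }

A non-compliant variant would simply return 0 inside the if branch, saving the local variable but violating rule 82.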

3.1.7 Future of MISRA C

The MISRA consortium carries on the work towards better software quality. Hitex is involved in these activities, which currently concentrate on:

-> Rewriting some of the rules to clarify their intention. This will also be achieved by providing more examples.
-> A certification procedure for software tools that claim to check the MISRA C Guidelines.
-> Adapting the Guidelines to C99.

A revised MISRA C Guidelines document is not expected to be published in the near future.

3.2 MISRA C and DAC

Starting from V4.0, DAC checks 76 of the rules with full support. Among the fully supported rules, 59 are classified as required and 17 as advisory by the MISRA C Guidelines. Sixteen rules (13 required + 3 advisory) are supported by DAC with some limitations. Thirty-five rules (21 required + 14 advisory) are not supported by DAC, mostly because they cannot be checked statically.

3.3 Software Metrics

Software metrics are measures that are determined by analysing the source code of software. Generally, they serve to quantify productivity and various aspects of software quality. Excerpt from the British Defence Standard about software metrics:

Metrics are essential in order to be able to use past experience to accurately plan a new development. Metrics are also a useful indication of software quality and hence software safety. Relevant metrics include those associated with software development productivity and those associated with fault discovery. The identification of the metrics to be recorded should also indicate the values that are to be expected of each, and the interpretation that is to be made when actual values differ from the target or the expected values.

3.3.1 Simple Metrics

Many of the metrics are directly measurable by simply counting them in the respective object of observation (function, module, etc.), such as the lines of code (LOC) in a program. Some of the measures provide a ratio (in percent), e.g. the number of comment lines within the observed component in relation to the total number of lines.

Metric | Significance | Type *) | Unit
Comment lines | Lines containing comments only (including parts of a comment); additional characters in such a line must be white-space | F, M, G | %
Compound statements | Compound statements {...} | F | %
Declaration statements | Declarations ending with a semicolon | F | %
Empty statements | Empty statements (consisting of a single semicolon only) or inline assembler instructions | F | %
Executable lines | Lines containing executable statements | F, M, G | %
Executable statements | Executable statements: expressions or if, while, do-while, for, break, continue, return, switch | F | %
Expression statements | Expressions (not expressions within other executable statements, e.g. the condition of an if) | F | %
Jump statements | Branches (goto) | F | %
Label statements | Branch targets (for goto) | F | %
Lines | Lines | F, M, G | absolute
Lines with comments | Lines with (a part of) a comment and at least one character that does not belong to the comment (and which is not a white-space character) | F, M, G | %
Local variables | Local variables | F | absolute
Loop statements | Number of statements: while, do-while, for | F | %
Maximal depth | Maximum nesting depth | F, M | absolute
Operators per line | Operators per line | F | absolute
Operators per operand | Operators per operand | F | absolute
Operators per token | Operators per token | F | absolute
Preprocessor lines | Lines containing preprocessor statements | F, M, G | %
Selection statements | Number of statements: if, case (successive case statements count once) | F | %
White lines | Number of lines containing white-space only | F, M, G | %
Functions | Number of functions | M, G | absolute
Global variables | Number of global variables | M, G | absolute
Statements | Statements | M, G | absolute
Number of modules | Number of modules | G | absolute

*) F = Function, M = Module, G = Group of modules

Simple metrics in DAC allow the determination of the measures listed above.
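A hedged illustration of some of these line classifications (hypothetical code; the exact counting is tool-specific):

    int h(int x)            /* a line with a comment: code plus comment        */
    {
        /* a comment line: white-space and the comment only */
        int y = x + 1;      /* a declaration statement; also a line with a comment */
        return y;           /* an executable line and an executable statement  */
    }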

These parameters may refer to various components of a program, i.e. to individual functions, to one or several modules, or to the entire program. Obviously, some of the measures are not suitable for all components. For example, the number of functions metric only makes sense when associated with one or several modules.

Using the Simple Metrics

Both the significance and the understandability of simple metrics seem obvious, since most of these metrics can be determined by directly counting. It is easy to find a correlation between a software quality aspect of interest and a metric. For example, if one wishes to know the defect rate to be expected in an object of observation, then the maximum nesting depth or, more simply, the number of statements in this object of observation can be suitable.

Comment Portion

However, when interpreting a certain value of a metric, such as the comment portion, with respect to a software quality aspect, such as maintainability, it soon becomes obvious that the metric's value alone is not very meaningful. Naturally, a comment portion of 0%, i.e. no comments at all, is a bad value for maintainability: the object of observation is definitely less understandable without comments than with them. But does a comment portion of 25% mean good or poor maintainability? This can only be determined by putting the calculated value in relation to the comment portions of objects of observation whose maintainability has already been measured by other methods. When comparing measures like the comment portion of different objects of observation with each other, parameters such as the counting method must not vary, of course. Also, the comment portion says nothing about the quality of the comments. An object of observation with a smaller comment portion might be better documented than one with a much greater comment portion. The comments of an object of observation can even be of no significance at all; this is the case if they consist of, e.g., preconfigured headers that are not (completely) filled in.

Program Lines

Things do not get simpler when it comes to the widely used Lines-of-Code (LOC) metric. This metric originates from the assembler programming age, when an assembler instruction was identical with a line of code. Applying this metric to the C programming language poses a couple of questions that need to be addressed:

-> Do definitions and declarations have to be counted?
-> Do comment lines have to be counted?
-> Do preprocessor lines have to be counted?
-> Do macro definitions of the preprocessor have to be counted?
-> How are lines counted that contain several statements which could be spread over several lines?

Apparently, the large number of alternative formulations in C diminishes the significance of the Lines-of-Code metric, at least as long as different programming styles are used. Likewise, the LOC metric has little relevance for objects of observation written in different programming languages.

(Figure: The lines of the functions in module sort.c as absolute values)

Executable Lines

The DAC software metric executable lines serves as an example of how carefully you must understand how a metric is calculated. The executable lines metric can be applied to functions, modules and groups of modules and gives the ratio (in percent) of the executable lines in the respective object of observation.
When applied to a function, the line holding the opening curly brace { is considered the first line of the function, and the line containing the closing curly brace } is considered the last line. The line holding the function name is not counted (unless { is on the same line). Declarations and definitions are not counted. Concatenating lines with executable statements and inserting blank lines both reduce the ratio. Some examples may clarify things (for the report type Simple values).

Two lines, none of them executable: 0% executable lines.
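The example code figures are not reproduced in this transcription; a sketch consistent with the counts above and below (function names are hypothetical):

    extern void g(void);    /* hypothetical helper, declared for completeness */

    void e0(void)
    {                       /* first counted line                        */
    }                       /* second counted line; 0 of 2 executable:   */
                            /* 0% executable lines                       */

    void e1(void)
    {                       /* counted, not executable                   */
        g();                /* the only executable line                  */
    }                       /* 1 of 3 lines executable: 33%              */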

In both cases: three lines, one of them executable: 33% executable lines.
Five lines, one of them executable: 20% executable lines.
Two lines, one of them executable: 50% executable lines.

(Figure: The executable lines of the functions contained in module sort.c)

Conclusion

Simple metrics give a hint about various aspects of software quality, and the relation between these metrics and the various quality aspects is easy to see. However, one needs to become acquainted with the counting methods, and the measures have to be related to comparable values (determined under identical conditions) in order to make reliable statements about software quality.

Surprise, Surprise

Sometimes the application of metrics produces a surprise. On closer examination of the defect rate, for example, one would expect the defect rate of an object of observation, such as a function's code, to increase proportionally to the size of that object of observation. This is true; what is quite astonishing, however, is that the defect rate only shrinks down to a certain value with decreasing size of the object of observation, and then increases again for even smaller objects of observation. To put it another way: the defect density of small objects of observation is relatively high. This finding is explained both by the commonness of interface errors and by the fact that the complexity of the interfaces is disproportionately high for small objects of observation.

3.3.2 Sophisticated Complexity Metrics

An evolution of the simple metrics above are metrics that directly attempt to yield measurements of higher valence regarding the various software quality aspects, such as complexity, programming effort, or defect rate. Although these metrics are also based on simple metrics that can be measured by counting, they are more complex in that they are used within formulas. These formulas are based on theoretical constructs and are more or less backed by empirical studies. As far as sophisticated software metrics are concerned, Halstead and McCabe have become popular references on the subject.

Halstead

First, some theory: in 1977, Halstead introduced a software theory in which a computer program is considered as a collection of tokens that can be classified either as operands or as operators. Halstead introduced these primitive measures:

n1 = the number of distinct operators that appear in a program
n2 = the number of distinct operands that appear in a program
N1 = the total number of operator occurrences
N2 = the total number of operand occurrences

Based on these primitive measures, Halstead introduced equations for vocabulary, overall program length, the potential minimal volume (an estimation of the shortest possible implementation of the algorithm in the analysed function), the intelligent content of a program, difficulty, effort and so on. Halstead also involved a model of the human memory to address the mental processing rate of the human brain. Two of the simplest of Halstead's equations are

Vocabulary V = n1 + n2
Length L = N1 + N2
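A small hedged illustration of these primitive counts (token classification varies between tools; the semicolon is not counted as an operator here):

    y = x + x;      /* operators: =, +       -> n1 = 2, N1 = 2 */
                    /* operands:  y, x, x    -> n2 = 2, N2 = 3 */
                    /* vocabulary V = n1 + n2 = 4              */
                    /* length     L = N1 + N2 = 5              */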

Halstead's work was fundamental for software metrics. Besides the fact that Halstead's equations are not easy to understand and re-enact in one's mind, there is much controversy about them, and they have been criticized from many fronts. Areas under criticism include the methodology, the derivation of the equations, the human memory model, and others. Decisively, however, empirical studies have provided little support for the equations, except for the one for program length. Even the usefulness of the length equation is restricted, because program length also correlates with other measures, like the number of lines (LOC) in a program unit. Therefore, the discussion of Halstead's metrics is not carried on here.

Halstead's Metrics in DAC

DAC calculates eight metrics according to Halstead, namely difficulty, intelligence content, language level, length, programming effort, programming time, vocabulary and volume. All Halstead metrics take functions as their object of observation, and all of them come out as absolute values.

(Figure: The absolute values for Halstead difficulty for all functions in a module)

Obviously, Halstead considers the two sorting functions bubble_sort() and search_sort() more difficult than the function swap().

Cyclomatic Complexity Metric by McCabe

The software metric most associated with McCabe is the Cyclomatic Complexity v(G). It is a measure for the complexity (= testability and understandability) of a program.

The Cyclomatic Number

The Cyclomatic Complexity v(G) stems from graph theory, where the cyclomatic number indicates the number of linearly independent paths through a program if the control flow of the program is pictured as a (directed) graph. The cyclomatic number in graph theory is defined as

cyclomatic number = e - n + 1

where e stands for the number of edges and n for the number of nodes in a strongly connected graph. (In a strongly connected graph, you can reach each node from any other node by following directed edges.)

Let us take the following C source code for a function f() as an example:
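The original code figure is not reproduced in this transcription; a plausible reconstruction, consistent with the node and edge counts used below (seven nodes, eight edges):

    int f(int a, int b)
    {
        int r;
        if (a)          /* node: if (a), first binary decision  */
        {
            r = 1;      /* node                                  */
        }
        else
        {
            r = 2;      /* node                                  */
        }
        if (b)          /* node: if (b), second binary decision */
        {
            r += 10;    /* node                                  */
        }
        else
        {
            r += 20;    /* node                                  */
        }
        return r;       /* node: return                          */
    }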

The control graph associated with this source pictures each statement and each decision as a node.

(Figure: The control graph for the code example)

However, the graph in the above figure is not strongly connected, because there is no directed edge from the node return to the node if (a). To remedy this, a so-called virtual edge is introduced. This virtual edge artificially transforms the flow graph of (any) function into a strongly connected graph. You may think of the virtual edge as representing the rest of the program containing our example function f(), e.g. the flow of control between the return from f() and the next call to f().

(Figure: Control graph with virtual edge)

The Cyclomatic Complexity

The Cyclomatic Complexity equals the cyclomatic number of the strongly connected control graph of a function, i.e. the graph including the virtual edge. The Cyclomatic Complexity is defined as

v(G) = e - n + 2

where again e stands for the number of edges (not including the virtual edge) and n for the number of nodes. Counting the edges and nodes in the graph gives e = 8 (not including the virtual edge) and n = 7; the Cyclomatic Complexity is therefore calculated as

v(G) = 8 - 7 + 2 = 3

The v refers to the cyclomatic number of graph theory, and G denotes the graph. Because of the additional (virtual) edge, the Cyclomatic Complexity of a function is one higher than the cyclomatic number of the not strongly connected control graph of that function.

More Facts with Respect to the Cyclomatic Complexity v(G)

The Cyclomatic Complexity is also the number of binary decisions in a function plus one. In the example above there are two binary decisions; therefore v(G) = 2 + 1 = 3, which is the same result as the calculation using edges and nodes. Decisions that are not binary are converted to binary decisions.

The Cyclomatic Complexity is also the number of linearly independent paths through the strongly connected directed graph, i.e. the minimal number of paths that can, in (linear) combination, generate all possible paths. The proof of these statements uses linear algebra (vector spaces, the rank of matrices, Gaussian elimination, etc.) and is beyond the scope of this article.

Interpretation of the Cyclomatic Complexity

Clearly, the Cyclomatic Complexity is a measure for the complexity of a function: you may easily argue that the more binary decisions a function contains, the more complex it is. A similar argument applies to paths: the more independent paths exist through a function, the more complex it is. Furthermore, the Cyclomatic Complexity is also a measure for test coverage. Because the Cyclomatic Complexity is the minimum number of paths that can, in linear combination, generate all possible paths, test coverage can be measured by the number of linearly independent paths taken during the tests; as long as this minimum number is not reached, test coverage can be considered poor. Please note that you can test more different paths than the Cyclomatic Complexity indicates and still have a linearly independent path missing. To detect all linearly independent paths, Gaussian elimination can be used; this again relates to linear algebra and is beyond the scope of this discussion.

McCabe suggests that the Cyclomatic Complexity of a function should be limited to 10. Functions with a greater Cyclomatic Complexity are considered (too) difficult to test and to maintain. Exceptions to this rule should only be allowed for projects under specific advantageous conditions, e.g. experienced staff, formal design, etc.

Criticism of the Cyclomatic Complexity

The biggest flaw of the Cyclomatic Complexity is certainly that it ignores sequential statements: a function containing many sequential statements but no binary decision has a Cyclomatic Complexity of 1, which is misleading compared, e.g., to our example function f() with its complexity of 3. The Cyclomatic Complexity also ignores the kind of control flow statement in which a binary decision occurs: a binary decision in an if statement (which is evaluated once) counts as much as the binary decision controlling a loop, which is normally evaluated several times. The following source code for a function f2() illustrates this:
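The original code figure is not reproduced; a plausible reconstruction, in which the loop bound is an assumption (any function whose only decision controls a loop would do):

    void f2(void)
    {
        unsigned char i = 0u;
        do
        {
            /* any number of sequential statements here
               would leave v(G) unchanged */
            i++;
        } while (i != 0u);   /* the only binary decision */
    }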

The Cyclomatic Complexity of f2() is 2, because there is one binary decision contained in f2(), namely the condition controlling the do-while loop. How many times this decision is evaluated is not taken into account. (Can you tell how many times it is? In the reconstruction above: 256 times, because the unsigned char counter wraps around to 0.) Neither the number of variables used in f2() nor the number of lines of f2() is taken into account. Please compare this to the function f() above, with its Cyclomatic Complexity of 3, and judge for yourself whether f() or f2() is more complex.

More Metrics from McCabe

Besides the Cyclomatic Complexity, many other metrics are associated with the name of McCabe, e.g. the Essential Complexity Metric, the Design Complexity Metric, the Data Complexity Metric, etc. More or less, all these metrics try to give a measure for the complexity of the program and try to deduce from these values the maintainability of the program and the effort required to test it.

3.3.3 Additional Metrics in DAC

Function Points

Function points indicate both productivity (e.g. function points per person-year) and quality (e.g. defects per function point). Function points try to measure the tasks a software implements (task not in the sense of operating systems, but in the sense of problems solved or algorithmic tasks). Function points address some of the inaccuracies of the LOC metric by calculating a weighted total of things like the number of external inputs / outputs of a function, the number of internal / external files accessed by a function, etc. DAC features function points and adjusted function points. Function points can be applied to a group of modules only and result in absolute values.

Operator Density

The density of operators (operators per line, operators per operand, operators per token) can be counted. The operators are weighted, i.e. seldom used operators, like the comma operator, have a higher weight than ubiquitous operators like the plus operator +. Operator density metrics are applied to functions and result in absolute numbers.

3.3.4 Displaying Metrics in DAC

Reports in DAC

Simple: The measures obtained can be displayed in DAC as simple values, where they appear as absolute numbers or in percent. Which of the two depends on the metric and is provided by DAC as the default.

Normalized: The simple measures may also be displayed in a normalized manner. This report type serves to show the measures of several metrics (e.g. executable lines and comment lines) simultaneously.

Average: If a measure is displayed for more than one object of observation, such as the number of lines for several functions within a module, the average value can be displayed. Several metrics can be displayed at the same time, and the mean values may be displayed as a radar chart.

Distribution: This report type displays the distribution of a measure over several objects of observation, optionally as a pie chart.

Display of Reports in DAC

Metrics in DAC can always be displayed in table format, as a column chart or as a bar chart.

For the distribution report, a pie chart can additionally be displayed; for the average report, a radar chart.

(Figure: A radar chart of the normalized values for Halstead difficulty, McCabe cyclomatic number and executable statements for all functions of module sort)

3.3.5 Using Metrics

Software metrics provide measures for many different aspects of software quality and thus help determine whether modifications to the software affect its quality, i.e. whether they improve or deteriorate it.

Selecting the Metrics

In practice, it is recommended to proceed as follows when selecting metrics. First, you should know which software quality aspect is of interest to you, such as maintainability or the defect rate to be expected, and what the underlying purpose is. As far as the maintainability aspect is concerned, you may wish to identify those parts of the program that are rather difficult to maintain. This is done to make these program parts easier to maintain, preferably by a programmer who is (still) able to maintain them, before he or she is assigned to another project. Of course, this only makes sense for program parts that will continue to be developed in the future; however, for which parts does this not apply?

The purpose of determining the expected defect rate could be test control: program parts with a high expected defect rate will be tested more intensely than those with a lower one. This ensures that the resources available for testing are allocated in a useful and cost-effective manner.

After that, you can select one or more metrics that correlate well with your aspect of interest. For this, choose a simple metric that obviously shows a correlation with the aspect of interest: for the maintainability aspect, for example, you may select the comment portion; for the expected defect rate, the maximal nesting depth. As long as you use a simple metric and compare it for different objects of observation, you will not be barking up the wrong tree. If you start to improve the comment portion in the least commented module in order to improve maintainability, you are definitely on the right track. Nor is it the worst idea to put emphasis on the functions with the highest nesting depth during test planning. However, some issues have to be taken into consideration when absolute measures alone are examined, or when more complex metrics are applied, namely:

-> The theoretical derivation of the metric.
-> How the metric is determined.
-> The aspects that are not taken into account by the metric.
-> Whether empirical studies exist that prove the expected correlation.

Corrections

Do corrective actions, such as modifications to the source, have to be taken on account of bad measures for certain metrics? This topic is much discussed and should be dealt with individually. Various reasons are conceivable for exceeding specified limit values, e.g. the limit value 10 for v(G):

-> An inexperienced software developer's programming style is indeed bad or too complicated. Here, it is best to have that developer revise the source code under the guidance of an experienced engineer.
-> The limit values are exceeded only for a small portion of the programmer's objects of observation, a fact known to the programmer. Here, it is not useful to alter the source code afterwards.
-> Measures are frequently exceeded by all programmers, and revisions do not result in significant enhancements. Here, the limit values are likely to have been specified incorrectly and require correction.
-> Although the specified limit values have been observed, there are still quality problems. Here, it should be checked whether modified limit values or further metrics would have pointed to the problematic objects of observation in advance.

The experience an organization has had with metrics plays a major role. It is therefore recommended to observe the obtained measures for a while in order to develop a sense of useful limit values. Parameters like the engineers' experience, project type and project size must be held constant for optimum findings. In addition, the analysis of already finished projects can provide useful data. Corrective measures, as is the case with all problems of control, should be taken carefully.

3.4 Concluding Remarks

MISRA C

Anyone concerned about software quality with respect to programming embedded systems in C is strongly recommended to get familiar with the MISRA C Guidelines. Regardless of the extent to which the rules are eventually applied, expertise in C programming and software quality will grow in every case.

Software Metrics

Metrics are nothing other than statistics about a program's source code. As is the case with all statistics, you can easily be put off the scent unless you thoroughly question the methods involved (evaluation method, circumstances, assumptions used, etc.). Analogously, the advice usually applied to statistics also holds true for metrics: never trust any statistics you didn't fake yourself.

3.5 References and Further Reading

Reference Manual for DAC V4.0, March 2002
User's Guide for DAC V4.0, March 2002
Kan, Stephen H.: Metrics and Models in Software Quality Engineering, Addison Wesley, 1998
Watson, Arthur H.; McCabe, Thomas J.: Structured Testing: A Testing Methodology Using the Cyclomatic Complexity Metric, NIST Special Publication 500-235, September 1996
www.hitex.de: Information about software quality, DAC, and much more. Contains several compliance matrices for MISRA C (one especially for DAC), plus more material about traps and pitfalls of C.
The MISRA home page.
British defence standards.

4 Using RiskCAT to Cope with IEC 61508

The requirements determine the measures.

4.1 Introduction

The requirements that embedded systems and their software must meet normally stem from two sources:

(1) The requirements for the product itself: What is the functionality of the product? What should it cost? What should it look like? etc.

(2) The requirements for the (software) quality of the product: Which risks are present when using the product? Which level of reliability, maintainability and safety is required? How is this level determined? Which state-of-the-art techniques have to be applied to achieve this level?

(Figure: Where do the requirements come from?)

The possible measures to achieve software quality are addressed by the standard IEC 61508.

Applying IEC 61508

Applying IEC 61508 follows these steps:

(1) Determining the risk of the product.
(2) From the risk: determining the Safety Integrity Level (SIL).
(3) From the SIL: determining the measures that have to be applied and their degree of obligation resulting from IEC 61508.

Tool Support

When applying IEC 61508, the problem is the huge number of measures (several hundred) mentioned in it. Therefore, coping with IEC 61508 is almost impossible without proper tool support, especially when taking into account that different parties are involved in this process (quality management, development, customer, ...), needing a documented state of work in order to discuss and review the evolving process. RiskCAT seems to be the only available tool for this purpose.

4.2 Master IEC 61508 Using RiskCAT

Determining the Risk

The (planned) embedded system is analyzed for the risk it could bear. Several aspects are considered, e.g. the risk for the life of a future user, or for the finances and the reputation of the manufacturer. There are three methods in IEC 61508 for determining the risk; two of them are:

(1) The Risk Graph method, a method well known in Germany and defined in DIN V 19250.
(2) Specifying accepted failure rates.

The Risk Graph method is qualitative; specifying failure rates is (obviously) quantitative. Both of these methods are supported by RiskCAT: you simply select predefined descriptions for all aspects of the risk.

(Figure: Determining the risk by using the Risk Graph (left) and by specifying failure rates (right))

Determining the Safety Integrity Level (SIL)

For the risk determined by the Risk Graph method or by specifying failure rates, the SIL is automatically derived by RiskCAT according to the rules in IEC 61508. The SIL connects the risk analysis (in the left part of RiskCAT) with the determination of the measures (in the right part of RiskCAT, see below).
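For orientation, part 1 of IEC 61508 associates target failure measures with each SIL roughly as follows (a hedged summary from memory, not reproduced from the brochure; the normative tables in the standard are authoritative):

SIL | Low demand mode (average probability of failure on demand) | High demand / continuous mode (dangerous failures per hour)
4 | >= 10^-5 to < 10^-4 | >= 10^-9 to < 10^-8
3 | >= 10^-4 to < 10^-3 | >= 10^-8 to < 10^-7
2 | >= 10^-3 to < 10^-2 | >= 10^-7 to < 10^-6
1 | >= 10^-2 to < 10^-1 | >= 10^-6 to < 10^-5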

Additionally, the user is able to adjust the SIL manually if this should become necessary. This is obviously the case if the future operating environment is not yet determined and therefore the risk cannot be estimated. Another reason for a manual adjustment of the SIL could be the wish to voluntarily apply a higher SIL than required, for marketing reasons (e.g. to avoid loss of kudos). Yet another reason could be the wish to keep open the possibility of using the system in a more stringent environment than initially planned, e.g. upgrading from use in non-public areas to use in public areas. With RiskCAT, it is easy to investigate the consequences of voluntarily upgrading or downgrading the SIL, thereby avoiding both spending too much effort on achieving an unnecessarily high SIL and, on the other hand, missing the opportunity to easily gain a higher SIL, and therefore additional quality, for the product. The documentation for the project created by RiskCAT will eventually contain the selected parameters and the SIL derived from them.

Examples of effects when changing the SIL:

-> Raising the SIL from 1 to 2 changes the Static Analysis technique Fagan Inspection from possible to recommended.
-> Raising the SIL from 2 to 3 changes the Module Testing technique Boundary value analysis from recommended to highly recommended.
-> Raising the SIL from 3 to 4 changes the Software Architecture technique Re-try fault recovery mechanisms from recommended to highly recommended.

Determining the Measures and Their Degree of Obligation

This task is done automatically by RiskCAT. RiskCAT sensibly groups the measures and indicates their degree of obligation by a color coding scheme.

Grouping of the measures by RiskCAT

RiskCAT supports applying IEC 61508 by classifying the measures according to their area of application. The areas of application are grouped into six main classes:

(1) General, Lifecycle
(2) General, Non lifecycle
(3) System, Lifecycle
(4) Software, Non lifecycle
(5) Software, Lifecycle, Design and Development
(6) Software, Lifecycle, not Design and Development

These classes can be selected by the vertical tabs on the left-hand side of RiskCAT's main window. The measures belonging to one of the main classes are classified a second time. This classification is visualized by the horizontal tabs at the top and at the bottom of RiskCAT's main window. Naturally, these classifications are different for each main class.

(Figure: The measures of the main class Software, Lifecycle, Design and Development (vertical) and the sub class SW-Architecture-Techniques (horizontal))

Indication of the measure's degree of obligation

Depending on the Safety Integrity Level (SIL), IEC 61508 defines the degree of obligation of the measures. The degrees of obligation are:

-> possible
-> recommended
-> highly recommended
-> mandatory
-> not recommended

RiskCAT indicates the degree of obligation (deduced from the SIL) for a measure by a color coding scheme; e.g., mandatory measures are displayed in green, highly recommended measures in blue. The color codes are indicated at the top of the main window, so they are always at hand. The color scheme makes it easy to differentiate the measures according to their importance and to concentrate on the most important ones. By just changing the SIL, the color of a measure may also change, which provides an easy way to study the effect of an SIL change on the degree of obligation of a measure.

Alternative measures

Some measures are alternatives to each other. RiskCAT groups alternative measures adjacently and marks them with a gray background, which greatly simplifies the handling of these measures.

(Figure: Alternative measures are marked by a gray background)

Selecting Required Measures

Now the user should go through each (sub)class of measures and decide which measures are to be selected. A selected measure can be marked in RiskCAT by double-clicking on it. In principle, it is up to the user to decide whether a measure is selected or not; this especially applies to recommended and highly recommended measures. The trade-off between expenditure and gain in additional quality will certainly affect the decision. Obviously, if the objective is compliance with the standard, the mandatory measures should be selected in the first place. The process of selecting the measures defines the level of quality the product will eventually have and influences the problems to be expected (or not) during maintenance or technical approval.

4.3 Additional Features of RiskCAT

4.3.1 Attaching Notes to Measures

A note can be attached to a measure, e.g. to state the reason why the measure in question was not selected. Measures with a note are marked by an ! in front of the measure.

(Figure: Measures selected (x) and measures with attached note (!))

4.3.2 Documenting the Results

The information gathered for the project in question can be stored in an RTF file for inclusion in the project documentation. The selection made can also be (manually) transferred into a checklist (see next page).

4.3.3 Project Store / Load

The work done in RiskCAT, mainly the selection of measures, can be stored in a file and restored at a later time. This feature also allows the creation of company-specific selections for a certain SIL, which can easily be adjusted to the needs of different projects.

4.3.4 Link to the Standard's Text

RiskCAT comes with the IEC 61508 standard text in PDF format. By using the context menu for a measure, the appropriate standard text can be viewed.

(Figure: Link to the standard's text)

A checklist of selected measures: Requirements on Software Design from part 3, clauses 7.4.2 and 7.4.3 / Table A.2 (the Conformance by column is filled in by the user):

Requirement | Degree of obligation for SIL 2 | Conformance by
Determination of division of responsibility between supplier and user | mandatory |
Features that facilitate to control complexity, express concurrency, ... | mandatory |
Consideration of testability and capacity for safe modification | mandatory |
Design method which facilitates modification | mandatory |
Unambiguous notation for design representation | mandatory |
Minimisation of safety-related part of SW by design | mandatory |
Merging safety and non-safety: highest requirements OR independence | mandatory |
Inclusion of functions to execute proof and diagnostic tests | mandatory |
Inclusion of self-monitoring of control and data flow, failure detection, ... | mandatory |
Clear identification of pre-existing SW | mandatory |
Justification for suitability of pre-existing SW by operation experience or V&V | mandatory |
Data and data generation languages are subject to these requirements | mandatory |
Determination of division of responsibility between supplier and user | mandatory |
Detailed SW architecture design description | mandatory |
Agreement on and documentation of required changes of safety req. | mandatory |
Structured methods including, for example, JSD, MASCOT, SADT and Yourdon | highly rec. |

4.4 More about IEC 61508

IEC 61508 is an important standard in the field of electronic control units and their software. The seven parts of IEC 61508 were released between 1998 and mid-2000. Four attributes make IEC 61508 special when compared with other standards:

(1) IEC 61508 concentrates on the aspects reliability, safety and maintenance. These quality aspects are of high concern for electronic control units.

(2) IEC 61508 covers all aspects of the life cycle.

(3) IEC 61508 forms the basis for standards of special application areas, e.g. for nuclear power plants, railroads or the chemical industry. Therefore, being compliant with IEC 61508 is a good starting point when compliance with more specific standards is the objective.

(4) IEC 61508 is to be applied in all application areas where no specific standard exists, e.g. the automotive industry.

4.5 Conclusion

Due to the huge number of potential measures and the variation of their degree of obligation according to the SIL, applying IEC 61508 seems almost impossible without proper tool support. RiskCAT not only provides an overview of the possible measures and allows the required ones to be selected easily; it also serves documentation and review purposes. RiskCAT is an invaluable tool wherever IEC 61508 is an issue. RiskCAT is also available for the DIN EN railway standards (RiskCAT Railway, only in Germany).

Contents

Automate Testing with Tessy

Tessy is a tool that automates the testing of embedded systems software. This article first gives an overview of the basic functionality of Tessy. Then, the usefulness of Tessy particularly for regression testing is shown. Eventually, doing C1 code coverage analysis using Tessy is discussed.

The Classification Tree Method and the CTE

The Classification Tree Method transforms a functional problem specification into a minimal and sufficient set of test case specifications. This article introduces the method using a very basic problem specification. Furthermore, the Classification Tree Editor (CTE), a graphical tool supporting the Classification Tree Method, is presented.

Static Software Analysis Using DAC

Static software analysis generates information from the source code of a program. It may reveal questionable constructs in the code, and various numeric information about the source code (software metrics) can be calculated. This article introduces the MISRA C Guidelines as a coding standard aimed at safer source code, and introduces software metrics like those of McCabe or Halstead.

Using RiskCAT to Cope with IEC 61508

IEC 61508 has been established as an important standard with respect to electronic control units and their software. The standard comprises several hundred possible measures for better quality. This article introduces IEC 61508 and shows how RiskCAT supports the application of the standard. RiskCAT helps determine the required safety level of the system and, derived from it, rates the degree of obligation of the measures given by IEC 61508.


Expression des Besoins et Identification des Objectifs de Sécurité PREMIER MINISTRE Secrétariat général de la défense nationale Direction centrale de la sécurité des systèmes d information Sous-direction des opérations Bureau conseil Expression des Besoins et Identification

More information

Software Testing for Developer Development Testing. Duvan Luong, Ph.D. Operational Excellence Networks

Software Testing for Developer Development Testing. Duvan Luong, Ph.D. Operational Excellence Networks Software Testing for Developer Development Testing Duvan Luong, Ph.D. Operational Excellence Networks Contents R&D Testing Approaches Static Analysis White Box Testing Black Box Testing 4/2/2012 2 Development

More information

Standard Glossary of Terms used in Software Testing. Version 3.2. Foundation Extension - Usability Terms

Standard Glossary of Terms used in Software Testing. Version 3.2. Foundation Extension - Usability Terms Standard Glossary of Terms used in Software Testing Version 3.2 Foundation Extension - Usability Terms International Software Testing Qualifications Board Copyright Notice This document may be copied in

More information

Certification Authorities Software Team (CAST) Position Paper CAST-25

Certification Authorities Software Team (CAST) Position Paper CAST-25 Certification Authorities Software Team (CAST) Position Paper CAST-25 CONSIDERATIONS WHEN USING A QUALIFIABLE DEVELOPMENT ENVIRONMENT (QDE) IN CERTIFICATION PROJECTS COMPLETED SEPTEMBER 2005 (Rev 0) NOTE:

More information

ISO/IEC INTERNATIONAL STANDARD

ISO/IEC INTERNATIONAL STANDARD INTERNATIONAL STANDARD ISO/IEC 14143-2 First edition 2002-11-15 Information technology Software measurement Functional size measurement Part 2: Conformity evaluation of software size measurement methods

More information

Developing Real-Time Systems

Developing Real-Time Systems Developing Real-Time Systems by George R. Dimble, Jr. Introduction George R. Trimble, Jr., obtained a B.A. from St. John's College in 1948 and an M.A. in mathematics from the University of Delaware in

More information

By V-cubed Solutions, Inc. Page1. All rights reserved by V-cubed Solutions, Inc.

By V-cubed Solutions, Inc.   Page1. All rights reserved by V-cubed Solutions, Inc. By V-cubed Solutions, Inc. Page1 Purpose of Document This document will demonstrate the efficacy of CODESCROLL CODE INSPECTOR, CONTROLLER TESTER, and QUALITYSCROLL COVER, which has been developed by V-cubed

More information

Tool Qualification Plan for Testwell CTC++

Tool Qualification Plan for Testwell CTC++ Tool Qualification Plan for Testwell CTC++ Version: 0.8 Date: 2014-11-17 Status: Author: File: Size: Generic / Adapted / Presented / Generated / Reviewed / Final Dr. Martin Wildmoser, Dr. Oscar Slotosch

More information

SERIES X: DATA NETWORKS, OPEN SYSTEM COMMUNICATIONS AND SECURITY. ITU-T X.660 Guidelines for using object identifiers for the Internet of things

SERIES X: DATA NETWORKS, OPEN SYSTEM COMMUNICATIONS AND SECURITY. ITU-T X.660 Guidelines for using object identifiers for the Internet of things I n t e r n a t i o n a l T e l e c o m m u n i c a t i o n U n i o n ITU-T TELECOMMUNICATION STANDARDIZATION SECTOR OF ITU Series X Supplement 31 (09/2017) SERIES X: DATA NETWORKS, OPEN SYSTEM COMMUNICATIONS

More information

Sample Question Paper. Software Testing (ETIT 414)

Sample Question Paper. Software Testing (ETIT 414) Sample Question Paper Software Testing (ETIT 414) Q 1 i) What is functional testing? This type of testing ignores the internal parts and focus on the output is as per requirement or not. Black-box type

More information

OCL Support in MOF Repositories

OCL Support in MOF Repositories OCL Support in MOF Repositories Joachim Hoessler, Michael Soden Department of Computer Science Technical University Berlin hoessler@cs.tu-berlin.de, soden@cs.tu-berlin.de Abstract From metamodels that

More information

Topics in Software Testing

Topics in Software Testing Dependable Software Systems Topics in Software Testing Material drawn from [Beizer, Sommerville] Software Testing Software testing is a critical element of software quality assurance and represents the

More information

Sample Exam. Certified Tester Foundation Level

Sample Exam. Certified Tester Foundation Level Sample Exam Certified Tester Foundation Level Answer Table ASTQB Created - 2018 American Stware Testing Qualifications Board Copyright Notice This document may be copied in its entirety, or extracts made,

More information

Generic Requirements Management and Verification Process for Ground Segment and Mission Operations Preparation

Generic Requirements Management and Verification Process for Ground Segment and Mission Operations Preparation Generic Requirements Management and Verification Process for Ground Segment and Mission Operations Preparation Dr. Frank Wallrapp 1 and Andreas Lex 2 German Space Operations Center, DLR Oberpfaffenhofen,

More information

Testing and Validation of Simulink Models with Reactis

Testing and Validation of Simulink Models with Reactis Testing and Validation of Simulink Models with Reactis Build better embedded software faster. Generate tests from Simulink models. Detect runtime errors. Execute and debug Simulink models. Track coverage.

More information

CITS5501 Software Testing and Quality Assurance Formal methods

CITS5501 Software Testing and Quality Assurance Formal methods CITS5501 Software Testing and Quality Assurance Formal methods Unit coordinator: Arran Stewart May 1, 2018 1 / 49 Sources Pressman, R., Software Engineering: A Practitioner s Approach, McGraw-Hill, 2005

More information

Verification and Validation

Verification and Validation Steven Zeil February 13, 2013 Contents 1 The Process 3 1 2 Non-Testing V&V 7 2.1 Code Review....... 8 2.2 Mathematically-based verification......................... 19 2.3 Static analysis tools... 23 2.4

More information

Verification and Validation

Verification and Validation Steven Zeil February 13, 2013 Contents 1 The Process 2 2 Non-Testing V&V 3 2.1 Code Review........... 4 2.2 Mathematically-based verification.................................. 8 2.3 Static analysis tools.......

More information

ParkIT Application User Manual Page 1/63

ParkIT Application User Manual Page 1/63 ParkIT Application User Manual Page 1/63 Version: v1.5 Document version: 2016.07.28 Table of Contents 1 Application overview... 5 1.1 Basic handling methods... 5 2 Main screen... 6 3 User authentication...

More information

Addressing Verification Bottlenecks of Fully Synthesized Processor Cores using Equivalence Checkers

Addressing Verification Bottlenecks of Fully Synthesized Processor Cores using Equivalence Checkers Addressing Verification Bottlenecks of Fully Synthesized Processor Cores using Equivalence Checkers Subash Chandar G (g-chandar1@ti.com), Vaideeswaran S (vaidee@ti.com) DSP Design, Texas Instruments India

More information

ASTQB Advance Test Analyst Sample Exam Answer Key and Rationale

ASTQB Advance Test Analyst Sample Exam Answer Key and Rationale ASTQB Advance Test Analyst Sample Exam Answer Key and Rationale Total number points = 120 points Total number points to pass = 78 points Question Answer Explanation / Rationale Learning 1 A A is correct.

More information

Dataworks Development, Inc. P.O. Box 174 Mountlake Terrace, WA (425) fax (425)

Dataworks Development, Inc. P.O. Box 174 Mountlake Terrace, WA (425) fax (425) Dataworks Development, Inc. P.O. Box 174 Mountlake Terrace, WA 98043 (425) 673-1974 fax (425) 673-2506 The Freezerworks Validation Verification Package Dataworks Development, Inc. has over 20 years of

More information

This document is a preview generated by EVS

This document is a preview generated by EVS INTERNATIONAL STANDARD ISO/IEC/ IEEE 29119-3 First edition 2013-09-01 Software and systems engineering Software testing Part 3: Test documentation Ingénierie du logiciel et des systèmes Essais du logiciel

More information

1.1 Jadex - Engineering Goal-Oriented Agents

1.1 Jadex - Engineering Goal-Oriented Agents 1.1 Jadex - Engineering Goal-Oriented Agents In previous sections of the book agents have been considered as software artifacts that differ from objects mainly in their capability to autonomously execute

More information

Software Engineering (CSC 4350/6350) Rao Casturi

Software Engineering (CSC 4350/6350) Rao Casturi Software Engineering (CSC 4350/6350) Rao Casturi Testing Software Engineering -CSC4350/6350 - Rao Casturi 2 Testing What is testing? Process of finding the divergence between the expected behavior of the

More information

Verification, Validation, and Test with Model-Based Design

Verification, Validation, and Test with Model-Based Design 2008-01-2709 Verification, Validation, and Test with Model-Based Design Copyright 2008 The MathWorks, Inc Tom Erkkinen The MathWorks, Inc. Mirko Conrad The MathWorks, Inc. ABSTRACT Model-Based Design with

More information

- Table of Contents -

- Table of Contents - - Table of Contents - 1 INTRODUCTION... 1 1.1 OBJECTIVES OF THIS GUIDE... 1 1.2 ORGANIZATION OF THIS GUIDE... 2 1.3 COMMON CRITERIA STANDARDS DOCUMENTS... 3 1.4 TERMS AND DEFINITIONS... 5 2 BASIC KNOWLEDGE

More information

Conformance Requirements Guideline Version 0.1

Conformance Requirements Guideline Version 0.1 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 Editors: Conformance Requirements Guideline Version 0.1 Aug 22, 2001 Lynne Rosenthal (lynne.rosenthal@nist.gov)

More information

Control of Processes in Operating Systems: The Boss-Slave Relation

Control of Processes in Operating Systems: The Boss-Slave Relation Control of Processes in Operating Systems: The Boss-Slave Relation R. Stockton Gaines Communications Research Division, Institute for Defense Analyses, Princeton NJ and The RAND Corporation, Santa Monica

More information

Incompatibility Dimensions and Integration of Atomic Commit Protocols

Incompatibility Dimensions and Integration of Atomic Commit Protocols The International Arab Journal of Information Technology, Vol. 5, No. 4, October 2008 381 Incompatibility Dimensions and Integration of Atomic Commit Protocols Yousef Al-Houmaily Department of Computer

More information

Software Testing CS 408

Software Testing CS 408 Software Testing CS 408 1/09/18 Course Webpage: http://www.cs.purdue.edu/homes/suresh/408-spring2018 1 The Course Understand testing in the context of an Agile software development methodology - Detail

More information

Experience Report: Error Distribution in Safety-Critical Software and Software Risk Analysis Based on Unit Tests

Experience Report: Error Distribution in Safety-Critical Software and Software Risk Analysis Based on Unit Tests Experience Report: Error Distribution in Safety-Critical Software and Software Risk Analysis Based on Unit Tests Stephan Ramberger, Thomas Gruber, Wolfgang Herzner Division Information Technologies ARC

More information

Software Engineering Fall 2015 (CSC 4350/6350) TR. 5:30 pm 7:15 pm. Rao Casturi 11/10/2015

Software Engineering Fall 2015 (CSC 4350/6350) TR. 5:30 pm 7:15 pm. Rao Casturi 11/10/2015 Software Engineering Fall 2015 (CSC 4350/6350) TR. 5:30 pm 7:15 pm Rao Casturi 11/10/2015 http://cs.gsu.edu/~ncasturi1 Class announcements Final Exam date - Dec 1 st. Final Presentations Dec 3 rd. And

More information

DOWNLOAD PDF BIG IDEAS MATH VERTICAL SHRINK OF A PARABOLA

DOWNLOAD PDF BIG IDEAS MATH VERTICAL SHRINK OF A PARABOLA Chapter 1 : BioMath: Transformation of Graphs Use the results in part (a) to identify the vertex of the parabola. c. Find a vertical line on your graph paper so that when you fold the paper, the left portion

More information

Inheriting, Copying, Deleting COMOS. Platform Inheriting, Copying, Deleting. Trademarks 1. Inheritance. Copying: General Definitions 3

Inheriting, Copying, Deleting COMOS. Platform Inheriting, Copying, Deleting. Trademarks 1. Inheritance. Copying: General Definitions 3 Trademarks 1 Inheritance 2 COMOS Platform Operating Manual Copying: General Definitions 3 Copying with the Navigator 4 Copy structure 5 Copying across projects 6 Cross-class copying 7 The Object Matcher

More information

Examination Questions Time allowed: 1 hour 15 minutes

Examination Questions Time allowed: 1 hour 15 minutes Swedish Software Testing Board (SSTB) International Software Testing Qualifications Board (ISTQB) Foundation Certificate in Software Testing Practice Exam Examination Questions 2011-10-10 Time allowed:

More information

Software Engineering Fall 2014

Software Engineering Fall 2014 Software Engineering Fall 2014 (CSC 4350/6350) Mon.- Wed. 5:30 pm 7:15 pm ALC : 107 Rao Casturi 11/10/2014 Final Exam date - Dec 10 th? Class announcements Final Presentations Dec 3 rd. And Dec 8 th. Ability

More information

CTFL -Automotive Software Tester Sample Exam Paper Syllabus Version 2.0

CTFL -Automotive Software Tester Sample Exam Paper Syllabus Version 2.0 Surname, Forename: Gender: male female Company address: Telephone: Fax: E-mail-address: Invoice address: Training provider: Trainer: CTFL -Automotive Software Tester Sample Exam Paper Syllabus Version

More information

CERTIFIED. Faster & Cheaper Testing. Develop standards compliant C & C++ faster and cheaper, with Cantata automated unit & integration testing.

CERTIFIED. Faster & Cheaper Testing. Develop standards compliant C & C++ faster and cheaper, with Cantata automated unit & integration testing. CERTIFIED Faster & Cheaper Testing Develop standards compliant C & C++ faster and cheaper, with Cantata automated unit & integration testing. Why Industry leaders use Cantata Cut the cost of standards

More information

Topic C. Communicating the Precision of Measured Numbers

Topic C. Communicating the Precision of Measured Numbers Topic C. Communicating the Precision of Measured Numbers C. page 1 of 14 Topic C. Communicating the Precision of Measured Numbers This topic includes Section 1. Reporting measurements Section 2. Rounding

More information

CERT C++ COMPLIANCE ENFORCEMENT

CERT C++ COMPLIANCE ENFORCEMENT CERT C++ COMPLIANCE ENFORCEMENT AUTOMATED SOURCE CODE ANALYSIS TO MAINTAIN COMPLIANCE SIMPLIFY AND STREAMLINE CERT C++ COMPLIANCE The CERT C++ compliance module reports on dataflow problems, software defects,

More information

Fachgebiet Softwaretechnik, Heinz Nixdorf Institut, Universität Paderborn. 4. Testing

Fachgebiet Softwaretechnik, Heinz Nixdorf Institut, Universität Paderborn. 4. Testing 4. vs. Model Checking (usually) means checking the correctness of source code Model Checking means verifying the properties of a model given in some formal (not program code) notation Attention: things

More information

Definition: A data structure is a way of organizing data in a computer so that it can be used efficiently.

Definition: A data structure is a way of organizing data in a computer so that it can be used efficiently. The Science of Computing I Lesson 4: Introduction to Data Structures Living with Cyber Pillar: Data Structures The need for data structures The algorithms we design to solve problems rarely do so without

More information