SESSION PRE-1. Exploratory Test Automation. DOUG HOFFMAN, Software Quality Methods, LLC.


1 SESSION PRE-1 DOUG HOFFMAN, Software Quality Methods, LLC.
About Doug Hoffman
I am a management consultant in testing/QA strategy and tactics. I help plan quality strategies and tactical approaches for organizations, especially esoteric test automation. I gravitated into quality assurance from engineering. I've been a production engineer, developer, support engineer, tester, writer, and instructor, and I've managed manufacturing quality assurance, software quality assurance, technical support, software development, and documentation. My work has cut across the industry from start-ups to multi-trillion dollar organizations. Along the way I have learned a great deal about software testing and automation. I enjoy sharing what I've learned with interested people.
Current employment: President of Software Quality Methods, LLC. (SQM); management consultant in strategic and tactical planning for software quality
Education: B.A. in Computer Science; M.S. in Electrical Engineering (Digital Design and Information Science); MBA
Professional: President, Association for Software Testing; Past Chair, Silicon Valley Section, American Society for Quality (ASQ); Founding Member and Past Chair, Santa Clara Valley Software Quality Association (SSQA); Certified in Software Quality Engineering (ASQ-CSQE, 1995); Certified Quality Manager (ASQ-CMQ/OE, 2003); Participant in the Los Altos Workshop on Software Testing and dozens of other offshoots 2 Page 1

2 Today's Topics Test automation Automated exploratory tests Ancillary test support Test oracles and automation Results comparison 3 Automation Background 4 Page 2

3 Opportunities For Automation Program analysis Test design Test case management/selection Input selection/generation Automated test case Test execution control Actual results capture Expected results generation Results comparison Report generation 5 Automation Defined For This Class Get the computer to do one or more of: Test design Input selection/generation Automated test case Test execution control Actual results capture Expected results generation Results comparison 6 Page 3

4 Automated vs. Manual Tests An automated test is not equivalent to the most similar manual test: Automated comparison is typically more precise (and may be tripped by irrelevant discrepancies) Skilled human comparison samples a wider range of dimensions, noting oddities that one wouldn't program the computer to detect Automated input is consistent, while humans cannot closely replicate their test activities 7 Automation Narrows Our Scope An automated test is more limited than a manual test: The test exercise must be automated in advance People can integrate outside experience People may gain insights We can only check machine available results We only check result values that we pre-specify Cannot easily back up when something unexpected happens 8 Page 4

5 Typical Automated Tests Are based on the functions of a test tool Automate a manual tester's actions Are a list of test activities (script) Work at the UI level Do program checking at specified points in the script Are used to repeat and speed up manual testing 9 Valuable Regression Automation Build/smoke tests Required repeatability Rerun many times Demo script Comfort managers 10 Page 5

6 Questions We Should Ask About Testing and Automation Should we limit our thinking to what a tool does? Should we focus automation on things we can do manually and script? Should we limit ourselves to UIs or APIs? Are we checking everything that's important? Do sped-up (automated) manual tests find more or different bugs than running the tests manually? Can inefficient or approximate tests be valuable? Must tests do the same things every time? 11 Exploratory Automated Tests 12 Page 6

7 Exploratory Automation? Enables and amplifies the tester's abilities Does something new every time May use massive numbers of iterations May use multiple parallel oracles Can find bugs we never imagined 13 Enable the Exploratory Tester Enables the manual tester to do more things Enables the manual tester to work faster Amplifies the manual tester's abilities Can look under the covers Real-time monitoring of the system 14 Page 7

8 Randomness and Tests Random number generators Pseudo-Random numbers Generating random seeds Repeatable by entering seed value Randomized input values Randomized data generation 15 Advantages of Exploratory Automation Does things a manual tester cannot do Does something new every time May use massive numbers of iterations May feed inputs directly to SUT Oracles may check internal information May have multiple parallel oracles Supplements baseline tests Can uncover obscure bugs Can uncover bugs impossible to find manually 16 Page 8
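
The repeatable-seed idea above can be made concrete in a few lines. The sketch below is illustrative only: `exercise_sut` is a hypothetical stand-in for whatever feeds one input to the system under test, and the run prints its seed so a failing input sequence can be replayed exactly.

```python
import random
import string

def exercise_sut(text):
    """Hypothetical hook: feed one generated input to the system under test."""
    pass

def random_exploratory_run(seed=None, iterations=1000):
    """Drive the SUT with pseudo-random inputs; report the seed so the run can be replayed."""
    if seed is None:
        seed = random.randrange(2**32)          # fresh seed for a new exploration
    rng = random.Random(seed)                   # private generator: nothing else disturbs the sequence
    print(f"Random seed for this run: {seed}")  # log the seed so any failure can be reproduced
    for _ in range(iterations):
        length = rng.randint(0, 50)
        exercise_sut("".join(rng.choices(string.printable, k=length)))
    return seed

# Re-running with the same seed value replays the identical input sequence:
#   failing_seed = random_exploratory_run()
#   random_exploratory_run(seed=failing_seed)
```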

9 Examples of Exploratory Automation Cem Kaner's Telenova example using random events Statistical packet profiles for data link testing Using Dumb monkeys and MS Word Sandboxed random regression tests 1/3 or 3x heuristic in test harness Periodic database unload/check Database links Database locking (single and multi-threaded) Device front panel state machine long walks Database load/unload dropouts Database unbalanced splitting Random machine instruction generation 17 Disadvantages of Exploratory Automation May not be repeatable (even with seeds) Difficulty of capturing program and system information for diagnosis May use multiple real-time oracles Coordination of autonomous oracles with the test Does not provide rigorous coverage Can uncover bugs that can't be fixed 18 Page 9

10 A Model of Test Execution 19 Oracles and Test Automation Good automated testing depends on our ability to programmatically detect whether the SUT behaves in expected or unexpected ways Our ability to automate testing is fundamentally constrained by our ability to create and use oracles. 20 Page 10

11 The Oracle The principle or mechanism for telling whether the SUT behavior appears OK or if further investigation is required Answers the question is this expected behavior? A fundamental part of every test execution Expected result is NOT required in advance Multiple oracles can be used for a test 21 Expanded Software Testing Model Test Inputs Test Results Precondition Data Precondition Program State Environmental Inputs System Under Test Postcondition Data Post-condition Program State Environmental Results 22 Page 11

12 Implications of the Expanded Model The test exercise is the easy part We don't control all inputs We can't check everything Multiple domains are involved We don't know all the factors 23 Important Factors in Outcome Comparison Which outcomes do we check? How do we know what to expect? Do we know how to capture it? Can we store the results? What differences matter? Are fuzzy comparisons needed? When do we compare outcomes? 24 Page 12

13 Software Test Automation: About Test Oracles 25 The Test Oracle Two slightly different views on the application of the word "oracle": Reference Function: You ask the oracle what the correct answer is Reference and Evaluation Function: You ask it whether the program behavior is abnormal Using an oracle, you can compare the program's behavior to a reference (predicted behavior) and decide whether the program passed the test. Deterministic oracle (mismatch means abnormal) Probabilistic oracle (means probably abnormal) 26 Page 13

14 A Caveat About Test Results It is important to recognize that evaluations are heuristic: reported passes and fails are only guesses We can have false alarms: A mismatch between actual and expected behavior might not matter. We must investigate when a test fails to know if there is something to report. We can miss defects: A match between actual and expected behavior might result from the same error in both, a limitation or error in the test, not checking the right things, or incomplete coverage. These are silent misses and we don't ever know to investigate. 27 Reference Function Under this view, you compare your results to those obtained from the oracle using the same inputs The oracle is the method of generating the expected results Comparison and evaluation may be handled separately Check for equality between actual and expected results If the results match, there is nothing interesting ("Pass") If they don't match, further investigation is required ("Fail") 28 Page 14

15 Reference And Evaluation Function Under this view, the oracle accepts both the inputs and results The oracle checks whether the results could be consistent with the inputs Comparison and evaluation are handled at once It checks the expected results for the possibility or likelihood they are wrong given the inputs Pass if results are possible or likely Further investigation is required if the results are not possible or are unlikely ("Fail") 29 Evaluation Function Under this view, the oracle only looks at results The oracle checks whether the results are consistent with some conditions or rules May be real-time or post-execution Some examples: Memory leak tests Database consistency check Statistical profiling Self-Verifying data 30 Page 15

16 Where To Get Expected Results Previous version Competitive product Model System or subsystem Heuristics (assertions, fuzzing, limits, etc.) Invariants Behavioral characteristics Secondary characteristics Embedded data 31 Which Results To Check Expected results Anticipated likely errors Major environmental factors Invariants Available easy oracles 32 Page 16

17 Checking Results Exact comparison Values Ranges Behaviors Key words/values Heuristic comparison Similarity Algorithm based Secondary characteristics Check during or after the test run 33 Airport X-Ray Exercise Test for auto-recognition of a knife What are we testing for? Which outcomes are interesting? How will we know? How can we automate the stimulus (inputs)? How can we automatically detect the outcome(s)? What method of verdict generation? 34 Page 17

18 Software Test Automation: Oracle Characteristics 35 Oracles: Challenges Completeness of information Accuracy of information Usability of the oracle or of its results Maintainability of the oracle Complexity when compared to the SUT Temporal relationships Costs 36 Page 18

19 Oracle Completeness Input Coverage Result Coverage Function Coverage Sufficiency Types of errors possible SUT environments There may be more than one oracle for the SUT Inputs may affect more than one oracle 37 Oracle Accuracy How similar to SUT Arithmetic accuracy Statistically similar How independent from SUT Algorithms Sub-programs & libraries System platform Operating environment Close correspondence makes common mode faults more likely and reduces maintainability How extensive The more ways in which the oracle matches the SUT, i.e. the more complex the oracle, the more errors Types of possible errors 38 Page 19

20 Oracle Usability Form of information: Bits and bytes Electronic signals Hardcopy and display Location of information Data set size Fitness for intended use Availability of comparators Support in SUT environments 39 Oracle Maintainability COTS or custom Custom oracle can become more complex than the SUT More complex oracles make more errors Keep correspondence through SUT changes Test exercises Test data Tools Ancillary support activities required 40 Page 20

21 Oracle Complexity Correspondence with SUT Coverage of SUT domains and functions Accuracy of generated results Maintenance effort to keep correspondence through SUT changes Test exercises Test data Tools Ancillary support activities required 41 Oracle Temporal Relationships How fast to generate results How fast to compare When is the oracle run When are results generated When are results compared 42 Page 21

22 Oracle Costs Creation or acquisition costs Maintenance of oracle and comparators Execution cost Cost of comparisons Additional analysis of errors Cost of misses Cost of false alarms 43 Software Test Automation: Types of Test Oracles 44 Page 22

23 Types of Test Oracles None Perfect Consistency Self-Verifying Model based Hand-crafted Heuristic Statistical Computational Human 45 Notes On Test Oracles At least one type is used in every test Implemented oracles may have some characteristics of multiple types of oracles Multiple oracles may be used for one test Oracles may be independent or integrated into a test The oracles are key to good automation 46 Page 23

24 Oracle Taxonomy
No Oracle
  Definition: Doesn't check correctness of results (only that some results were produced)
  Advantages: Can run any amount of data (limited only by the time the SUT takes)
  Disadvantages: Only spectacular failures are noticed
Complete Oracle
  Definition: Independent generation of all expected results
  Advantages: No encountered errors go undetected; Can be used for many different tests
  Disadvantages: Expensive to implement and maintain; Complex and often time-consuming when run
Consistency
  Definition: Verifies current run results with a previous run
  Advantages: Fast way to automate using an oracle (if available); Verification is straightforward; Can generate and verify large amounts of data
  Disadvantages: Original run may include undetected errors
47 Oracle Taxonomy
Self-Verifying
  Definition: Embeds answer within data in the messages
  Advantages: Allows extensive post-test analysis; Does not require external oracles; Verification is based on message contents; Can generate and verify large amounts of complex data
  Disadvantages: Must define answers and generate messages to contain them
Model Based
  Definition: Uses digital data model of SUT behavior
  Advantages: May use digital model for multiple tests; Digital form of model easier to maintain than automated test; Tests may work for multiple SUTs by using different models; Useful for some very complex SUTs
  Disadvantages: Maintenance of complex SUT models is expensive; Model must match expected behavior
Hand-Crafted
  Definition: Result is carefully selected by test designer
  Advantages: Expected result can be well understood
  Disadvantages: Does the same thing every time; Limited number of cases can be generated
48 Page 24

25 Oracle Taxonomy
Heuristic
  Definition: Verifies some characteristics
  Advantages: Faster and easier than Perfect Oracle; Much less expensive to create and use; May be reusable across SUTs and tests
  Disadvantages: Can miss systematic errors; Can miss obvious errors
Statistical
  Definition: Uses statistical correlation between inputs and outcomes
  Advantages: Allows checking of very large data sets; Allows checking of live systems data; Allows checking after the fact
  Disadvantages: May miss systematic errors; Can miss obvious errors
Computational
  Definition: Reverses the behavior of the SUT to revert results to inputs
  Advantages: Good for mathematical functions; Good for straightforward transformations
  Disadvantages: Limited applicability; May require complex programming
Human
  Definition: Applies a person's brain power to decide correctness
  Advantages: Available; Flexible (can always be applied); Applies a broad spectrum of filters
  Disadvantages: Slow; Error prone; Easily distracted
49 No Oracle Strategy Method: Generate [usually random] inputs Run the test Easy to do Tests can run fast Only spectacular events are noticed May give a false sense of accomplishment 50 Page 25

26 No Oracle In Context May be useful: Early development testing Robustness testing Load or life testing Usually avoided: Results must be correct Used as a primary test mechanism Input non-trivial to generate 51 No Oracle Example Test functions through an API: 1. Select a random function and use random parameter values 2. Run the test exercise 3. Repeat without checking results 4. Watch for hangs or crashes 52 Page 26
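
As a rough illustration of the API walk above, here is a minimal no-oracle driver. It assumes the API under test is exposed as callables on a module object; routine argument-rejection exceptions are ignored, and the only failure signals watched for are crashes, hangs, or other unhandled exceptions.

```python
import inspect
import random

def no_oracle_api_walk(module, iterations=10_000, seed=42):
    """Call randomly chosen functions with random arguments; results are never checked (no oracle)."""
    rng = random.Random(seed)
    functions = [obj for _, obj in inspect.getmembers(module) if callable(obj)]
    for _ in range(iterations):
        func = rng.choice(functions)
        args = [rng.randint(-1000, 1000) for _ in range(rng.randint(0, 3))]
        try:
            func(*args)                              # the return value is deliberately ignored
        except (TypeError, ValueError, OverflowError, ZeroDivisionError):
            pass                                     # routine rejection of nonsense arguments
        # Anything else -- an unhandled exception, a hang, or a process crash -- is what we watch for.
```

Pointing it at a module such as the standard `math` library exercises the API at high volume, but only spectacular failures would ever be noticed.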

27 No Oracle Example Test random character input to a word processor: 1. Generate a random input character 2. Run the test exercise 3. Repeat without checking results 4. Watch for hangs or crashes 53 No Oracle Model Test Inputs Test Results Precondition Data Precondition Program State Environmental Inputs System Under Test Postcondition Data Post-condition Program State Environmental Results 54 Page 27

28 Perfect Oracle Strategy Independent implementation Complete coverage over domains Input ranges Result ranges Correct results Usually expensive 55 Perfect Oracle In Context May be useful: Inexpensive to create or acquire Extremely high reliability required Reasonably good coverage of a range Usually avoided: Too expensive to create or acquire High risk that the oracle may be wrong Other types of oracles are sufficient 56 Page 28

29 Perfect Oracle Example Create a perfect sine (x) function oracle: 1. Determine the algorithm used by the SUT 2. Implement a separate sine (x) function using a different algorithm 3. Generate random values for x 4. The values returned from both sine (x) functions should match 57 Perfect Oracle Model Verdict Test Inputs Test = Test Results Precondition Data Precondition Program State Environmental Inputs Test Oracle System Under Test Postcondition Data Postcondition Program State Environmental Results 58 Page 29
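
A hedged sketch of the sine oracle above: a separately coded Taylor-series sine acts as the independent reference algorithm, `math.sin` merely stands in for the SUT's implementation, and the comparison allows a small floating-point tolerance rather than demanding bit-for-bit equality.

```python
import math
import random

def taylor_sine(x, terms=20):
    """Independent sine implementation (Taylor series) used as the oracle's reference algorithm."""
    x = math.fmod(x, 2 * math.pi)        # reduce the argument so the series converges quickly
    total, term = 0.0, x
    for n in range(terms):
        total += term
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total

def check_sine(sut_sine=math.sin, trials=10_000, seed=1):
    """Perfect-oracle style check: the SUT result and the independent reference should agree."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.uniform(-1000.0, 1000.0)
        expected = taylor_sine(x)
        actual = sut_sine(x)             # math.sin stands in for the SUT's sine function
        assert math.isclose(actual, expected, abs_tol=1e-9), (x, actual, expected)

check_sine()
```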

30 Consistency Oracle Strategy Checking for changes or differences Regression checking Validated Unvalidated Alternate versions or platforms I call it A / B comparison 59 Consistency Oracle Strategy Checking for changes or differences Most frequently used strategy Most common methods: Golden master Run against competing product Run against another version of product (Note that all oracles might be called consistency oracles. I'm using the term to describe oracles that compare with saved or separately generated results.) 60 Page 30

31 Consistency Oracle In Context May be useful: Inexpensive to create High likelihood that the oracle is correct Have preexisting automated tests Usually avoided: High risk that the oracle may be wrong Only type of oracle used Other types of oracles are available 61 Simple Consistency Oracle Example Create a consistency test using log files: 1. Create a test exercise that provides a log 2. Run the test exercise and save the log to a file 3. Run the test exercise again 4. Compare the logs 62 Page 31
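
A minimal sketch of the log-file consistency check above, assuming the test exercise can be wrapped in a callable that returns its log text; the saved log from a trusted earlier run serves as the golden master, and any difference is flagged for investigation rather than declared a bug.

```python
import difflib
from pathlib import Path

def run_and_log(exercise, log_path):
    """Run the test exercise and save its log output (`exercise` is a hypothetical callable returning log text)."""
    Path(log_path).write_text(exercise())

def compare_logs(baseline_path, current_path):
    """Consistency oracle: the new run's log should match the saved (golden) log."""
    baseline = Path(baseline_path).read_text().splitlines(keepends=True)
    current = Path(current_path).read_text().splitlines(keepends=True)
    diff = list(difflib.unified_diff(baseline, current, fromfile="baseline", tofile="current"))
    if diff:
        print("".join(diff))     # a mismatch is not necessarily a bug, but someone must look
        return "investigate"
    return "pass"

# First (trusted) run:  run_and_log(my_exercise, "baseline.log")
# Later runs:           run_and_log(my_exercise, "current.log"); compare_logs("baseline.log", "current.log")
```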

32 Consistency Oracle Example Create a consistency test using screen shots: 1. Create a test exercise 2. Capture screen shots at designated places in the test 3. Identify places that will change (e.g., today's date) 4. Run the test exercise again 5. Compare the screen shots at the same places (masking out places that change) 63 Consistency Oracle Model Verdict Test Inputs Test Results Test = Precondition Data Precondition Program State Environmental Inputs Previous Results System Under Test Postcondition Data Postcondition Program State Environmental Results 64 Page 32

33 Self-Verifying Data (SVD) Strategy Self-Descriptive data Cyclic algorithms Shared keys (with algorithms) 65 SVD In Context May be useful: High volume of inputs or referenced data Key or seed can be used for data generation Straightforward to incorporate key with data Usually avoided: Outcomes don't reflect SVD data Complex data structure Data not easily generated from a key Key not easy to include with data 66 Page 33

34 Self-Descriptive SVD 1. Describe the expected result in the result Name of font (e.g., Comic Sans MS) Color (e.g., Blue) Size (e.g., 36 point) The following line should be xyz Etc. 67 Cyclic Algorithm SVD 1. Use a pattern when data is generated E.g., start, increment, count E.g., basic string, count of iterations 2. Identify the pattern in the result 3. Confirm that the actual pattern is expected Build into comparator Embed with the data 68 Page 34
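
The cyclic-algorithm idea above can be sketched in a few lines: the generation pattern (start, increment, count) is embedded with the data, and the checker re-derives the expected sequence from that embedded header. The header format here is invented purely for illustration.

```python
def generate_cyclic_records(start, increment, count):
    """Generate records whose values follow a simple arithmetic pattern, with the pattern embedded alongside."""
    header = f"CYCLIC:{start}:{increment}:{count}"           # embed the generation parameters with the data
    records = [start + i * increment for i in range(count)]
    return header, records

def verify_cyclic_records(header, records):
    """SVD check: re-derive the expected pattern from the header and confirm the data still matches it."""
    _, start, increment, count = header.split(":")
    expected = [int(start) + i * int(increment) for i in range(int(count))]
    return records == expected        # any dropout, duplicate, corruption, or reordering breaks the pattern

header, data = generate_cyclic_records(start=100, increment=7, count=1000)
# ... the data would pass through the SUT here (stored, transmitted, transformed back) ...
assert verify_cyclic_records(header, data)
```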

35 Shared Keys SVD 1. Generate a coded identifier (e.g., random number seed) 2. Generate the test data using an algorithm 3. Attach the seed to the data Embedded Added field or envelope 4. Confirm by applying the algorithm using the identifier 69 Simple Shared Keys SVD Example Create a random name: 1. Generate and save random number seed (S) and convert to a string 2. Use the first random value using RAND(S) as the Length (L) 3. Generate random name (N) with L characters using RAND() 4. Concatenate the seed to the name 70 Page 35
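
Here is a rough Python rendering of the shared-key name example above, assuming an 8-digit numeric seed (per the sizing slide that follows) and alphabetic name characters; the checker strips the trailing seed, regenerates the name with the same algorithm, and compares.

```python
import random
import string

SEED_DIGITS = 8   # assumption: an 8-character numeric seed, matching the sizing example on the next slide

def make_self_verifying_name(seed=None):
    """Build a random 'name' with its own generation seed appended, so a checker can regenerate it later."""
    if seed is None:
        seed = random.randrange(10**SEED_DIGITS)
    rng = random.Random(seed)
    length = rng.randint(1, 120)                                  # first random value gives the name length (L)
    name = "".join(rng.choices(string.ascii_letters, k=length))   # the name body (N)
    return name + f"{seed:0{SEED_DIGITS}d}"                       # concatenate the 8-character seed (S)

def verify_self_verifying_name(field):
    """Strip the trailing seed, regenerate the name with the same algorithm, and compare."""
    name, seed = field[:-SEED_DIGITS], int(field[-SEED_DIGITS:])
    rng = random.Random(seed)
    expected = "".join(rng.choices(string.ascii_letters, k=rng.randint(1, 120)))
    return name == expected

field = make_self_verifying_name()
assert verify_self_verifying_name(field)
```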

36 Simple SVD Example Assume the seed (S) is 8 characters and the name field has a maximum of 128 characters Generate a random name with a length of L characters (a maximum of 120) Name = [L random characters][8-character S], 9 to 128 characters long 71 Shared Keys SVD Example Create a database record: 1. Generate a random number Seed (S) 2. Store the Seed value in an added field within the record 3. Generate the record using the Seed and an algorithm 4. Verify records using the Seed and algorithm 72 Page 36

37 Self-Verifying Data Oracle Model Verdict Test Inputs Test = Test Results Precondition Data Precondition Program State Environmental Inputs Test Oracle System Under Test Postcondition Data Postcondition Program State Environmental Results 73 Model Based Oracle Strategy Identify and describe a [machine readable] model of some aspect of the SUT Design tests using the model as input Implement the tests (reading in the model) Update the model or use multiple models as needed 74 Page 37

38 Model Based Oracle In Context May be useful: Simple model State machine or screen hierarchy defined States can be monitored Events can be reliably generated Usually avoided: Complex model Unstable or poorly defined model State of program not easily determined Events are difficult to generate or manage 75 Model-Based Tests and Oracles Describe a state model of the software (e.g., menu tree) Describe the model of the SUT in a machine readable form Write a program to use the model as a basis for testing Use the model as a guide for generation of events Generate random events / inputs to the program The program responds by performing a function and/or moving to a new state The test also uses the model as an oracle to check the correctness of responses Douglas Hoffman Copyright , SQM, LLC. Slide 76 Page 38

39 Some Types of Models State Diagrams State Transition Tables Flow Charts Use Cases Data Flow Diagrams Entity-Relationship Diagrams Class Diagrams Activity Diagrams 77 State Transition Table Example
[State diagram: states S1 through S4 connected by events E1 through E6]
Initial State | Event | Result | New State
S1 | E1 | <none> | S1
S1 | E2 | logged in | S2
S1 | E3 | SU log in | S3
S2 | E4 | | S4
S2 | E5 | <none> | S2
S2 | E6 | logged out | Exit
S3 | E4 | | S4
S3 | E5 | admin | S3
S3 | E6 | logged out | Exit
S4 | E4 | |
S4 | E6 | | Exit
Douglas Hoffman Copyright, SQM, LLC. Slide 78 Page 39

40 State Transition Table Example 1. Describe the state transition table in machine readable form 2. Design a test exercise applying the table (e.g., walk all transitions, random walk, or generate illegal events) 3. Verify transitions within the test by sensing the state 79 Test Inputs Model Based Oracle Model Precondition Data Precondition Program State Environmental Inputs Test Oracle Model of SUT System Under Test Verdict Test Results Postcondition Data Postcondition Program State Environmental Results 80 Page 40
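
A minimal sketch of the procedure above: the transition table lives in a machine-readable dictionary, the test takes a random walk over legal events, and after each event it senses the SUT's state and compares it with what the model predicts. The `sut` object with `fire`, `current_state`, and `reset` methods is a hypothetical driver, and the table entries are invented placeholders rather than a real application's model.

```python
import random

# Machine-readable transition table: (state, event) -> expected new state.
# Entries are invented placeholders loosely following the login-style example above.
MODEL = {
    ("S1", "E2"): "S2",    # ordinary log in
    ("S1", "E3"): "S3",    # superuser log in
    ("S2", "E6"): "Exit",  # log out
    ("S3", "E6"): "Exit",  # log out
}

def random_walk(sut, steps=1000, seed=7):
    """Model-based oracle: fire random legal events and check the SUT reaches the state the model predicts."""
    rng = random.Random(seed)
    state = "S1"
    for _ in range(steps):
        legal = [event for (s, event) in MODEL if s == state]
        if not legal:                        # terminal state reached: restart the walk
            sut.reset()
            state = "S1"
            continue
        event = rng.choice(legal)
        expected = MODEL[(state, event)]
        sut.fire(event)                      # hypothetical driver call that injects the event
        actual = sut.current_state()         # hypothetical probe that senses the SUT's current state
        assert actual == expected, f"{state} --{event}--> {actual}, but the model predicts {expected}"
        state = expected
```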

41 Hand-Crafted Oracle Strategy Expected result is carefully crafted (or selected) with the input values Input and result are specified together Oracle is frequently built into the test The approach is most often taken for regression tests in complex SUTs 81 Hand Crafted Oracle In Context May be useful: Complex function Special cases are easily identified Usually avoided: Outcomes extremely difficult or time consuming to predict Other oracles are available 82 Page 41

42 Hand-Crafted Oracle Example Sales order processing: 1. Choose specific values for order information (e.g., boundary cases) 2. Identify what the order should look like and how the SUT should react 3. Verify actual vs. expected when the test is run 83 Hand-Crafted Oracle Model Test Inputs Precondition Data Test Oracle Verdict Test Results Postcondition Data Precondition Program State Environmental Inputs System Under Test Postcondition Program State Environmental Results 84 Page 42

43 Heuristic Oracle Strategy A Heuristic Oracle uses an approximation (a rule of thumb) or partial information that supports but does not mandate a given conclusion. We may use either exact or probabilistic evaluation. The heuristic doesn't tell us that the program works correctly, but it can tell us that the program is misbehaving or that something needs more investigation. This can be a cheap way to spot errors early in testing Note that most heuristics are prone to both Type I and Type II errors Note: All oracles are heuristic WRT pass/fail. Heuristic oracles explicitly apply heuristic techniques. 85 Heuristic Oracle In Context May be useful: Speed of comparison is very important Exact results are difficult to generate Few expected exception cases Simple heuristic available Usually avoided: Used as the only oracle Heuristic is too complex More exact checks are required Too many expected exceptions 86 Page 43

44 Some Useful Oracle Heuristics Similar results that don't always work Less exact computations (16 bits instead of 64) Statistical properties Subsets Break into ranges Other relationships not explicit in SUT Date/transaction number One home address Timings General characteristics Harmonic or repeating patterns 87 Simple Heuristic Oracle Example Data communications packets: 1. Generate random packets 2. Compute CRC (cyclic redundancy check) value 3. Append the CRC to each packet 4. Transmit packets 5. Compute CRC upon receipt 6. Value should correspond to appended CRC 88 Page 44
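
The packet CRC example above maps directly onto a few lines of Python (3.9+ for `randbytes`); the CRC is a heuristic oracle in the sense that a mismatch proves corruption, while a match is only strong evidence, not proof, that the payload arrived intact.

```python
import random
import zlib

def make_packet(rng, max_payload=1024):
    """Generate a random payload and append its CRC-32 so the receiver can check integrity."""
    payload = rng.randbytes(rng.randint(1, max_payload))
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_packet(packet):
    """Heuristic oracle: recompute the CRC on receipt and compare with the appended value."""
    payload, received_crc = packet[:-4], int.from_bytes(packet[-4:], "big")
    return zlib.crc32(payload) == received_crc

rng = random.Random(2024)
for _ in range(1000):
    packet = make_packet(rng)
    # ... the packet would be transmitted through the data link under test here ...
    assert check_packet(packet)
```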

45 Heuristic Oracle Example Employee database integrity check: 1. Employee older than dependents 2. Employee start date after company established 3. Employee age greater than some minimum 4. Zip code (if USA) has 5 or 9 digits 5. SSN has 9 digits 6. No duplicate SSN entries 89 Heuristic Oracle Model Test Inputs Precondition Data Test Oracle Verdict Test Results Postcondition Data Precondition Program State Environmental Inputs System Under Test Postcondition Program State Environmental Results 90 Page 45
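
The database integrity checks above might be coded roughly as follows; the record layout, field names, and the company-founding date are assumptions for illustration, and each rule flags a record for investigation rather than proving it correct.

```python
import re
from datetime import date

COMPANY_FOUNDED = date(1998, 1, 1)   # assumed stand-in for the "company established" date

def employee_record_problems(emp, dependents, seen_ssns):
    """Heuristic integrity checks over one employee record; returns a list of suspicious findings."""
    problems = []
    if any(dep["birth_date"] <= emp["birth_date"] for dep in dependents):
        problems.append("dependent is not younger than the employee")
    if emp["start_date"] < COMPANY_FOUNDED:
        problems.append("start date is before the company was established")
    if emp.get("country") == "USA" and not re.fullmatch(r"\d{5}(\d{4})?", emp["zip"]):
        problems.append("zip code is not 5 or 9 digits")
    if not re.fullmatch(r"\d{9}", emp["ssn"]):
        problems.append("SSN does not have 9 digits")
    if emp["ssn"] in seen_ssns:
        problems.append("duplicate SSN")
    seen_ssns.add(emp["ssn"])
    return problems
```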

46 Statistical Oracle Strategy Principal idea: Use high-volume random tests Results checking based on population statistical characteristics The oracle: Computes the statistical characteristics from the inputs Computes the statistical characteristics from the outcomes Compares the statistics in light of the expected transformation through the SUT 91 Statistical Oracle In Context May be useful: Inputs and outcomes can be statistically counted Strong correlation between input and outcome population statistics Population statistics are easily computable Usually avoided: Used as the only oracle Input is not countable for statistical profiling Input/outcome populations are not statistically related 92 Page 46

47 Simple Statistical Oracle Computing the right state taxes: 1. Generate random items, quantities, locations, etc. for purchase transactions 2. Track total amounts of purchases and taxes collected 3. Compute average tax across all locations 4. (total taxes collected) should equal (total purchases * average taxes) 93 Test Inputs Precondition Data Statistical Oracle Model Test Oracle Statistics Test Results Postcondition Data Precondition Program State Environmental Inputs System Under Test Postcondition Program State Environmental Results 94 Page 47
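
A hedged sketch of the tax example above: because the purchase amounts are generated independently of the location, the total tax collected should track total purchases times the average rate, within a statistical tolerance. The rate table and the `sut_compute_tax` stub are placeholders for the real system under test.

```python
import random

TAX_RATES = {"CA": 0.0725, "TX": 0.0625, "NY": 0.04, "OR": 0.0}   # illustrative rates only

def sut_compute_tax(amount, state):
    """Stand-in for the SUT's tax calculation; the real system would be called here."""
    return amount * TAX_RATES[state]

def statistical_tax_check(transactions=100_000, seed=5, tolerance=0.01):
    """Statistical oracle: total tax should be close to total purchases times the average rate."""
    rng = random.Random(seed)
    total_purchases = total_tax = 0.0
    for _ in range(transactions):
        state = rng.choice(list(TAX_RATES))           # location chosen independently of amount
        amount = rng.uniform(1.0, 500.0)
        total_purchases += amount
        total_tax += sut_compute_tax(amount, state)
    average_rate = sum(TAX_RATES.values()) / len(TAX_RATES)
    expected_tax = total_purchases * average_rate
    assert abs(total_tax - expected_tax) / expected_tax < tolerance, (total_tax, expected_tax)

statistical_tax_check()
```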

48 Computational Oracle Strategy Principal idea: Perform the reverse (inverse) function Identify whether the outcomes are possible The oracle: Applies the reversal function on the outcomes Computes the possible starting points for the inputs Compares the actual inputs to check if they are in the set of possible starting points Is subject to common-mode problems May miss obvious errors 95 Computational Oracle In Context May be useful: Inverse function exists Round-tripping data conversions Data conversions are not lossy functions Usually avoided: No inversion possible Inversion loses too much fidelity Inversion too complex or time consuming 96 Page 48

49 Simple Computational Oracle Example Computing the square root: 1. Generate a random input value (x) 2. Compute y = square root (x) 3. Compute z = y² 4. Check x = z 97 Computational Oracle Example Splitting a table: 1. Generate and populate a table 2. Select some row in the table 3. Split the table 4. The oracle reverses the split (rejoins the two tables) 5. Check the original and final tables are the same 98 Page 49
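
The square-root example above becomes a computational (inverse-function) oracle in just a few lines; `math.sqrt` stands in for the function under test, and the round-trip comparison uses a floating-point tolerance rather than exact equality.

```python
import math
import random

def inverse_oracle_sqrt(sut_sqrt=math.sqrt, trials=100_000, seed=3):
    """Computational oracle: square the SUT's answer and check it lands back on the original input."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.uniform(0.0, 1e12)
        y = sut_sqrt(x)          # math.sqrt stands in for the square-root function under test
        z = y * y                # apply the inverse function to the outcome
        assert math.isclose(x, z, rel_tol=1e-12), (x, y, z)

inverse_oracle_sqrt()
```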

50 Computational Oracle Model Test Inputs Precondition Data Test Oracle Verdict Test Results Postcondition Data Precondition Program State Environmental Inputs System Under Test Postcondition Program State Environmental Results 99 Human Oracle Strategy Set a person in front of the SUT to observe Human uses their judgment to decide the verdict Works for manual or automated exercises Works for scripted or unscripted tests Note that a human oracle is applied whenever any other oracle identifies a potential bug 100 Page 50

51 Human Oracle In Context May be useful: Outcome medium is not machine readable Input/outcome relationship is very complex Insufficient time to automate the oracle Usually avoided: High volume of comparisons Very repetitive checking High level of detail or specificity required 101 Test Inputs Human Oracle Model Verdict Test Results Precondition Data Test Oracle Postcondition Data Precondition Program State Environmental Inputs System Under Test Postcondition Program State Environmental Results 102 Page 51

52 Context and Oracles Exercise What is your context? What are we testing for? Which outcomes are interesting? How will we know? How can we automate the stimulus (inputs)? How can we automatically detect the outcome(s)? What method of verdict generation? 103 Software Test Automation: Test Comparators 104 Page 52

53 Evaluation Mechanisms Deterministic: The comparator (or oracle) accepts the inputs and/or the test outputs to compare for a match between expected and actual behaviors If they do not match, investigation is required Non-Deterministic: The comparator (or oracle) accepts the inputs and/or the test outputs and evaluates whether the results are plausible If the outcomes are implausible or not close enough, investigation is required 105 Deterministic Evaluation Comparison for equality of two data sets Saved result from a previous test Parallel function: previous version, competitor's product, standard reference function, custom oracle 106 Page 53

54 Deterministic Evaluation Comparison for outcome correctness Inverse function mathematical inverse operational inverse Useful mathematical rules Deterministic incidental or informative attributes Expected result embedded in the data self-descriptive (blue) embedded key (CRC, seed) cyclic data 107 Non-Deterministic Evaluation Approximate comparisons or similarities Compare insufficient attributes use 16 bit functions to check 64 bit functions testable pattern for a set of values for a variable one or a few primary attributes of outcomes Statistical distributions test for outliers, means, predicted distribution statistical properties (predicted population statistics) comparison of correlated variables populations 108 Page 54

55 Non-Deterministic Evaluation Approximate comparisons Approximate models At the end of the exercise Z should be true X is usually greater than Y precondition = postcondition Fuzzy comparisons within a range of values bitmaps statistical analysis (likely properties) shading CRC type summaries 109 Non-Deterministic Evaluation Similarities Incidental or informative attributes correlations (time of day and sales order number) relative duration (e.g., within factor of x/3 to 3x) likely sequences size or shape (number of digits/characters) uniqueness of SSN Reordered sets (where order matters) itemized list of sales items (taxable and nontaxable) reorder asynchronous events for comparison 110 Page 55

56 Software Test Automation: Some Parting Words about the Design of Automated Tests 111 Extends our reach Augments human capabilities Does something different every time Is heavily dependent on oracles May access internal information Can surface bugs that we didn't consider May check invariants 112 Page 56

57 High Volume Random Tests Principal idea: High-volume testing using varied inputs Results checking based on individual results or the population's statistical characteristics The fundamental goal is to have a huge number of tests The individual tests may not be all that powerful or compelling Input is varied for each step Individual results may not be checked for correctness; sometimes population statistics are used The power of the approach lies in the large number of tests 113 Low Volume Exploratory Tests Principal idea: One-at-a-time testing using varied inputs Use automation to make exploration easy The fundamental goal is to enable exploration Variations on a theme (modify automated tests) Quick-and-dirty generation of tests/data/comparisons Background checking Memory leak detection File modification Etc. Command line or UI based variations 114 Page 57

58 Good Automated Tests Start with a known state Build variation into the tests Plan for the capture of data on error Check for errors during the test run Capture information when an error is noticed Minimize error masking and cascading 115 Start With a Known State Data Load preset values in advance of testing Reduce dependencies on other tests Program State External view Internal state variables Environment Decide on desired controlled configuration Capture relevant session information 116 Page 58

59 Build Variation Into the Tests Dumb monkeys Data driven tests Pseudo-random event generation Model driven and model based automation Variations on a theme Configuration variables 117 Plan For Capture of Data Know what data may be important to identify and fix errors When necessary, capture prospective diagnostic information before an error is detected Include information beyond inputs and outcomes Check as many things as possible Design tests to minimize changes after errors occur 118 Page 59

60 Check for Errors Periodically check for errors as the test runs Document expectations in the tests Capture prospective diagnostic information before an error is detected Capture information when the error is found (don't wait) Results Other domains Dump the world Check as many things as possible 119 Capture Information When An Error is Noticed When something unexpected is detected: dump the world All interesting information Let the users of the information determine what's interesting Since the developers are typically the information users, enlist their help to automate information gathering OR Freeze and wait for a person When interesting information is expensive to capture When there isn't time to create automated capture routines 120 Page 60

61 Don't Encourage Error Masking or Error Cascading Session runs a series of tests A test does not run to normal completion Error masking occurs if testing stops Error cascading occurs if one or more downstream tests fail as a consequence of this test failing Impossible to avoid altogether Should not design automated tests that unnecessarily cause either situation 121 Recapping Use automation to explore (not just repeat) Extend your reach into the system Think of models for test execution Automate based on available oracles Use different types of oracles Design good automated tests 122 Page 61

62 Page 62


More information

A Case Study of Model-Based Testing

A Case Study of Model-Based Testing A Case Study of Model-Based Testing Using Microsoft Spec Explorer and Validation Framework Carleton University COMP 4905 Andrew Wylie Jean-Pierre Corriveau, Associate Professor, School of Computer Science

More information

Sample Question Paper. Software Testing (ETIT 414)

Sample Question Paper. Software Testing (ETIT 414) Sample Question Paper Software Testing (ETIT 414) Q 1 i) What is functional testing? This type of testing ignores the internal parts and focus on the output is as per requirement or not. Black-box type

More information

Topics in Software Testing

Topics in Software Testing Dependable Software Systems Topics in Software Testing Material drawn from [Beizer, Sommerville] Software Testing Software testing is a critical element of software quality assurance and represents the

More information

The Salesforce Migration Playbook

The Salesforce Migration Playbook The Salesforce Migration Playbook By Capstorm Table of Contents Salesforce Migration Overview...1 Step 1: Extract Data Into A Staging Environment...3 Step 2: Transform Data Into the Target Salesforce Schema...5

More information

Finding Firmware Defects Class T-18 Sean M. Beatty

Finding Firmware Defects Class T-18 Sean M. Beatty Sean Beatty Sean Beatty is a Principal with High Impact Services in Indianapolis. He holds a BSEE from the University of Wisconsin - Milwaukee. Sean has worked in the embedded systems field since 1986,

More information

This course supports the assessment for Scripting and Programming Applications. The course covers 4 competencies and represents 4 competency units.

This course supports the assessment for Scripting and Programming Applications. The course covers 4 competencies and represents 4 competency units. This course supports the assessment for Scripting and Programming Applications. The course covers 4 competencies and represents 4 competency units. Introduction Overview Advancements in technology are

More information

Standard Glossary of Terms used in Software Testing. Version 3.2. Foundation Extension - Usability Terms

Standard Glossary of Terms used in Software Testing. Version 3.2. Foundation Extension - Usability Terms Standard Glossary of Terms used in Software Testing Version 3.2 Foundation Extension - Usability Terms International Software Testing Qualifications Board Copyright Notice This document may be copied in

More information

Write perfect C code to solve the three problems below.

Write perfect C code to solve the three problems below. Fall 2017 CSCI 4963/6963 Week 12 David Goldschmidt goldschmidt@gmail.com Office: Amos Eaton 115 Office hours: Mon/Thu 1:00-1:50PM; Wed 1:00-2:50PM Write perfect C code to solve the three problems below.

More information

ADVANCED DIGITAL IC DESIGN. Digital Verification Basic Concepts

ADVANCED DIGITAL IC DESIGN. Digital Verification Basic Concepts 1 ADVANCED DIGITAL IC DESIGN (SESSION 6) Digital Verification Basic Concepts Need for Verification 2 Exponential increase in the complexity of ASIC implies need for sophisticated verification methods to

More information

Lee Copeland.

Lee Copeland. Lee Copeland lee@sqe.com SQE 2015 What Is An Innovation? in no va tion (ĭn'ə-vā'shən) 1. Something new or different 2. Something newly introduced or adopted 3. A creation (a new device or process) resulting

More information

Specifying and Prototyping

Specifying and Prototyping Contents Specifying and Prototyping M. EVREN KIYMAÇ 2008639030 What is Specifying? Gathering Specifications Specifying Approach & Waterfall Model What is Prototyping? Uses of Prototypes Prototyping Process

More information

Learning outcomes. Systems Engineering. Debugging Process. Debugging Process. Review

Learning outcomes. Systems Engineering. Debugging Process. Debugging Process. Review Systems Engineering Lecture 9 System Verification II Dr. Joanna Bryson Dr. Leon Watts University of Bath Department of Computer Science 1 Learning outcomes After both lectures and doing the reading, you

More information

Bridge Course On Software Testing

Bridge Course On Software Testing G. PULLAIAH COLLEGE OF ENGINEERING AND TECHNOLOGY Accredited by NAAC with A Grade of UGC, Approved by AICTE, New Delhi Permanently Affiliated to JNTUA, Ananthapuramu (Recognized by UGC under 2(f) and 12(B)

More information

Software Quality. Chapter What is Quality?

Software Quality. Chapter What is Quality? Chapter 1 Software Quality 1.1 What is Quality? The purpose of software quality analysis, or software quality engineering, is to produce acceptable products at acceptable cost, where cost includes calendar

More information

Innovations in Test Automation When Regression Testing is Not Enough

Innovations in Test Automation When Regression Testing is Not Enough Innovations in Test Automation When Regression Testing is Not Enough John Fodeh Cognizant Technology Solutions john.fodeh@cognizant.com HP Test Brugergruppen Konference d. 11-4-2013 Outline Innovation

More information

Lethbridge/Laganière 2005 Chapter 9: Architecting and designing software 6

Lethbridge/Laganière 2005 Chapter 9: Architecting and designing software 6 Trying to deal with something big all at once is normally much harder than dealing with a series of smaller things Separate people can work on each part. An individual software engineer can specialize.

More information

Chapter 2.6: Testing and running a solution

Chapter 2.6: Testing and running a solution Chapter 2.6: Testing and running a solution 2.6 (a) Types of Programming Errors When programs are being written it is not surprising that mistakes are made, after all they are very complicated. There are

More information

Ready to Automate? Ready to Automate?

Ready to Automate? Ready to Automate? Bret Pettichord bret@pettichord.com www.pettichord.com 1 2 1 2. Testers aren t trying to use automation to prove their prowess. 3 Monitoring and Logging Diagnostic features can allow you to View history

More information

How to Break Software by James Whittaker

How to Break Software by James Whittaker How to Break Software by James Whittaker CS 470 Practical Guide to Testing Consider the system as a whole and their interactions File System, Operating System API Application Under Test UI Human invokes

More information

CSE 440: Introduction to HCI User Interface Design, Prototyping, and Evaluation

CSE 440: Introduction to HCI User Interface Design, Prototyping, and Evaluation CSE 440: Introduction to HCI User Interface Design, Prototyping, and Evaluation Lecture 11: Inspection Tuesday / Thursday 12:00 to 1:20 James Fogarty Kailey Chan Dhruv Jain Nigini Oliveira Chris Seeds

More information

Ext3/4 file systems. Don Porter CSE 506

Ext3/4 file systems. Don Porter CSE 506 Ext3/4 file systems Don Porter CSE 506 Logical Diagram Binary Formats Memory Allocators System Calls Threads User Today s Lecture Kernel RCU File System Networking Sync Memory Management Device Drivers

More information

TestComplete 3.0 Overview for Non-developers

TestComplete 3.0 Overview for Non-developers TestComplete 3.0 Overview for Non-developers Copyright 2003 by Robert K. Leahey and AutomatedQA, Corp. All rights reserved. Part : Table of Contents Introduction 1 About TestComplete 1 Basics 2 Types of

More information

CONFERENCE PROCEEDINGS QUALITY CONFERENCE. Conference Paper Excerpt from the 28TH ANNUAL SOFTWARE. October 18th 19th, 2010

CONFERENCE PROCEEDINGS QUALITY CONFERENCE. Conference Paper Excerpt from the 28TH ANNUAL SOFTWARE. October 18th 19th, 2010 PACIFIC NW 28TH ANNUAL SOFTWARE QUALITY CONFERENCE October 18th 19th, 2010 Conference Paper Excerpt from the CONFERENCE PROCEEDINGS Permission to copy, without fee, all or part of this material, except

More information

Chapter 9 Quality and Change Management

Chapter 9 Quality and Change Management MACIASZEK, L.A. (2007): Requirements Analysis and System Design, 3 rd ed. Addison Wesley, Harlow England ISBN 978-0-321-44036-5 Chapter 9 Quality and Change Management Pearson Education Limited 2007 Topics

More information

Black Box Software Testing (Academic Course - Fall 2001) Cem Kaner, J.D., Ph.D. Florida Institute of Technology

Black Box Software Testing (Academic Course - Fall 2001) Cem Kaner, J.D., Ph.D. Florida Institute of Technology Black Box Software Testing (Academic Course - Fall 2001) Cem Kaner, J.D., Ph.D. Florida Institute of Technology Section: 24 : Managing GUI Automation Contact Information: kaner@kaner.com www.kaner.com

More information

UNIT-4 Black Box & White Box Testing

UNIT-4 Black Box & White Box Testing Black Box & White Box Testing Black Box Testing (Functional testing) o Equivalence Partitioning o Boundary Value Analysis o Cause Effect Graphing White Box Testing (Structural testing) o Coverage Testing

More information

Pearson Education 2007 Chapter 9 (RASD 3/e)

Pearson Education 2007 Chapter 9 (RASD 3/e) MACIASZEK, L.A. (2007): Requirements Analysis and System Design, 3 rd ed. Addison Wesley, Harlow England ISBN 978-0-321-44036-5 Chapter 9 Quality and Change Management Pearson Education Limited 2007 Topics

More information