QA Best Practices: A training that cultivates skills for delivering quality systems
Dixie Neilson, QA Supervisor
Lynn Worm, QA Supervisor
Maheen Imam, QA Analyst
Information Technology for Minnesota Government mn.gov/mnit
Introduction Dixie K Neilson: QA Supervisor, MNIT DHS Lynn Worm: QA Supervisor, MNIT DHS Maheen Imam: QA Analyst, MNIT DHS 11/6/2018 2
Getting the Instructions Right
What are QA Best Practices
Overview Consistent, yet flexible approach Set of QA testing objectives and deliverables Follows recommended processes Focus is on quality control
SDLC QA Phases In Relation to PMO and SDLC
QA Best Practice Phases Following the SDLC guideline in the graphic, the QA Best Practices (QABP) are divided into four phases:
PLANNING
SDLC Phases QA Phases In Relation to PMO and SDLC
Kick-Off Meeting Review Project Artifacts Participants Project QA lead BA Project Manager Developer Business
Determine the test team Project QA lead determines the: Scheduling Estimation Duration and Size Resources Performance, Automation and Accessibility
Scheduling is a breakdown of tasks and deliverables within a timeframe
Estimation The QA team must provide estimates for budgeting and scheduling of the software testing effort QA needs to understand the size of the project being estimated
Duration Duration is a very important part of Estimation Duration lets the Program Manager and Project Managers know how many resources are needed
Estimation vs Duration Determines how many QA resources are needed for the testing effort Estimate 6.5 hours per day of actual QA time per person Take the project's LOE Development hours and divide by 6.5 hours This result will give the total number of days needed to complete the testing effort
Estimation vs Duration Example Using a calendar, mark the days when QA members are out of the office due to holidays, vacation, or other reasons (e.g., training) during the release schedule LOE Dev hours = 580 QA hours in a day = 6.5 580 / 6.5 = 89.2, so plan for 90 working days to complete the QA tasks
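The arithmetic above can be sketched as a small script. This is an illustrative calculation only; it assumes the slide's 6.5 productive QA hours per person per day, and note that 580 / 6.5 works out to about 89.2, so rounding up to whole working days gives 90.

```python
import math

def qa_duration_days(loe_dev_hours: float, hours_per_day: float = 6.5) -> int:
    """Working days needed for the QA effort, rounded up to whole days."""
    return math.ceil(loe_dev_hours / hours_per_day)

# Example from the slide: 580 LOE development hours at 6.5 QA hours/day.
print(qa_duration_days(580))  # 580 / 6.5 = 89.2 -> 90 working days
```

Days marked off for holidays and vacation would then be added on top of this figure to get the calendar duration.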
Resources Identify the skills needed for the testing effort Acquire the team members who possess those skills Train the assigned resources, if necessary
Requirements Review Team Dev and QA
Requirement Reviews When do we review Testable Requirements?
Requirements What is a Testable Requirement? The business team has knowledge of what shall / shall not be required in order to meet the end users' needs. The Business Analyst Lead and/or Business Analyst uses that knowledge to write requirements Testable requirements have been broken down so they clearly define what shall / shall not occur.
Testable Requirements Most testable requirements can be described in terms where: The statement describes the capabilities or characteristics of a system and data elements (i.e., inputs, such as customer number or account number) in order for it to have value to the end user. Requirements are code-able, as they tell the development team what to code. Requirements are testable, as they tell the testing team what to test.
Why We Need Testable Requirements 1. Software development can fail 2. Test execution can fail
Why Testable Requirements are Essential When testable requirements are written, the development and test teams gain a better understanding of what to code, what to test, and how to test it. This provides the expected results of each test. It also gives the QA team a roadmap for creating negative test cases
Tips for Writing Requirements Look for complete sentences to describe the requirement: Avoid ambiguous text Look for consistent writing Requirements should be measurable They should be finite
Tips for Reviewing Requirements The requirements shall be traced to test cases. Each requirement shall be numbered. Ensure requirements do not contradict each other. Look for specifics when describing processing time (e.g., page shall display within 15 seconds)
Requirements: The Good, The Bad, and The Ugly
An Ugly Requirement: Clicking the login button should take the user to a page in the app.
A Bad Requirement: Clicking the Login button should take the user to the Home page in the app if their login name and password are correct.
A Good Requirement: The Home Page shall display when user has inputted a valid login name and associated valid password, and clicked the Login button.
Clearly Define Requirements Developers and testers rely on BAs to write requirements that are: clear concise correct unambiguous
Who Approves Requirements?
Design Review Team BA and QA
QA Planning Phase Primary Deliverables: QA Test Strategy and QA Test Plan Responsibilities: Program QA Lead and Project QA Lead Role
QA Test Strategy Definition: The Strategy is the overarching design of what the testing effort shall be for the project. QA documentation is like an umbrella: it covers the strategy for all releases.
QA Test Strategy Guidelines Start early Ensure the strategy provides general statements (leave the details for the QA Test Plan) Review the QA Test Strategy often to see if scope has changed Send the QA Test Strategy as an attachment to approvers When an Approver replies with suggested edits, review the suggestions and apply changes if needed
QA Test Planning Test planning is a major component of testing software, and includes: Selecting appropriate techniques for validation Assessing risks or constraints Planning how to minimize those risks / constraints Outlining the testing schedule and resources Entrance and exit criteria
QA Test Planning Advantages Improves test coverage Improves test efficiencies Improves communications Enables feedback on the plan Provides education about relevant test details Improves accountability
QA Test Plan The QA Test Plan: Provides the details of how QA testing shall be accomplished for a release. Begins early in the development process. Defines the processes necessary to ensure the QA tests are repeatable, controllable, and that adequate test coverage has been executed Is written in conjunction with the QA Test Strategy.
What is NOT in the QA Test Plan The QA Test Plan does NOT contain: A repository of every testing standard A list of the test cases Business validation testing information (BAs / Business shall create their own documentation)
QA Test Plan Guidelines Start early Keep the QA Test Plan flexible Review the QA Test Plan often Keep the QA Test Plan concise and readable
Test Construction
Test Construction Phase of SDLC QA Phases In Relation to PMO and SDLC
QA Construction Phase QA Test case scenario QA Test case writing Add details to the QA test plan
QA Test Scenarios Test scenarios tell us what is to be tested: One Test Scenario has many Test Cases
QA Test Case Construction Test Cases are a set of inputs and outputs given to the system to verify or validate an expected result based upon one or more Requirements A QA Test Case provides the detailed steps of how testing shall be executed, followed by the Expected Result based upon at least one Requirement
Designing Test Cases Identify the Test Conditions Design Test Techniques Build the Test Cases Test Coverage Write a Test Case
Identifying Test Conditions Five methods to help determine testable conditions include: 1. System Specifications (Specification decomposition) 2. Production environment (Population Analysis) 3. Predefined List of Typical Conditions to be Tested (Test Transactions Types) 4. Business Case Analysis (Business Process Analysis) 5. Structural Analysis
Method 1. System Specifications In system specifications, look for items such as:
Method 2. Production Environment This data represents the types of transactions that will be processed by the applications under test.
Method 3. Predefined Lists of Typical Conditions The third approach to identifying test conditions is based on the reality that software has certain characteristics unique to the discipline
Transaction Types
More Transaction Types
Method 4. Business Case Analysis The fourth approach is to review real world business cases, where they have been broken down to their lowest levels
Method 5. Structural Analysis Structural Test Design Techniques The final approach focuses on white box testing: validating, at the application or code level, that the system does what it is supposed to do.
DESIGN TEST CASES
Design Test Techniques The test objectives established in the Test Plan are now decomposed into individual test conditions Those test conditions are further broken down into individual test cases Those tests are correlated to a test matrix to validate the software function works as specified
Parts of a Test Case Test Objective Test Condition Operator/User Action Input specifications Output specifications Pass or Fail Comments
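One illustrative way to model these parts in code; the field names below are our own sketch of the slide's list, not a mandated template or a real tool's schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TestCase:
    """Container for the parts of a test case listed above."""
    objective: str
    condition: str
    user_action: str
    input_spec: str
    output_spec: str
    passed: Optional[bool] = None  # Pass or Fail; None until executed
    comments: str = ""

# Hypothetical example values for illustration.
tc = TestCase(
    objective="Verify login",
    condition="Valid credentials",
    user_action="Click [Login]",
    input_spec="Login Name = xyz; Password = 123",
    output_spec="Home page displays",
)
print(tc.passed)  # None: the test case has not been executed yet
```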
Build the Test Cases 1. Identify the conditions to be tested 2. Rank test conditions 3. Select conditions for testing 4. Determine correct results of processing 5. Create test cases
Test Coverage Coverage is the degree to which the test process has exercised the application against its system and business requirements.
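As a rough sketch, requirement coverage can be computed as the share of requirements with at least one linked test case. The IDs and the function below are illustrative, not an official metric.

```python
def coverage_pct(requirements, tested):
    """Percentage of requirements that have at least one linked test case."""
    covered = [r for r in requirements if r in tested]
    return 100.0 * len(covered) / len(requirements)

# Hypothetical requirement IDs; REQ-2 has no linked test case.
reqs = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"]
print(coverage_pct(reqs, {"REQ-1", "REQ-3", "REQ-4"}))  # 75.0
```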
Write a Test Case How do I get started writing test cases? You need to know who the actor (user) is for the test case. For example, is the Consumer executing the steps or a Case Worker? This will make a difference when you write your expectations, as permissions for one user may not be the same for the other user. Select terminologies that are consistent throughout the test case, e.g., Click, Display, Advance/Return, Navigation/Navigate, Press, Select, Successful(ly), Valid/Invalid, etc.
Test Case Writing: Boundary Tests If minimum / maximum values are applied to a field, you'll write: 1. A test case that will test a value less than the minimum value, 2. Another test at the minimum value, 3. Another test at the maximum value, and 4. Another test over the maximum value
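The four boundary checks above can be sketched as automated assertions. The 1-to-30 range below is an assumed example for illustration, not taken from a real requirement.

```python
# Assumed field limits for illustration only.
MIN_VAL, MAX_VAL = 1, 30

def in_range(value: int) -> bool:
    """Stand-in for the field validation under test."""
    return MIN_VAL <= value <= MAX_VAL

# The four boundary test cases: below min, at min, at max, above max.
assert in_range(MIN_VAL - 1) is False  # 1. less than the minimum -> reject
assert in_range(MIN_VAL) is True       # 2. at the minimum -> accept
assert in_range(MAX_VAL) is True       # 3. at the maximum -> accept
assert in_range(MAX_VAL + 1) is False  # 4. over the maximum -> reject
```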
Test Case Writing: GUI Tests If the field is alpha only, input special characters and numbers; expect an error. If the field is numeric only, input special and alpha characters; expect an error. If the field length is 30, input characters less than 30 (expected result = pass), equal to 30 (pass), and more than 30 (error shall display)
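A minimal sketch of these field checks, assuming simple stand-in validator functions (the function names are ours, not from a real GUI framework):

```python
def alpha_only_ok(text: str) -> bool:
    """Stand-in validator for an alpha-only field."""
    return text.isalpha()

def numeric_only_ok(text: str) -> bool:
    """Stand-in validator for a numeric-only field."""
    return text.isdigit()

def length_ok(text: str, limit: int = 30) -> bool:
    """Stand-in validator for a field with a 30-character limit."""
    return len(text) <= limit

assert not alpha_only_ok("abc123!")        # digits/specials -> error expected
assert not numeric_only_ok("12a#")         # alpha/specials -> error expected
assert length_ok("x" * 29)                 # under the limit -> pass
assert length_ok("x" * 30)                 # at the limit -> pass
assert not length_ok("x" * 31)             # over the limit -> error expected
```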
When to Start Writing Test Cases When QA has attended the Requirements review and the Requirements are approved, QA begins writing test cases. Design reviews also help QA understand how the code shall work
Test Case Writing Style
Test Steps are written in present tense, for example:
Log into the app with the following valid credentials: Login Name = xyz; Password = 123. Validate the data is successfully inputted.
Click the [Login] button; validate the Home page displays.
From the Home page, click the [Submit] button; validate the transaction screen displays.
Expected Results are written in past tense, for example:
Valid credentials were successfully inputted.
The Home page displayed when user clicked the [Login] button.
The transaction screen displayed when user clicked the [Submit] button.
Writing Steps There should be at least two items per step: one is an action, and the second is the validation / verification. Number each step, where there is only one action per step. If you put too many things in the step and only one piece of it fails, you have to fail the entire step including the part(s) that passed.
Example of Writing Steps
Using the criteria from the previous page, let's say we wrote the step as:
Step 1: Log into the app with the following valid credentials: Login Name = xyz; Password = 123. Click the [Login] button to advance to the Home page. Validate the Home page displays.
That single step bundles several actions and validations, so if any one part fails the whole step must be failed. Therefore, it would be better to write the previous example as:
STEP 1: 1) Log into the app with the following valid credentials: a. Login Name = xyz b. Password = 123 2) Verify the login credentials are successfully inputted.
STEP 2: 1) Click [Login]. 2) Validate the Home page displays.
STEP 3: 1) With the Home page displaying, click [Contact Us]. 2) Validate user advances to the Contact Information screen.
Terminology is Important: Verify and Validate
We use the terms verify and validate when writing test steps (see Terminology section). Each step should contain one of those words. After all, that's what we're testing: validating that the button advanced the user to a specific page or that the text was successfully inputted into the fields, etc.
Examples:
Step 1: 1) Log into the app with the following invalid credentials: a. Login Name = nonuser 2) Input the following valid Password = 123 3) Validate the data is successfully inputted.
Step 2: 1) Click [Login] 2) Validate an error message displayed: Username and/or password are incorrect.
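As a hedged sketch, steps written in this style can be automated directly. The `fake_login` function below is an invented stand-in for the application under test, not a real API; it exists only so the asserts mirror the steps above.

```python
def fake_login(username: str, password: str) -> str:
    """Invented stand-in for the app: returns the page shown after [Login]."""
    if username == "xyz" and password == "123":
        return "Home"
    return "Error: Username and/or password are incorrect."

# Positive case: valid credentials -> validate the Home page displays.
assert fake_login("xyz", "123") == "Home"

# Negative case: invalid login name -> validate an error message displays.
assert fake_login("nonuser", "123").startswith("Error")
```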
Why Write Test Cases in Such Detail? Isn't there a shortcut? It seems very time-consuming to write each step in detail. We write in great detail so that: Anyone can execute the test case Our test steps are repeatable Our test steps can be automated For defects, copy the steps leading up to the defect and paste those steps in the defect work item, so anyone can retest the defect.
Can I Reuse a Test Case? Yes, provided the Tester has used a consistent tool and template with details and proper terminology.
QA Test Closure Depending upon the team dynamics, BAs and/or QA will create a Requirements Traceability Matrix to ensure each requirement is tied to at least one Test Case The Matrix helps to ensure the overall quality of the product being tested
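A minimal sketch of building such a matrix, using invented requirement and test case IDs; real matrices usually live in a test-management tool or spreadsheet.

```python
# Links discovered during test construction: (requirement, test case).
links = [("REQ-1", "TC-01"), ("REQ-1", "TC-02"), ("REQ-2", "TC-03")]
requirements = ["REQ-1", "REQ-2", "REQ-3"]

# Traceability matrix: each requirement mapped to its test cases.
rtm = {req: [tc for r, tc in links if r == req] for req in requirements}

# Requirements with no test case need attention before closure.
untraced = [req for req, tcs in rtm.items() if not tcs]
print(untraced)  # ['REQ-3']
```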
DATA PREPARATION
Data Preparation Phase in SDLC QA Phases In Relation to PMO and SDLC
Data Sets There are three distinct sets of data required to test most applications: Happy path Negative testing Crash the application
Test Data Test data might be created in a variety of ways: Systematic High-volume, randomized Production Data
Considerations When Creating Test Data Ensure the test data represents the real world Ensure data integrity Work to reduce the size of the test data Ensure all test conditions are covered Ensure any security concerns regarding the test data are addressed early in the process Ensure test data is available when needed
Production Data Impediments to using production data for testing: Transactions missing Same transaction used on multiple tests Data Security laws
Data Obfuscation Data obfuscation is the process of de-identifying (masking) data elements The purpose of obfuscation is to protect data that is classified as personally identifiable data, personal sensitive data or commercially sensitive data
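A minimal masking sketch for illustration only: de-identify all but the last few characters of a value. Real obfuscation must satisfy the applicable data-security laws and your agency's standards, and often involves more than character masking (e.g., format-preserving substitution).

```python
def mask(value: str, keep: int = 4, fill: str = "*") -> str:
    """Replace all but the last `keep` characters with a fill character."""
    if len(value) <= keep:
        return fill * len(value)
    return fill * (len(value) - keep) + value[-keep:]

# Hypothetical identifier, masked before it enters a test environment.
print(mask("123-45-6789"))  # *******6789
```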
Manually Created Data
Commonly used in QA. Allows specific data points to be included in datasets that test the function using predefined test conditions.
Advantages:
Test results can be readily checked
Happy path, alternate path, and negative testing data are controlled
Types and combinations of transactions or procedures to be tested are known
Test results are reliable without actually tracing data through the processing stages
Disadvantages:
Test data is valid only for the single application for which it has been specifically created
Test procedures are valid only for a given point in time; therefore, they must be updated to incorporate any occurrences that would affect the validity of the tests (e.g., changes in file structure, statutes, rules, or regulations)
Guidelines: Creating Manual Test Data Choose records that will be changed only by a single condition Predetermine data input and expected results Store a master copy of test data in its initial state so it can be reused throughout the life of the system Add, change and delete test data as the system changes Secure the master set to control changes and prevent inadvertent deletion
Using and Maintaining Test Data Using test data: Determine test cases to be run Add new test data, as needed Backup test data Perform tests Maintaining test data: Remove obsolete data Update to align with current version Test additional functionality Correct errors found in test data Evaluate tests
Archiving/Destroying Test Data Archive test data when it is no longer actively used but still supports artifacts under review Destroy test data that is no longer needed, following security processes
TEST EXECUTION
Test Execution Phase in SDLC QA Phases In Relation to PMO and SDLC
Steps in Test Execution Assess System Readiness Assess QA Test Team Readiness Log QA Testing Progress Review Defects Request Builds
What are Defects? Operationally, it is useful to work with two definitions of a defect: Producer's viewpoint: A product requirement that has not been met, or A product attribute possessed by a product or a function performed by a product that is not in the statement of requirements that define the product Customer's viewpoint: Anything that causes customer dissatisfaction, whether in the statement of requirements or not
Defect Management Objectives QA's main objective of testing is to discover defects When discovered, defects shall be recorded and tracked until appropriate action has been taken QA's job is to find defects as quickly as possible so their impact is minimal General Principles Prevent defects Defect management process is risk driven Integrate defect measurement into the development process As much as possible, the capture and analysis of the information should be automated Improve the process through defect information
The Cost of Quality Whether we build an application that is missing important functionality or discover bugs in Production that require emergency fixes, the cost of quality increases exponentially with the application's progression through the development life cycle and the environment in which the application resides.
Types of Test Execution Issues Issues that can be reported as defects: Smoke test failures Requirements are incomplete, ambiguous, or not testable System/functional tests do not meet requirements Deviation from standards Errors in procedural logic
Steps to Deal With Issues WHAT are the steps to deal with an issue discovered and later resolved? 1. Retest to ensure the issue is repeatable 2. Determine the Severity of the defect (how it affects the testing effort) 3. Create a Defect Report.
Defect Reports Defect reports should include: 1. The title, brief description, who discovered it and the date discovered. 2. Severity (or how it affects testing; i.e., Blocking, Critical, High, Medium, Low), 3. Status (i.e., New, Open, In Progress, RFT, Closed) 4. Environment 5. Build number / name / date 6. System information (i.e., Win 7 with IE 9) 7. Steps to reproduce 8. Actual result 9. Expected result (based upon requirement) 10. Screenshot(s) and/or video(s)
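One illustrative way to capture these fields in code; the class, its defaults, and the severity scale follow the slide's list, not the schema of any real defect-tracking tool.

```python
from dataclasses import dataclass

# Severity scale from the slide.
SEVERITIES = ("Blocking", "Critical", "High", "Medium", "Low")

@dataclass
class DefectReport:
    """Minimal container for the defect-report fields listed above."""
    title: str
    severity: str
    status: str = "New"          # New, Open, In Progress, RFT, Closed
    environment: str = ""
    build: str = ""
    system_info: str = ""
    steps_to_reproduce: str = ""
    actual_result: str = ""
    expected_result: str = ""

    def __post_init__(self):
        if self.severity not in SEVERITIES:
            raise ValueError(f"unknown severity: {self.severity}")

# Hypothetical defect for illustration.
d = DefectReport(
    title="Login error message missing",
    severity="High",
    actual_result="Blank page displayed",
    expected_result="Error message displays (per requirement)",
)
print(d.status)  # New
```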
Defect Management Process
Defect Prevention
Deliverable Baseline Baselines are deliverables: work products that have reached a predefined milestone in development. Baselining is important because it requires an organization to decide both the level of formality that is appropriate and the point in the process when the formality takes effect. A deliverable should be baselined when changes to the deliverable, or defects in the deliverable, can have an impact on deliverables on which other people are working Deliverable baselining includes the following activities: Identify key deliverables Define standards for each deliverable
Test Execution - Defect Discovery
Defect Resolution
QA Test Closure: The QA Status Report
QA Status Reports tell the project team and stakeholders where testing stands:
Green: Able to complete all (or most) of the planned test case testing; minor defects reported and fixed; test execution is on schedule
Yellow: Some issues are not resolved; Blocking / Critical defects are not fixed; however, there are some work-arounds; test execution has slipped from the planned schedule, but appears to be recoverable
Red: System problems; issues are not resolved; Blocking / Critical / High defects are not fixed; there are no work-arounds; the testing effort is behind schedule; the schedule cannot be extended to include resolutions in a timely manner
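The Green / Yellow / Red rules can be sketched as a simple function. The inputs and the decision logic below are simplified assumptions for illustration, not an official MNIT formula; real status calls involve judgment beyond three flags.

```python
def report_status(open_blocking: int, has_workaround: bool,
                  on_schedule: bool) -> str:
    """Simplified status color: illustrative reduction of the slide's rules."""
    if open_blocking == 0 and on_schedule:
        return "Green"   # no blocking defects, execution on schedule
    if open_blocking > 0 and not has_workaround and not on_schedule:
        return "Red"     # unresolved blockers, no work-arounds, behind schedule
    return "Yellow"      # slipped but recoverable, or work-arounds exist

print(report_status(0, True, True))    # Green
print(report_status(2, True, False))   # Yellow
print(report_status(3, False, False))  # Red
```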
QA Status Report Example
QA Test Closure - QA Status Report
Distributing the Status Report Status Reports may be created and distributed daily or weekly, depending upon where the test team is in the testing process
Root Cause Analysis Process Improvement/Root Cause Analysis: Going back in the process to where the defect originated to understand what caused the defect Going back to the validation process, which should have caught the defect earlier Implement process improvements.
QA TEST CLOSURE PHASE
Test Closure Phase in SDLC
Reporting the Testing Results: Test Closure Finalize QA Test Documentation Prepare QA Test Closure report Archive QA Test Artifacts Recommendations for future projects (Lessons Learned)
What's Included The Test Closure reports the results of testing as defined by the Test Plan. It is used as an input for the Go/No-Go meeting. The Test Closure template usually includes: Test Team Test Types QA Test Results Defects Enhancements Risks Recommendation
Writing The Test Closure Document When writing the Test Closure, it is acceptable to include URLs where appropriate, as in the Test Case location for: System Integration Testing (SIT) Automation Performance Defect location
Questions?
For any queries regarding the training material, feel free to contact: DIXIE NEILSON dixie.neilson@state.mn.us
Thank You! Dixie Neilson Lynn Worm Maheen Imam