Meetu Arora, Sr. V.P., Quality Assurance, Naukri.com
Testing Team Journey @Naukri.com
2006 - Inception of Testing Team
2007 - Automation: QTP
2008 - Automation: WATIR
2010 - Agile testing team
2012 - Automation: Selenium
2015 - Entire team: Manual & Automation (Selenium)
Agenda
- Identify Need
- Identify Metrics/What
- Identify Path/How
- Prerequisites
- Implementation
- Our Measurements
- Results
Identify Metrics/What
Identify Path/How
Prerequisites
- Testing team capable of doing automation
- Test case consolidation & management
- Dedicated scrum teams vs. shared resources

Metric Baselines
- Post-live defect seepage
- Test case coverage
- Automation coverage (UT, IT, FT)
- Automation test flakiness
- Automation execution time
- Velocity/Delivery
- Planning efficiency
- Build quality
Implementation
Transition to Automation
35 manual testers & 3 automation testers → 38 manual + automation = POWER testers
Transition to Automation - Challenges
- Skill mismatch
- Team dynamics
- Lack of inclination to move towards automation
- High investment in terms of time and effort
Transition to Automation - Path
- Perseverance: don't fall back
- Very small steps; low-hanging fruit first (ROI)
- Tester empowerment through automation
- Focus on frameworks:
  - We created a Selenium POM code generator, which has been open sourced: https://github.com/naukriengineering/seleniumcodegenerator
  - TestNG and XSLT for reporting
  - Generic function library
- Coding guidelines and code review process
- Contests for motivation
- Provide migration channels
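The code generator above emits Selenium page objects for the team's Java/TestNG stack. As a minimal, framework-agnostic sketch of the Page Object Model pattern it automates, here is the shape in Python; the class name and locators below are hypothetical, not from the Naukri codebase:

```python
# Minimal Page Object Model sketch (hypothetical page and locators).
# The page class owns its locators and exposes intent-level actions,
# so test cases never touch raw selectors. `driver` can be any object
# exposing find_element(by, value), e.g. a Selenium WebDriver.

class LoginPage:
    # Locators live in one place; a code generator can emit these
    # directly from the page's DOM.
    USERNAME = ("id", "usernameField")
    PASSWORD = ("id", "passwordField")
    SUBMIT = ("id", "loginButton")

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()
        return self  # chain into the next page object in real suites
```

With this shape, a locator change touches one class instead of every test that logs in, which is what makes large manual-to-automation transitions maintainable.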
Test Case Consolidation and Management
To know more about how to integrate automation scripts with Jenkins, visit our blog: http://engineering.naukri.com/2015/05/integrate-your-automation-with-jenkins/
Testing Throughout
- Testing individual stories, and integration testing as you go along
- Progressive/parallel automation testing
- Minimizing automated test flakiness
- Reducing automated test execution time
- Running automated regression suites periodically using Jenkins
To know more about Continuous Testing @ Naukri, read our blog: http://engineering.naukri.com/2016/03/continuous-testing-naukri/
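The blog post above describes the team's actual Jenkins setup; as an illustrative sketch only, a periodic regression run can be expressed as a minimal declarative Jenkinsfile (the schedule, suite file name, and Maven property below are assumptions, not Naukri's configuration):

```groovy
// Minimal sketch of a nightly regression job (all names and the
// schedule are illustrative).
pipeline {
    agent any
    triggers {
        // Run once nightly; 'H' lets Jenkins spread the load.
        cron('H 2 * * *')
    }
    stages {
        stage('Regression suite') {
            steps {
                // Assumes a Maven + TestNG project with a regression suite file.
                sh 'mvn test -Dsurefire.suiteXmlFiles=regression.xml'
            }
        }
    }
    post {
        // Publish results so failing and flaky tests are visible per run.
        always { junit 'target/surefire-reports/*.xml' }
    }
}
```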
[Chart: Automation Coverage by month, April-March, for Naukri India and Naukri Gulf; each chart shows test case counts (automatable vs. not automatable) and % automation coverage out of automatable test cases. Naukri India: coverage rose from roughly 69% to the mid-90s as test cases grew to over 13,000. Naukri Gulf: coverage rose from roughly 25% to the low 90s as test cases grew to over 6,000.]
Automation Flakiness Reduction
- 15% to 5% in Naukri
- 45% to 7% in Mobile Apps
- 17% to 7% in NaukriGulf
To know more about how we were able to optimize our tests and reduce flakiness, visit our blog: http://engineering.naukri.com/2016/03/reducetest-automation-flakiness/
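The blog post above covers the team's actual techniques. One common flakiness fix worth sketching is replacing fixed sleeps with bounded polling waits, so tests stop failing when a page is merely slow rather than broken. The helper below is illustrative, not from the Naukri codebase:

```python
import time

def wait_until(condition, timeout=10.0, poll=0.25):
    """Poll condition() until it returns a truthy value, then return it.

    Raises TimeoutError if the deadline passes first. Replacing fixed
    time.sleep() calls with a bounded poll like this removes a large
    class of timing-dependent test failures.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError("condition not met within %.1fs" % timeout)
```

Selenium's own explicit waits (e.g. WebDriverWait) implement the same idea; the point is that the wait is as short as possible but as long as necessary.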
Automation Execution Time Reduction
By implementing Selenium Grid, we were able to cut our execution time to one fifth: from 25 hours to 5 hours.
To know more about our Selenium Grid implementation, visit our blog: http://engineering.naukri.com/2015/10/parallel-testing-at-naukri/
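Grid's win comes from running independent tests on parallel nodes, so wall-clock time approaches total suite time divided by node count. A toy scheduling sketch (illustrative only, no real Selenium involved) shows the arithmetic behind a 25-hour to 5-hour drop:

```python
def wall_clock_hours(test_durations, nodes):
    """Greedy longest-job-first assignment of independent tests to nodes;
    the suite finishes when the busiest node does."""
    loads = [0.0] * nodes
    for duration in sorted(test_durations, reverse=True):
        # Always hand the next test to the least-loaded node.
        idx = loads.index(min(loads))
        loads[idx] += duration
    return max(loads)
```

Twenty-five hours of independent one-hour tests on a 5-node grid finish in 5 hours of wall-clock time; in practice, uneven test lengths, shared fixtures, and session setup overhead keep the real speedup somewhat below the node count.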
Preventing Bugs
- Tester, developer, product owner, and architect are all part of backlog grooming
- The entire team focuses on defining the what and the how
- Product backlog grooming is done one iteration in advance
- Testers contribute test cases upfront during this period and add them to the user stories as acceptance criteria or alternate paths
Preventing Bugs
- Peer testing at the developer end
- Automated build verification tests have been created and are run before builds are handed to testers
- A progressive automation testing approach is used
Testing Understanding
- Put yourself in the customer's shoes: we encourage our scrum teams to have direct interaction with actual customers, both proactive and reactive
- Effective feedback loops: our tech support team regularly shares reports on issue patterns, which scrum teams use as inputs to design/test/improve systems
- Measure customer usage patterns and use them to design test cases: we regularly analyse user data patterns to create and refine our test strategy
Building the best system
- Build implicit requirements
- Focus more on building positive product scenarios
- Focus on bug causal analysis
- Peripheral testing: focus vs. defocus
To know more about peripheral testing, please visit our blog: http://engineering.naukri.com/2016/03/peripheral-testing/
Team responsibility for quality
- Everybody tests as and when needed
- Measure quality at various levels: Build Quality Meter, post-production issue seepage
Build Quality Meter
- Bug severity
- Bug type: functional, UI, product design, implicit, validation, incomplete requirement, insufficient impact analysis, integration environment, configuration, inadequate testing, DOA
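The slide does not give the meter's exact formula. A plausible sketch is a penalty-weighted score bucketed into the bands the build quality trend chart uses (Excellent 90+, Good 80-90, Average 70-80, Bad below 70); the severity weights below are invented for illustration and are not Naukri's:

```python
# Hypothetical build-quality meter: the weights are invented; only the
# score bands come from the build quality trend slide.
SEVERITY_WEIGHT = {"blocker": 10, "major": 5, "minor": 2, "trivial": 1}

def build_quality(bug_counts):
    """bug_counts: {severity: count} for one build. Returns a 0-100 score."""
    penalty = sum(SEVERITY_WEIGHT[sev] * n for sev, n in bug_counts.items())
    return max(0, 100 - penalty)

def band(score):
    """Map a score to the bands shown on the trend chart."""
    if score >= 90:
        return "Excellent"
    if score >= 80:
        return "Good"
    if score >= 70:
        return "Average"
    return "Bad"
```

Whatever the exact weights, the value of such a meter is that every build gets a single comparable number, so quality trends become visible sprint over sprint.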
Our Measurements - Velocity Report (indicative data has been used for illustration purposes)

Sprint                | Commitment | Completed Planned | Completed Unplanned P1/P2 | Completed Unplanned | Planned Live/Staging | Target
Sprint 15-28 Dec      | 39         | 19                | 2                         | 6                   | 6                    | 41
Sprint 29 Dec-11 Jan  | 40         | 27                | 3                         | 5                   | 1                    | 41
Sprint 12-25 Jan      | 44         | 39                | 2                         | 1                   | 2                    | 41
Sprint 26 Jan-8 Feb   | 42         | 25                | 5                         | 2                   | 10                   | 41
Sprint 9-22 Feb       | 44         | 30                | 3                         | 4                   | 5                    | 41
Sprint 23 Feb-7 March | 42         | 32                | 4                         | 2                   | 3                    | 41
Sprint 8-21 Mar       | 46         | 34                | 2                         | 5                   | 4                    | 41
Our Measurements - Build Quality Trend (indicative data has been used for illustration purposes)
[Chart: Build quality % per iteration (26th Jan'16 through 21st March'16), plotted against bug counts by type (functional, UI, validation, live, implicit, integration). Bands: Excellent [90+], Good [80-90], Average [70-80], Bad [less than 70]. Build quality across the four iterations ranged from 60.76 (Bad) to 93 (Excellent).]
Our Measurements - Functional Automation Coverage (indicative data has been used for illustration purposes)
[Chart: Number of stories per iteration (26th Jan'16-08th Feb'16 through 8th March'16-21st March'16), comparing Total Stories, Automatable, and Automated.]
Our Measurements - Bugs Reported on Live (indicative data has been used for illustration purposes)

Period        | Client Reported Issues (Data Fixes) | Client Reported Issues (Code Fixes) | Live Issues (Except Client Issues)
26 Jan-8 Feb  | 2                                   | 1                                   | 2
9-22 Feb      | 4                                   | 2                                   | 1
23 Feb-7 Mar  | 3                                   | 0                                   | 3
8-21 March'16 | 1                                   | 1                                   | 3
Results
- Post-live defect seepage: 50% reduction
- Test cases: 10K increase
- Automation coverage: 44% increase
- Automation script execution time: 20% reduction
- Automation flakiness: 20% reduction
- Velocity: 25% increase
- Build quality: 15% improvement
- Planning efficiency: 30% improvement
Key Takeaways
- Define the right metric: measure, review, improve
- For agile, testing throughout: focus on progressive/parallel automation testing, along with creating reliable tests that take minimal time to execute
- Defect prevention is the key
- Team vs. individual mindset