Testing Metrics Influencing Major Test Release Decisions

Metrics are quantitative measures that provide a high-level view of the performance of a project or application. In a software project, it is crucial to measure quality, cost, and effectiveness. Testing metrics provide visibility into the overall health and readiness of the product, which supports decision-making for the next phase of activities. They also help identify gaps in the testing process, thereby improving the efficiency of testing.

Why track testing metrics?

It is always recommended to define appropriate metrics to monitor, manage, and report the progress of testing. Without defined and tracked metrics, the team cannot understand the test coverage and quality of the application, which makes it very difficult to take strategic decisions on further steps such as releases and process improvements.

Key Testing Metrics

Which test metrics to track depends on the project’s testing strategy and the development methodology being used. The list below is not exhaustive, but it covers key metrics that can be applied to most testing projects.

Each metric below is listed with its purpose and formula, grouped by category.

Test Coverage

Requirement Coverage
Purpose: To measure the extent to which the requirements are covered by test cases. This metric helps in ensuring proper test coverage for the requirements.
Formula: [No. of requirements mapped to test cases / Total no. of requirements] * 100

Test Execution Coverage
Purpose: To measure the extent to which the application has been tested at any given point of time – what is completed and what is pending. This metric helps in the timely completion of test execution by adjusting resources, prioritizing team tasks, etc.
Formula: [No. of test cases executed / Total no. of test cases] * 100

Test Execution Summary
Purpose: To understand the health of the application through test case execution results – how many test cases were executed, how many of them failed, etc.
Measures:
· No. of test cases passed
· No. of test cases failed
· No. of test cases not executed
· No. of test cases blocked
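
As a rough illustration, these coverage figures can be computed directly from counts exported by a test management tool. The Python sketch below uses purely hypothetical numbers; substitute the counts from your own project.

```python
# Hypothetical counts exported from a test management tool.
total_requirements = 120
requirements_mapped_to_tests = 108

total_test_cases = 450
executed = 390
passed = 352
failed = 25
blocked = 13
not_executed = total_test_cases - executed

# Coverage formulas from the list above.
requirement_coverage = requirements_mapped_to_tests / total_requirements * 100
execution_coverage = executed / total_test_cases * 100

print(f"Requirement coverage:    {requirement_coverage:.1f}%")
print(f"Test execution coverage: {execution_coverage:.1f}%")
print(f"Execution summary: {passed} passed, {failed} failed, "
      f"{blocked} blocked, {not_executed} not executed")
```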

Quality

Total Percentage of Critical Defects
Purpose: To identify the percentage of critical defects. The metric helps in understanding the quality of the product through the number of critical defects reported.
Formula: [No. of critical defects / Total no. of defects reported] * 100

Defect Aging
Purpose: To calculate the average duration for which the defects of a release/project remain in the OPEN state. The metric helps in agreeing on SLAs for defect fixing, depending on priority.
Formula: Average of [Defect closed date – Defect submitted date] across all defects in a project/release

Defect Density
Purpose: The number of confirmed defects divided by the size of the software/module. The higher the defect density, the poorer the quality of the software/module. The metric helps in monitoring the health of the application at the module level.
Formula: Total no. of defects / Total no. of modules

Defect Slippage into Production
Purpose: To identify the percentage of defects that slipped into the production environment. The metric helps in understanding the effectiveness of the testing team.
Formula: [No. of defects found in production / Total no. of defects] * 100

Environment Downtime
Purpose: To calculate the number of hours for which an environment (QA or production) or an important module of the application was unavailable. The metric helps in understanding the stability of the environment.
Formula: No. of hours for which the environment was not available
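
The quality formulas can likewise be computed from a defect export. The sketch below assumes a simple list of defect records with illustrative field names (severity, submitted, closed, found_in); these are placeholders, not fields of any particular defect-tracking tool.

```python
from datetime import date

# Hypothetical defect records for one release.
defects = [
    {"severity": "Critical", "submitted": date(2024, 3, 1), "closed": date(2024, 3, 5), "found_in": "QA"},
    {"severity": "Major",    "submitted": date(2024, 3, 2), "closed": date(2024, 3, 9), "found_in": "QA"},
    {"severity": "Minor",    "submitted": date(2024, 3, 4), "closed": date(2024, 3, 6), "found_in": "Production"},
    {"severity": "Critical", "submitted": date(2024, 3, 7), "closed": date(2024, 3, 8), "found_in": "QA"},
]
total_modules = 8

# Percentage of critical defects.
critical = sum(1 for d in defects if d["severity"] == "Critical")
critical_pct = critical / len(defects) * 100

# Defect aging: average days between submission and closure.
aging_days = sum((d["closed"] - d["submitted"]).days for d in defects) / len(defects)

# Defect density per module.
defect_density = len(defects) / total_modules

# Defect slippage into production.
slipped = sum(1 for d in defects if d["found_in"] == "Production")
slippage_pct = slipped / len(defects) * 100

print(f"Critical defects: {critical_pct:.1f}%")
print(f"Average defect aging: {aging_days:.1f} days")
print(f"Defect density: {defect_density:.2f} defects per module")
print(f"Defect slippage into production: {slippage_pct:.1f}%")
```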

Productivity

Schedule Variance
Purpose: The deviation of planned vs. actual schedule. This metric indicates how far ahead of or behind schedule the project is.
Formula: Schedule Variance = Actual value – Estimated value
· Positive schedule variance – ahead of schedule
· Negative schedule variance – behind schedule
· Zero schedule variance – on schedule

Effort Variance
Purpose: The deviation of planned vs. actual effort. This metric indicates the difference between the estimated and actual effort in hours.
Formula: Effort Variance = Actual effort – Estimated effort
· Positive effort variance – took more effort than estimated to complete the planned work
· Negative effort variance – took less effort than estimated to complete the planned work

Test Case Creation Productivity
Purpose: To measure the test case creation productivity of the team.
Formula: No. of test cases prepared per person per day

Test Case Execution Productivity
Purpose: To measure the test case execution productivity of the team.
Formula: No. of test cases executed per person per day
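
A minimal sketch of the productivity calculations, using hypothetical figures. For schedule variance, "value" is interpreted here as the amount of work completed to date, so that a positive result means ahead of schedule, matching the convention above; that interpretation is an assumption.

```python
# Hypothetical figures; in practice these come from the project plan and timesheets.

# Schedule variance: "value" is taken as work completed to date (e.g., planned hours
# of work delivered), so actual > estimated means the team is ahead of schedule.
estimated_value = 300   # work planned to be completed by today
actual_value = 320      # work actually completed by today
schedule_variance = actual_value - estimated_value

# Effort variance: positive means the work took more effort than estimated.
estimated_effort_hours = 480
actual_effort_hours = 510
effort_variance = actual_effort_hours - estimated_effort_hours

# Productivity: test cases per person per day.
test_cases_prepared = 180
test_cases_executed = 390
team_size = 4
working_days = 10
creation_productivity = test_cases_prepared / (team_size * working_days)
execution_productivity = test_cases_executed / (team_size * working_days)

print(f"Schedule variance: {schedule_variance:+d} (positive = ahead of schedule)")
print(f"Effort variance:   {effort_variance:+d} hours")
print(f"Test case creation productivity:  {creation_productivity:.1f} per person per day")
print(f"Test case execution productivity: {execution_productivity:.1f} per person per day")
```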

Test Automation

Test Automation ROI
Purpose: To calculate the effort savings realized in manual testing due to the effort spent on automation over a period. The metric helps in understanding the value the project gains through test automation.
Formula: [Manual execution time * No. of iterations] – [Development effort + Maintenance effort]

Test Automation Coverage
Purpose: To measure the test automation coverage of the product.
Formula: [No. of automated test cases / Total no. of test cases] * 100

Test Automation Effort
Purpose: To identify the effort spent on test automation activities (development and maintenance).
Formula: No. of person-days spent on test automation development/maintenance

Test Execution Time
Purpose: To understand the turnaround time for automated test script execution.
Formula: Test automation script execution time in hours

Test Automation Stability Rate
Purpose: To understand the stability of the test scripts.
Formula: [No. of false failures / Total no. of test scripts] * 100
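
The automation metrics can be computed the same way. The figures below are hypothetical; note that the stability-rate formula above counts false failures, so a lower value indicates more stable scripts.

```python
# Hypothetical automation figures for one release cycle.
manual_execution_hours = 40      # one full manual regression pass
iterations = 6                   # regression passes run in the period
development_effort_hours = 120   # effort to build the automated suite
maintenance_effort_hours = 30    # effort to keep the scripts up to date

total_test_cases = 450
automated_test_scripts = 270
false_failures = 9               # script/environment failures, not product defects

# ROI expressed as hours saved over the period.
roi_hours = (manual_execution_hours * iterations) - (
    development_effort_hours + maintenance_effort_hours)

automation_coverage = automated_test_scripts / total_test_cases * 100

# As defined above, this is effectively a false-failure rate: lower is more stable.
stability_rate = false_failures / automated_test_scripts * 100

print(f"Test automation ROI: {roi_hours:+d} hours over the period")
print(f"Test automation coverage: {automation_coverage:.1f}%")
print(f"False-failure rate: {stability_rate:.1f}%")
```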

How to track the metrics?

Although the testing team needs to put in some effort to track these metrics, the advantages outweigh that effort. To make tracking easier, many project management tools such as Jira, Azure DevOps, and Rally provide live dashboards for many of these metrics. Alongside these project management tools, reporting tools such as Power BI and Tableau help make the metrics intuitive and easily understandable through graphs and charts.
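
As a rough sketch of how computed metrics could feed such a reporting tool, the snippet below writes a metrics snapshot to a CSV file that a Power BI or Tableau dashboard can consume. The file name and metric values are hypothetical.

```python
import csv
from datetime import date

# Hypothetical metric snapshot for one reporting period.
metrics = {
    "Requirement Coverage (%)": 90.0,
    "Test Execution Coverage (%)": 86.7,
    "Critical Defects (%)": 12.5,
    "Defect Slippage into Production (%)": 4.2,
    "Test Automation Coverage (%)": 60.0,
}

# Write one row per metric; most reporting tools can ingest CSV directly.
with open("test_metrics_snapshot.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["Date", "Metric", "Value"])
    for name, value in metrics.items():
        writer.writerow([date.today().isoformat(), name, value])
```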

By: Srivalli Kolli and Venu Gangishetty