QA metrics: beyond discovering defects

Lauren Gilmore
November 11th, 2020

There are numerous test metrics for measuring the success of mobile applications. But in a world of agile software testing, success is about more than discovered defects. What other results matter for software quality and for end users? Which QA metrics help validate the efficacy of your efforts?

When it comes to QA metrics, there is no one size fits all. Just as in software development, you'll want to align your benchmarks with your goals for the project. While a vast array of metrics exists, there are seven baseline points to look at:

- Test coverage: how far and wide tests are spread over the codebase
- Flakiness: broken and unreliable automated tests that aren't providing value
- Time to Test: the amount of time it takes to run and report on a set of tests
- Time to Fix: the amount of time between when something breaks and when it's fixed
- Escaped defects: the number of defects that make it to customers
- Fixed Issue Rate: the ratio of resolved bugs to reported bugs, a measure of the quality of bug reports and the domain knowledge of testers
- NPS (net promoter score): a percentage rating of how likely customers are to recommend the app

These agile QA testing metrics are weighted differently from project to project, but together they give each project a basic framework to consider before work starts. You can then align your testers' expertise to measure the metrics that matter most, not only to the quality of the build but to the user's expectations of the final product. How? Consider the following suggestions.

Not all software is equal

And neither is the measurement of success.
As developers build a product, they usually take a market-fit approach first. In these early stages, a Minimum Viable Product (MVP) is often enough to test the market and validate an idea. On the user side, there is inherent value in being among the first to try a new product, and it's a given that initial users will find bugs. At this stage, testers tend to root out the "showstoppers": the breaking bugs that could make the software unusable. But as software moves through the product life cycle, quality and features should both increase significantly. For this reason, it's essential to have clear communication between developers and testing teams so that they're aligned on what success looks like for the finished product.

What are the baseline QA metrics?

Bounce it

In a mature project, QA teams reduce the time between tests and become more efficient at detecting escaped defects. A useful measurement here is the bounce rate, which indicates user engagement and app stickiness based on the percentage of users who navigate away from the app. Do you know how many of your users stick around after running into a bug? A significant bounce rate means you'll need to fine-tune your testing to find even smaller bugs. If bugs persist, focus on efficiencies in your development-to-QA communication to shorten the time between tests. This lets you complete more tests at fewer hours per testing cycle.

Where they are

Hitting the right device or OS is also a consideration when developing for success. If 90 percent of your customer base is on Android devices, you'll want to test more heavily there. The measurement here could be the number of users affected by a defect on each platform. Your Android tolerance might be 0.5 percent of affected users, because you want to keep those users from bouncing, while a more significant tolerance could be given to other systems.
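As a rough illustration of this platform-weighted approach, the sketch below compares each platform's share of users affected by a defect against a per-platform tolerance. All user counts, platform names, and thresholds here are hypothetical, invented for the example (only the stricter 0.5 percent Android tolerance echoes the text):

```python
# Sketch: per-platform defect impact vs. tolerance thresholds.
# All counts and thresholds below are illustrative, not real data.

def affected_share(affected_users: int, total_users: int) -> float:
    """Percentage of a platform's user base hit by a defect."""
    return 100.0 * affected_users / total_users

# Hypothetical user counts; "tolerance_pct" is the share of affected
# users a team decides it can live with on that platform.
platforms = {
    "android": {"total": 90_000, "affected": 700, "tolerance_pct": 0.5},
    "ios":     {"total": 9_000,  "affected": 150, "tolerance_pct": 2.0},
    "linux":   {"total": 1_000,  "affected": 10,  "tolerance_pct": 5.0},
}

for name, p in platforms.items():
    share = affected_share(p["affected"], p["total"])
    flag = "OVER TOLERANCE" if share > p["tolerance_pct"] else "ok"
    print(f"{name}: {share:.2f}% affected ({flag})")
```

With these made-up numbers, Android's 0.78 percent exceeds its 0.5 percent tolerance even though far more Linux users (as a share) were affected, which is the point of weighting by platform.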
If one percent of your customers use Linux, for example, you'd want to spend less time there but still catch those original "showstoppers." Using this model, you can add extra weight to bugs found on each system: a bug you label as minor on desktop could earn a critical rating on a mobile device.

Consider using a Defect Severity Index. Assign each severity level a weight: critical = 8 points, high = 6 points, medium = 3 points, low = 1 point. Multiply the number of defects in each category by its weight, sum the results ((number of critical defects × 8) + (number of high defects × 6) + (number of medium defects × 3) + (number of low defects × 1)), then divide by the total defect count.

Meticulous and monitored

The software testing process must be meticulously planned and monitored to succeed. But like any project, QA testing evolves, and the right metrics help you track the efficacy of QA activities. The metrics you start with in the early stages of development will likely not be the ones you use later in the life cycle. Establish the markers of success in the planning stage, then compare each metric against those markers after the actual process. Fine-tune each testing phase to match not only the maturity of the product but also the customers you're serving. Development and QA teams should be on the same page about what they're trying to achieve and what success looks like in each phase. That keeps everyone's expectations aligned, which ultimately leads to better satisfaction.

Ready to turn metrics into success stories? Contact Testlio today.
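The Defect Severity Index described above (weighted defect counts divided by total defects) can be sketched in a few lines. This is a minimal illustration, assuming the weights given in the text; the function name and sample defect counts are hypothetical:

```python
# Sketch of a Defect Severity Index using the weights from the text:
# critical = 8, high = 6, medium = 3, low = 1.

SEVERITY_WEIGHTS = {"critical": 8, "high": 6, "medium": 3, "low": 1}

def defect_severity_index(counts: dict) -> float:
    """Weighted sum of defects divided by the total defect count."""
    total = sum(counts.values())
    if total == 0:
        return 0.0  # no defects found in this cycle
    weighted = sum(SEVERITY_WEIGHTS[sev] * n for sev, n in counts.items())
    return weighted / total

# Hypothetical defect counts for one test cycle.
cycle = {"critical": 2, "high": 5, "medium": 10, "low": 8}
print(defect_severity_index(cycle))  # (2*8 + 5*6 + 10*3 + 8*1) / 25 = 3.36
```

Tracking this index per release or per platform gives a single number to watch over time, rather than comparing raw bug counts of very different severities.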