
QA metrics: beyond discovering defects

  1. Test coverage: how far and wide tests are spread over the codebase
  2. Flakiness: unreliable automated tests that pass or fail inconsistently and therefore aren’t providing value
  3. Time to Test: the amount of time it takes to run and report on a set of tests
  4. Time to Fix: the amount of time between when something is broken and when it’s fixed
  5. Escaped defects: the number of defects that make it to customers
  6. Fixed Issue Rate: the ratio of resolved bugs to reported bugs as a measure of the quality of bugs reported and domain knowledge of testers
  7. NPS (net promoter score): a score measuring how likely customers are to recommend the app
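Several of these metrics reduce to simple ratios. As a rough illustration only, here is a minimal Python sketch, with hypothetical counts, of how escaped defect rate, fixed issue rate, and NPS might be computed:

```python
# Minimal sketch of a few of the metrics above, using hypothetical counts.

def escaped_defect_rate(escaped: int, total_defects: int) -> float:
    """Share of all known defects that reached customers."""
    return escaped / total_defects

def fixed_issue_rate(resolved: int, reported: int) -> float:
    """Ratio of resolved bugs to reported bugs."""
    return resolved / reported

def nps(promoters: int, passives: int, detractors: int) -> int:
    """Net promoter score: % promoters minus % detractors (range -100 to 100)."""
    total = promoters + passives + detractors
    return round(100 * (promoters - detractors) / total)

print(escaped_defect_rate(escaped=4, total_defects=80))  # 0.05
print(fixed_issue_rate(resolved=72, reported=80))        # 0.9
print(nps(promoters=120, passives=50, detractors=30))    # 45
```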

Not all software is equal

And neither is the measurement of success.

When developers set out to build a product, they usually take a market-fit approach first. In these early stages, a Minimum Viable Product (MVP) is often enough to test the market and validate an idea. On the user side, there is inherent value in being among the first to try a new product, and it’s a given that early users will find bugs.

In these early stages, testers tend to focus on rooting out the “showstoppers”: the breaking defects that could make the software unusable. But as software moves through the product life cycle, both quality and features should increase significantly.

For this reason, it’s essential to have clear communication between development and testing teams so that they’re aligned on what success looks like for the finished product. What are the baseline QA metrics?

Bounce it

In a mature project, QA teams shorten the time between test cycles and become more efficient at detecting escaped defects. A useful measurement here could be the bounce rate: the percentage of users who navigate away from the app, which indicates user engagement and app stickiness.

Do you know how many of your users are sticking around after running into a bug?
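One way to sketch that measurement, assuming hypothetical session records with a made-up left_app flag for users who encountered a bug:

```python
# Hypothetical session records: whether each user who hit a bug left the app.
sessions_with_bug = [
    {"user": "u1", "left_app": True},
    {"user": "u2", "left_app": False},
    {"user": "u3", "left_app": True},
    {"user": "u4", "left_app": False},
    {"user": "u5", "left_app": False},
]

bounced = sum(1 for s in sessions_with_bug if s["left_app"])
bounce_rate = 100 * bounced / len(sessions_with_bug)
print(f"Post-bug bounce rate: {bounce_rate:.1f}%")  # Post-bug bounce rate: 40.0%
```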

Where they are

Your tolerance on Android might be 0.5 percent of users, because you want to keep them from bouncing, while a more generous tolerance could be given to other systems. If one percent of your customers use Linux, for example, you’d want to spend less time there but still catch those original “showstoppers.”

Using this model, you could assign extra weight to bugs depending on the system where they’re found. While you might label a specific bug on the desktop as minor, it could get a critical rating on a mobile device.
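A minimal sketch of that weighting, assuming hypothetical platform shares and severity levels (the names and thresholds here are illustrative, not a prescribed scheme):

```python
# Hypothetical weighting: escalate a bug's severity when it affects the
# platform that carries most of your users.
SEVERITY_ORDER = ["minor", "major", "critical"]

# Assumed share of the user base on each platform.
PLATFORM_SHARE = {"android": 0.55, "desktop": 0.30, "linux": 0.01}

def weighted_severity(base: str, platform: str) -> str:
    """Bump severity one level on platforms with at least half of the users."""
    level = SEVERITY_ORDER.index(base)
    if PLATFORM_SHARE.get(platform, 0) >= 0.5:
        level = min(level + 1, len(SEVERITY_ORDER) - 1)
    return SEVERITY_ORDER[level]

print(weighted_severity("minor", "desktop"))  # minor
print(weighted_severity("minor", "android"))  # major
```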

Meticulous and monitored

Using the right metrics helps you track the efficacy of QA activities effectively. The metrics you start with in the early stages of development, though, will almost certainly not be the ones you rely on later in the life cycle. Establish the markers of success in the planning stage, then compare them with where each metric stands after the actual process. Fine-tune each testing phase to match not only the maturity of the product but also the customers you’re serving.

Each development and QA team should be on the same page about what they’re trying to achieve and what success looks like in each phase. That keeps everyone’s expectations realistic, which ultimately leads to greater satisfaction.
