How QA Testing impacts the Product Manager’s KPIs

Metrics drive the modern software development workplace. That’s equally true for coders and product managers. And while developers may be judged primarily on their ability to check items off their to-do lists, those on the product management side of things are usually viewed through a business lens.

Although QA focuses on software quality and overall usability, it inevitably produces positive results for the business, helping create products that satisfy end-users. For product managers, knowing how to leverage the power of testing can go a long way toward ensuring they meet their targets.

Target acquired

Product managers are often judged on their ability to attract and capture new users. One major KPI is conversion rate: either attracting a new sign-up, or convincing a free user to upgrade to a paid plan. Every bug a prospective user hits during sign-up or upgrade is a reason to abandon the funnel, so QA that smooths those paths feeds directly into this number.
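As a quick illustration of the metric itself, conversion rate is simply the share of prospects who complete the desired action. The function and figures below are hypothetical:

```python
def conversion_rate(conversions: int, total_visitors: int) -> float:
    """Fraction of visitors who converted (signed up or upgraded)."""
    if total_visitors == 0:
        return 0.0
    return conversions / total_visitors

# Hypothetical numbers: 120 sign-ups out of 4,000 visitors.
rate = conversion_rate(120, 4000)
print(f"{rate:.1%}")  # prints "3.0%"
```

The same calculation works for upgrade conversions by swapping visitors for free-tier users.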

Houston, we have a problem

This presents an opportunity to gather data. In some cases, product managers are graded on the number of tests that pass or fail. And while failures can be indicative of personnel shortcomings, they can also suggest a disconnect between developers and the specifications they are working from.

This is one part of the equation where QA testing can harm KPIs: the more tests that exist, the more tests that can fail. But we’d argue that’s the wrong way of looking at things. Instead, a failing test should suggest that an employee requires more support or training, or that the product requirements need further clarification.


Identifying issues

At a more granular level, the same analysis can be applied to individual features.

Quality assurance testing is crucial here. By ruling out usability issues and technical defects, product managers can better diagnose why a feature is failing to take off.

Sometimes, there are causes that fall outside the typical QA purview. The feature could, for example, be extremely niche and only solve a problem faced by a handful of users. By eliminating the usual suspects, PMs are better positioned to understand their users, and ultimately guide the development of their product.

Click it or ticket

One metric frequently employed to gauge the quality of a product is the number of support tickets filed by users. A higher number of support requests is almost always indicative of a product that’s rife with problems, whether technical or usability-related.

Further down the line, glitchy software inevitably places a burden on technical support staff, forcing them to contend with an ever-growing number of support requests. And if the problem proves too difficult for them to resolve, they may be forced to escalate the ticket, entrenching the customer’s dissatisfaction and harming another crucial PM KPI.

Properly employed, QA testing can stop those tickets from being filed by catching defects before the faulty code is deployed. Unit and integration testing can ensure the technical side of things remains robust, while usability and exploratory testing can ensure the user-facing aspects of a product work smoothly. If users are reporting problems, the QA team can reproduce those problems and give the developers actionable bug reports to facilitate fixes.
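As a minimal sketch of how a unit test keeps a defect out of production, consider a hypothetical `apply_discount` function. The scenario and names are illustrative, not drawn from any particular codebase:

```python
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount.

    A buggy version that divided by 10 instead of 100 would
    fail the test below in CI, long before any user could
    file a support ticket about being undercharged.
    """
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # Regressions here block the release rather than reaching users.
    assert apply_discount(50.0, 20) == 40.0
    assert apply_discount(19.99, 0) == 19.99

test_apply_discount()
print("all checks passed")
```

Tests like this run on every change, so the cost of a defect is a failed build instead of an escalated ticket.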

It’s all about the users

Product managers are ultimately judged on the merits of their work. KPIs merely quantify whether something meets standards or desperately needs sprucing up. QA testing is an inevitable part of this process, either by highlighting flaws or stopping them from being deployed in the first place. PMs shouldn’t fear it — they should embrace it.
