Continuous testing has revolutionized software quality over the past decade, and automation has accelerated that shift. Automated software testing offers much broader test coverage while being fast and cost-effective.

However, software companies still have a major need for manual testing for the types of testing that require human insight and judgment, such as usability and exploratory testing. While automated testing is growing, manual testing still plays a critical role in Quality Assurance. Each type is valuable and suited to specific purposes. So what is the best use for each, and how can companies resourcefully integrate the two?

To answer those questions, we interviewed Quality Assurance industry veteran John Ruberto. John has 20 years of experience leading QA teams at companies like Intuit and Boeing, and joined Testlio as a Senior QA Project Manager.

Key Topics:

  • Applied examples of how Quality Assurance teams currently use automated suites, such as SauceLabs and others, while also leveraging a global network of expert testers for location, language, and usability testing, plus 24/7 coverage.
  • Best use cases for automated software testing and manual testing and how to determine which is the better fit for your software tests
  • Practical examples where leading companies have succeeded by using a combination of automated and manual testing as part of their pipeline
  • A 3-tier Continuous Integration pipeline, highlighting where manual testing can be incorporated alongside automated testing

Interview with Testlio Senior QA Project Manager, John Ruberto.

Hi John, can you let our readers know how you got started in QA?

A few (well, many) years ago, I was a development manager and was offered the position of test manager at a new company, which is where I started to learn about the challenges of QA. The first release was easy enough, but soon the testing effort began to grow exponentially with each release, while the time allocated was shrinking. I learned about regression testing the hard way.

Since then, I’ve been leading testing teams in a huge range of industries: Telecom (Alcatel), Finance (Intuit), Travel (SAP-Concur), and Payments (Clover/First Data).

Before we get into pros/cons of each, would you give an overview of where automated testing is most commonly used in software testing today?

Automated tests are software that is written to test software, so generally, the expected results should be predictable. Automated tests are generally used in three main areas: Unit tests, API tests, and UI-driven tests.  

Unit tests are usually written in the same language as the underlying code and are tightly coupled to that code. A unit test often tests a single function (the unit) and replaces the rest of the system with stand-in code called “mocks.” For example, a unit might calculate the sales tax on an item given the base price and the zip code of the customer.

The purpose of unit testing is to make sure the code works. Since the rest of the system is mocked out, unit tests can generally inject every permutation possible for that function – including errors and exceptions. Another advantage is they run fast and are often executed each time the code is built.  For all these reasons, unit tests are generally a good fit for automated testing.
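To make this concrete, here is a minimal sketch of what such a unit test might look like in Python. The `sales_tax` function and the `rate_lookup` service it depends on are hypothetical, invented for illustration; the point is how a mock isolates the unit and lets the test inject both normal values and errors.

```python
from unittest import mock

# Hypothetical unit under test: computes sales tax from a base price and a
# zip code, delegating the rate lookup to an external service.
def sales_tax(base_price, zip_code, rate_lookup):
    rate = rate_lookup(zip_code)  # in production, a call to a tax-rate service
    return round(base_price * rate, 2)

def test_sales_tax_uses_rate_for_zip():
    # The mock replaces the real rate service, so the test is fast and isolated.
    fake_lookup = mock.Mock(return_value=0.0825)  # pretend 8.25% for this zip
    assert sales_tax(100.00, "78701", fake_lookup) == 8.25
    fake_lookup.assert_called_once_with("78701")

def test_sales_tax_propagates_lookup_errors():
    # Mocks also let us inject failures the real service rarely produces.
    failing_lookup = mock.Mock(side_effect=TimeoutError)
    try:
        sales_tax(100.00, "78701", failing_lookup)
        assert False, "expected TimeoutError"
    except TimeoutError:
        pass
```

Because nothing outside the function runs, tests like these execute in microseconds, which is what makes running them on every build practical.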

API tests exercise the communication with another app and make sure the API is functioning as expected. API tests run fast, and they are also very reliable — they rarely produce false results (the failures known as flaky tests). Thus, it’s another area where we commonly see automated testing.

UI tests simulate users interacting with the app. They type information into fields, press buttons, click the mouse and read information off the screen to verify its proper behavior. It’s also feasible now to check the results visually, in a smart way, across many different browsers/devices in parallel. 

Since UI-driven tests interact with the app like a user, they are often thought of as a replacement for human-based testing. However, there are several limitations to fully replacing humans. Since the entire app is involved, UI-driven tests tend to run the slowest. And as teams make frequent improvements to the UI, these tests often report a failure when the real cause was a UI improvement, not a true bug.

So, while automated testing does so much, where do you see manual testing still playing a strong role?

We know that computers do an excellent job with well-defined activities, but people still need to determine the goals, interpret results, and check for solutions. So, while automated testing does a lot of the heavy lifting, a human still needs to write the automated scripts and to interpret the test results to make sure they match the goals.

Humans have the advantage that they can adapt to the inputs and outputs: you can give a skilled tester an outline of what the app should do, and they can figure out how best to test that capability. Also, only humans can provide judgment: they can judge the reasonableness of a result when the outcome is not deterministic, such as a recommendation produced by a learning algorithm that is not completely predictable.

There are a few areas where humans are much better equipped for testing than automated scripts: 

  • Exploratory testing: Exploring the app beyond the steps given in a test case. This is often where the most interesting things happen. 
  • Usability testing: Skilled testers know the principles of usability design and can give qualitative feedback on a particular design, an area that is almost impossible to replicate with automated tests.
  • Localization testing: Language localization works best when you use native speakers of that language, who can tell you much more than whether the correct words are used: they can tell you whether the tone is appropriate and culturally relevant.

Related: 13 reasons why manual testing can never be replaced

What about regression testing? Is this usually manual or automated?

Regression tests are good candidates for automation, but in reality, I would estimate 10-20% of regression cases are automated across the industry.  The things that get in the way are time, talent, technology, and priorities.

Related: Let’s talk about automated regression testing

Can you give us an example of how a Testlio client integrates both automated and manual testing services?

One of our clients is a national TV network that produces apps for all of the major platforms. They need a mix of manual testing to ensure a great user experience, as well as automated testing to verify the extensive tracking data that they collect.

Today, we expect to watch a TV show anywhere, on any of our devices. These expectations add tremendous complexity to testing mobile apps. Think of all the smart TVs, mobile phones, tablets, gaming consoles… Plus, the content is changing literally every minute, and the UI has to be easy and enjoyable. All of these factors are challenges for automated tests to validate, so humans are more effective here. 

In other words, in the cases of incredible complexity and quickly changing environments, manual testing, whether functional, usability or regression testing, is often better suited. Humans can adapt right away, and use their brain to see what the app should be doing, or how it should operate. For automated tests, it takes some programming effort to adapt the tests, which takes time and money.

However, this same client also uses automated testing for some of the more repeatable work. The network relies on big data to better serve its users. Each event in the app can add 50+ individual data points… Verifying all of this data manually would be extremely tedious and error-prone. Verifying volumes of data like this is where automated tests shine.
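As a rough illustration of this kind of check, here is a sketch of automated verification of tracking events. The event shape below (field names, numeric timestamp) is entirely hypothetical, not the client's actual schema; it just shows how a script can validate thousands of data points that would be unreasonable to eyeball.

```python
# Hypothetical required fields for every tracking event.
REQUIRED_FIELDS = {"event_name", "timestamp", "device_id", "session_id"}

def verify_event(event):
    """Return a list of problems found in one tracking event."""
    problems = []
    missing = REQUIRED_FIELDS - event.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if "timestamp" in event and not isinstance(event["timestamp"], (int, float)):
        problems.append("timestamp is not numeric")
    return problems

def verify_batch(events):
    """Check a whole batch at once; returns {index: problems} for bad events."""
    return {i: p for i, e in enumerate(events) if (p := verify_event(e))}
```

A nightly job can run `verify_batch` over every event emitted during a test session and flag only the offending records, turning an impossible manual chore into a sub-second check.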

See How Testlio Works by Combining Services + Expert Testers + Testing Software

How does the client merge both the manual testing with automated testing cycles, all while adhering to a continuous testing pipeline? 

The client uses a continuous integration pipeline to fire off a set of tests each time the app is built, and a more comprehensive suite runs nightly. Each evening, the CI system sends the build to a device farm where many automated tests are run in parallel. The initiation of the tests is completely automatic and controlled by the CI system. When the test results are ready, the automated suite sends the results back to the client. 

The way they navigate this process? Using a QA vendor like Testlio, who helps them manage both the automated and manual testing. At Testlio, we manage the manual testing in parallel to the automated tests. Pretty much the same thing happens: the CI system sends the build to the Testlio platform, then the testers start their test cycle and run it overnight. Testlio sends the results for review first thing in the morning. Both methods — automated tests running in the cloud and Testlio’s manual testing — run overnight and provide results in the morning.

So what are the major takeaways for clients who need to use both automated and manual testing? What have you learned along the way?

One main lesson is that automated testing is software development. The difference is that the software you develop tests other software, and is probably used internally.  So, it’s very important to use software development principles in your approach to automation: source code management, using design patterns, investing in frameworks, and yes, testing the tests are all important in automation (and in software development).  

In that vein, one big takeaway is for testing teams to take an incremental approach to automation, rather than the “big bang” approach.

One time, I had an automation team go off and create a regression test suite for an app that I was leading.  They took several months, and the plan included things like researching the best tools, creating a framework, designing the tests, coding and testing the tests. All pretty good, except for the “several months” part. By the time they were done, the original app had gone through a dozen releases – and the automated tests did not work with the new version of the app. The automated tests worked with the older version.  

Now, my approach is to ask for one test to be automated first, then incrementally add to that test, keeping the test suites running on a daily basis, and updated alongside the app as it’s updated. 

Can you think of another use case where leading companies have succeeded using a combination of automated and manual testing as part of their pipeline?

One of Testlio’s clients is a very successful financial services app, with over a million active users. The app, and its backend, comprise at least a million lines of code, and they release updates every two weeks.  

So, every two weeks they are changing a part of the app. But, they also expect the rest of the app to continue working fine. So they are adding or changing 10,000 lines of code, but need to ensure that the other 1 million lines continue working as expected.  

To do this, they have a tremendous amount of automated tests that ensure the baseline behavior of the app remains the same. Financial data and workflows are natural allies of automated tests because the results are predictable. It’s also vital that they do not introduce bugs, because they are working with their users’ money. A financial bug leaking into production could be catastrophic. 

Adding new tests and keeping the others is a very difficult job, and the client’s internal QA team is mostly focused on their automation suite. But, they have a release every two weeks and need to do extensive testing, so that is where they rely on a QA vendor like Testlio.  

We bring a team of 20-30 expert testers to do all of the human-based testing in one day, which allows the client team to concentrate on automation.

These manual tests include exploratory testing, regression testing, and usability testing. 

Let’s say you’re a CTO of a consumer app company, how would you determine where to use manual vs. automated testing, and how would you approach integrating them?

Let’s take an example like an online store that manages a shopping cart. Remember the definitions of unit tests, API tests, and UI-driven tests that I mentioned earlier?

  • A unit test would verify that a function gives the correct sales tax for a given zip code
  • API tests verify the shopping cart total cost is calculated correctly
  • The UI-driven test verifies that the customer can add and delete items from the cart. 

To test the shopping cart functionality in total, I would use a combination of automated and manual tests. This allows us to optimize speed, accuracy, and test coverage for each type of testing.

The unit tests might verify that the correct sales tax is used for each and every zip code. There are over 41,000 zip codes in the US. With a very fast unit test, it’s possible to actually test all of the combinations at some point using a data-driven approach. With the thousands of state, county, and city combinations, it’s not economically feasible to test these manually, so this is an ideal use case for automated checks.
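A data-driven test like that might be sketched as below. The rate table here is hypothetical and shows only three entries; in practice it would be loaded from the same source of truth the tax team maintains, with one row per zip code.

```python
# Hypothetical tax-rate table; a real one would hold ~41,000 US zip codes.
RATE_TABLE = {
    "10001": 0.08875,  # New York, NY
    "78701": 0.0825,   # Austin, TX
    "97201": 0.0,      # Portland, OR (no state sales tax)
}

def sales_tax(base_price, zip_code):
    return round(base_price * RATE_TABLE[zip_code], 2)

def test_all_zip_codes():
    # One loop covers every row of the table. At microseconds per case,
    # even a full 41,000-zip sweep completes in well under a minute.
    for zip_code, rate in RATE_TABLE.items():
        assert sales_tax(100.00, zip_code) == round(100.00 * rate, 2)
```

The key property of the data-driven approach is that adding coverage means adding a row of data, not writing a new test function.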

Related: How to Develop an Automated Testing Strategy

An API test might calculate the total amount due for the contents and configuration of the shopping cart. The API test might implement a number of scenarios where various items are added to the cart, all with different prices, taxable status, shipping distance (and weight), and discounts/gift cards applied. Pretty quickly, this adds up to many thousands of permutations. The good news: an API test can check each combination in less than a second.
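A permutation sweep of that sort might look like the sketch below. The `cart_total` function is a hypothetical stand-in for a call to a real cart API, and the dimensions swept (price, quantity, tax rate, discount) are illustrative; the loop shows how a few short lists multiply into dozens of checked combinations.

```python
from itertools import product

# Hypothetical stand-in for the cart API's total-due calculation.
def cart_total(price, quantity, tax_rate, discount):
    subtotal = price * quantity * (1 - discount)
    return round(subtotal * (1 + tax_rate), 2)

def test_cart_permutations():
    prices = [0.99, 10.00, 199.99]
    quantities = [1, 2, 10]
    tax_rates = [0.0, 0.0825]
    discounts = [0.0, 0.10, 0.25]
    # 3 * 3 * 2 * 3 = 54 combinations here; real suites sweep thousands.
    for p, q, t, d in product(prices, quantities, tax_rates, discounts):
        total = cart_total(p, q, t, d)
        assert total >= 0
        # Invariant: applying a discount must never increase the total.
        assert total <= cart_total(p, q, t, 0.0) + 1e-9
```

Checking invariants (totals are non-negative, discounts never raise the price) rather than hand-computed expected values keeps the oracle simple even as the permutation count grows.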

Finally, the UI-driven tests check that items can be added and removed from the cart in the UI. For the scenarios checked, the UI-driven automated test can verify the correct price is calculated. However, the weakness of the UI-driven test is that it may take many seconds to a minute for each combination, and UI-driven tests are prone to being flaky.  It’s much more efficient to check “the math” behind the shopping cart at the unit or API level, and leave the automated UI tests for basic UI functionality. 

With modern testing tools that use AI, it’s feasible to check some of the visual aspects automatically. The same test can be run across many browsers and an AI-driven visual check will make sure that each browser renders the UI properly. 

Now, what can the human test? Testers, either in-house or with an outsourced QA vendor, should manually check that the user experience is solid and matches your brand. Does the item list scroll smoothly as you add an item? Does the price update quickly? If the shopping cart makes additional item recommendations, are these recommendations relevant to the customer?  

While the automated tests (the unit & API tests) take care of validating the “math” behind the shopping cart, professional human testers can concentrate on the overall experience. 

Thanks so much John. This is a wonderful overview of how automated testing and manual testing work together to assure a better customer experience!

This is part of our series of articles titled “Ask a Quality Assurance Project Manager”. Do you have a question for us?  Or, would you like to hear more about how Testlio can help you elevate your testing? Drop us a line at hi@testlio.com or schedule a free demo.

Tim Ryan serves as the Director of Marketing for Testlio and spends his time between Austin, TX and New Orleans, LA.
Get our monthly email newsletter covering testing trends, insights, and events.

Sign up for blog updates.