The new norm in testing is to automate aggressively. It makes sense from a time-management perspective. Automated testing lets teams increase testing cadence and volume by avoiding the repetitive manual tasks that pile up each release, in turn saving vast amounts of time and money and getting to market faster. These benefits have us looking for more and more opportunities to automate, and can even create pressure to automate “everything”.
Yet many times, a trained human eye is the best way to get the results you want. Mobile app testing is a complex art. Despite the many advantages of automation, humans are still the best at evaluating the vast number of bugs and errors that inevitably crop up, and there are many specific cases where automated testing is not the best option. So how do you know where that line is?
Interacting with the real world
The increasing prevalence of the Internet of Things (IoT) brings the real world into software testing more often. Fitness trackers, health monitors, voice-activated devices, automotive diagnostics, and media players all interact with the real world, making those interactions difficult to simulate with automated test scripts.
User interactions with a physical device vary from person to person. Things like the force one uses to press a button, or the duration spent holding it down, vary wildly. Plus, each individual will have a different perception of usability. These are all difficult factors to program into a script.
Any type of manual manipulation of physical objects should be tested by people, at least for functional testing. Of course, if you are testing the durability of a button, you don’t want a person to press it 10,000 times (perfect for a robot). For most functional and usability testing with software that interacts in the real world, manual testing can’t be beaten.
Keep automation scripts simple
When writing scripts for automation, I’ve often said to keep it simple. In fact, “hermetic tests” or “atomic checks” are considered a test automation best practice. Writing elaborate scripts, especially scripts that span multiple platforms, takes far more time, and these scripts are prone to flakiness.
By breaking up complex testing into shorter scripts, you can re-use those scripts in later tests. This increases the shelf life of your existing scripts with minimal maintenance. So, before creating an elaborate end-to-end system-level automated test, see whether simpler tests can help the team be more effective.
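To illustrate the difference, here is a minimal sketch in plain Python. The `login` and `add_to_cart` functions, and the dict standing in for the app, are purely hypothetical stand-ins for a real mobile driver such as Appium; the point is the shape of the tests, not the API:

```python
# Hypothetical app actions -- a dict stands in for a real driver session.
def login(app, user):
    app["user"] = user
    return app.get("user") == user

def add_to_cart(app, item):
    app.setdefault("cart", []).append(item)
    return item in app["cart"]

# Atomic checks: each verifies exactly one behavior, starts from a known
# state, and can be re-used or re-ordered independently. A failure in one
# pinpoints itself without masking the others.
def test_login():
    app = {}
    assert login(app, "alice")

def test_add_to_cart():
    app = {"user": "alice"}  # known starting state, not inherited from a prior test
    assert add_to_cart(app, "headphones")

# Elaborate end-to-end script: chains every step, so a failure early on
# hides the state of everything after it, and the whole script must be
# maintained whenever any single step changes.
def test_whole_purchase_flow():
    app = {}
    assert login(app, "alice")
    assert add_to_cart(app, "headphones")
    # ... checkout, payment, confirmation ...

test_login()
test_add_to_cart()
test_whole_purchase_flow()
```

The two atomic checks above can be combined into larger flows later, which is exactly the shelf-life benefit described: the small scripts survive even when the end-to-end flow changes.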
Testing across Apps
Switching between apps during testing is difficult in most test automation frameworks. For example, imagine testing a marketplace app, where a buyer and a seller each have to take some action in their respective apps to complete a transaction. This scenario can be automated, but the investment to make the test repeatable is often quite high.
If you’ve ever changed your profile picture on Facebook and then used that same picture on Twitter, you’ll see why this has to be done manually. You’re interacting with different platforms, each with its own parameters. To ensure your media works correctly with each, you’ll need to test manually to find the optimal format. Because media formats and display surfaces vary so widely, it’s much easier and more cost-effective to check each one by hand.
The human eye is still the most effective visual processor on the planet. Once you’ve established a visual baseline, there are tools that help check that your UI is consistent across different platforms – but it’s the human that needs to establish that baseline.
Very early in your product’s cycle
You have a great idea for a product (or a feature in an existing product), so you build it. At this very early stage, you believe it’s a great idea, but you don’t know whether your users or customers share your enthusiasm. What matters most in this phase is getting the product into the hands of some customers and collecting their feedback. During ideation and growth, the product will change often, driven by rapid feedback from customers or from the team itself. Investing in test automation at this phase of the SDLC can double the engineering cost in areas that will change drastically. Would you be willing to refactor both your feature and its automated test scripts on a weekly basis?
Exploratory testing is usually the optimal approach to rapidly test the app. Another factor is that you are usually enhancing the product quickly based on customer feedback, and keeping an automation suite up to date with a very dynamic design is expensive. Your test automation strategy may benefit from waiting until the product is more stable and regression testing becomes a factor.
Say yes to a combination
It is not about manual versus automated testing; it is a question of IF and WHEN to automate a test case. In many instances a manual test makes more sense due to cost or testing cadence, but that doesn’t mean you should avoid test automation. Both manual and automated testing are investment decisions, so ROI should be kept in mind when building a combined strategy. There are many scenarios where automated testing is preferable for stable and routine testing tasks, and it can be a logical step forward from manual testing. By combining manual and automated testing into a single strategy, teams can increase overall test coverage by the number of tests and devices used in a single test session while keeping costs relatively low.
A blend of automation (to increase coverage and depth of testing without increasing overall costs) and a set of manual tests (for more complex and exploratory testing) is the best way to achieve engineering excellence that improves the end-user experience.