Some development teams jump into automated testing like it’s the holy grail. And it is—kind of. Automated testing is a great safety net for regression testing and for repetitive, routine checks.
But we’re strong believers in manual, exploratory testing. The best testing results come from QA professionals. Even as automated suites become more sophisticated, they still require human drivers. In fact, automated tests often start out as manual ones.
Here’s why developers need manual testers, whether external help or in-house.
1. There’s a whole bunch of testing that simply must be manual
User experience is probably the biggest reason why manual testing is important. We all could use valuable criticism from time to time (even developers!). When it comes not just to functionality but also to first impressions, there’s no replacement for the human eye.
While smoke tests can be automated, they too are better left for manual testing. It’s far quicker for a tester to poke around your app and see if it’s ready for hardcore testing than for a tester to write scripts that would do the same. And early-phase scripts won’t last, anyway.
Plus, only a human can double-check language use and other key localization factors in a product targeting multiple regions.
2. Automated testing empowers human testers
Like cars that brake for you in an emergency, automated testing is at work while you’re looking away.
Automated software testing saves time with repetitive jobs, so that manual testing efforts can center around coming up with creative use cases.
The most successful use of automated testing isn’t about trying to get it to behave like a human, but about enhancing overall product coverage by creating new, unique scripts.
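One common pattern here is converting an exploratory find into a permanent script. As a minimal sketch (the `slugify` helper and both cases are hypothetical, invented for illustration):

```python
def slugify(title):
    """Hypothetical app helper: turn a page title into a URL slug."""
    return "-".join(title.lower().split())

# Original scripted check: the happy path everyone thought of up front.
assert slugify("Hello World") == "hello-world"

# New check, converted from a manual tester's exploratory find:
# repeated spaces could plausibly produce a broken slug like "hello--world".
assert slugify("Hello   World") == "hello-world"
```

The second assertion only exists because a human went poking around; once it’s scripted, the suite guards that edge case forever.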
3. Bugs are found where you least expect them
Even when testing for specific use cases, testers can still find bugs that they weren’t necessarily looking for.
That’s a big deal. For some projects, the majority of bugs are actually found by testers who were looking for something else entirely. Automated testing can’t notice errors it wasn’t programmed to find.
4. Humans are creative and analytical
While we all like to bemoan the downfalls of being human (why can’t we fly?!), we do have our good qualities.
The skills and experience that testers bring to the table help them strategize every time they start a new session. For now, there’s no replacement for our quick mental processing speeds and our dope analysis!
5. Testing scripts have to be rewritten in agile
Working with constant feedback in an agile environment means fluid changes to the product flow, the UI, or even features. And nearly every time, a change entails a rewrite of your automated scripts for the next sprint.
New changes also affect the scripts for regression testing, so even that classic automation example requires a lot of updating in agile. That amount of work warrants consideration when a development team is trying to figure out where to invest resources.
6. Automation is too expensive for small projects
Not only do you have automation software to pay for, but you also face higher maintenance and management costs from script writing and rewriting, as well as setup and processing time.
For long-term projects and big products, the higher costs can be worth it. But for shorter, smaller projects, it can be a monumental waste of both time and money.
When calculating the potential ROI for an automation purchase, you have to factor in added man-hours, as well.
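A quick back-of-the-envelope sketch of that calculation (every figure below is illustrative, not a benchmark):

```python
# Annual costs of adopting automation (all numbers are made up).
tool_cost = 5000            # automation tool licence, per year
script_hours = 120          # writing and rewriting scripts, per year
maintenance_hours = 60      # setup, upkeep, triaging broken runs
hourly_rate = 50            # loaded cost per engineer-hour

manual_hours_saved = 300    # repetitive runs no longer done by hand

automation_cost = tool_cost + (script_hours + maintenance_hours) * hourly_rate
manual_savings = manual_hours_saved * hourly_rate

print(f"Automation cost: ${automation_cost}")
print(f"Hours recovered: ${manual_savings}")
print(f"Net benefit:     ${manual_savings - automation_cost}")
```

With these numbers the investment barely breaks even ($15,000 recovered against $14,000 spent); halve the project length and the man-hour line items barely shrink, so the math flips negative fast.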
7. Unless tightly managed, automation has a tendency to lag behind sprints
There’s a difference between what we hope technology can do for us, and the reality of what we can do with it.
With the constant script updates, it’s hard to keep automated testing in step with sprints. And it’s worthless to test fixes that are no longer current. Successful automation starts early and never falls more than one sprint behind.
If a development team doesn’t have the resources to make that happen, it might be better not to try (unless that team is making a long-term investment with plans to improve the process).
8. Manual testers learn more about the user perspective
Humans learn all day long, even faster than AI. You wouldn’t want to waste that knowledge, would you?
Because human testers often act like users, they provide far more value than just a read on how the product is currently performing. Testers can also help steer products in new directions through the issues they report and the improvements they suggest.
While automated tests codify the knowledge we have today, exploratory testing helps us discover and understand what we’ll need tomorrow.
9. Automation can’t catch issues that humans are unaware of
This goes back to point #3, that bugs are often found where we aren’t looking. But beyond that, there are also whole use cases and large risks that we may not be immediately aware of.
This natural blind spot can be mitigated with exploratory testing, ideally exploratory testing that feeds back into the development of new scripts.
No matter which forms of testing a team relies on, upfront strategizing is always necessary. But we can never expect to think of everything on the first go-around. For most of what was missed, manual testing is a much faster catch-all.
10. Good testing is repeatable but also variable
The most successful testing has a mix of two factors: repetition and variation. Automated testing is great for the continual checking process, but it’s just not enough. You also want variation, and some wild card use cases.
Combined, these two factors give the highest chance of achieving full product coverage.
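One way to get both factors in a single suite is to run the same fixed regression cases every time, then layer randomized inputs on top. A minimal sketch (the `parse_quantity` function is a hypothetical form-field parser, not from any real product):

```python
import random

def parse_quantity(text):
    """Hypothetical input parser under test: a 'qty' field on an order form."""
    value = int(text.strip())
    if value < 1:
        raise ValueError("quantity must be positive")
    return value

# Repetition: the same regression cases, every single run.
fixed_cases = ["1", " 7 ", "250"]
for case in fixed_cases:
    assert parse_quantity(case) >= 1

# Variation: randomized "wild card" inputs layered on top.
rng = random.Random(42)  # seeded so any failure stays reproducible
for _ in range(100):
    n = rng.randint(1, 10_000)
    padding = " " * rng.randint(0, 5)
    assert parse_quantity(f"{padding}{n}{padding}") == n
```

The seed matters: randomized variation is only useful if a surprising failure can be replayed exactly.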
11. Mobile devices have complicated use cases
Device compatibility and real-world interactions are hard to cover with automated scripts. Things like dropping and rejoining Wi-Fi, simultaneously running other apps, device permissions, and receiving calls and texts can all wreak havoc on the performance of an app.
Changing swipe directions and the number of fingers used for tapping can also affect mobile apps. Clearly you need a manual tester to get a little touchy-feely if you want your app to have the minimum number of defects.
12. Manual testing goes beyond pass/fail
Pass/fail testing is super cool. We ask our network of testers to conduct use cases with set outcomes all the time. But for most projects, more complicated (and, yes, variable) scenarios are desirable.
Web forms are a prime example of this. While an automated script could easily input values into a web page, it can’t double-check that the values will be saved if the user navigates away and then comes back.
And what about the speed of submission? A human will definitely notice if a web form submits abnormally slowly while other websites are loading at top speed.
But speed isn’t something that fits neatly into pass/fail.
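You can see the mismatch in a tiny script. The binary check passes cleanly, while the timing can only be reported or squeezed under an arbitrary budget (the `submit_form` stub below just simulates latency; it stands in for a real submission):

```python
import time

def submit_form(form_data):
    """Stand-in for a real form submission; sleeps to simulate latency."""
    time.sleep(0.05)
    return {"status": "ok"}

start = time.monotonic()
result = submit_form({"email": "user@example.com"})
elapsed = time.monotonic() - start

# The pass/fail part: the submission succeeded.
assert result["status"] == "ok"

# The part that doesn't fit pass/fail: how slow is "too slow"?
# A human judges this in context; a script can only report a number
# or apply an arbitrary budget like the one below.
print(f"Form submitted in {elapsed * 1000:.0f} ms")
if elapsed > 2.0:
    print("Warning: submission noticeably slow; worth a human look")
```

The assertion is objective; the 2-second budget is a guess someone had to make, which is exactly the judgment call a manual tester handles naturally.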
13. Manual testers can quickly reproduce customer-caught errors
While you hope you catch all bugs before deploying, you also hope that your customers will kindly let you know of any errors.
Hotfixes are a must for cloud-based products. A manual tester can use the information a customer provides to reproduce the problem and file a bug report that will actually help the engineer.
The turnaround from customer issue to fix is way faster with manual testing. Like, way.
Yes, automation is awesome. But manual testing is above all a service — one that can’t be automated.