Dev teams face an incredibly high bar for software quality, consistent end-user experience, and tight release windows. While manual testing remains a cornerstone of quality, test automation lets enterprises scale testing to meet demands for coverage and capacity. Low-code test automation is a great place to start: it sidesteps today’s talent-pool challenges, increases coverage, and lets teams iterate quickly.
In this Q&A from our recent webinar, David Sawyer, Director of Client Services at Testlio, and Damon Paul, Quality Engineer at mabl, dive headfirst into strategies for leveraging low-code test automation to scale.
Editor’s note: The following has been edited for clarity and brevity.
How can I identify where to use low-code automation?
Sawyer: The best automation solutions come from a robust regression suite, so I always advise my clients to start with building that out. Try it out, and understand the core flows. Then, identify where the challenges or weaknesses are in your current product. Look at your metrics and understand those core or most repeated flows. Those become your first candidates for automation.
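The “look at your metrics” advice can be made concrete: count how often each flow appears in your usage data and take the most repeated ones as first automation candidates. A minimal sketch, assuming a hypothetical list of flow events pulled from usage logs (the flow names here are illustrative, not from the webinar):

```python
from collections import Counter

# Hypothetical usage log: each entry names the flow a user executed.
events = ["login", "checkout", "login", "search", "login", "checkout"]

def top_automation_candidates(events, n=2):
    """Return the n most repeated flows: prime candidates for automation."""
    return [flow for flow, _ in Counter(events).most_common(n)]

print(top_automation_candidates(events))  # ['login', 'checkout']
```

In practice the events would come from product analytics or session logs, but the principle is the same: the most repeated flows go to automation first.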
After that? If you’ve got ten bodies taking eight hours a day, or a single body taking two weeks, to deliver a regression suite, you’ve probably got a prime automation candidate. Do you need to run multiple permutations of the same test? That can be simpler when handled by automation. And if you need to execute the same test on multiple systems, devices, OSes, or browsers, you can run manual and automated QA in parallel and harness that combined power.
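That last point, running the same test across many device and browser permutations, is exactly what a test matrix captures. A minimal sketch in Python, where the browser and OS names are hypothetical placeholders:

```python
from itertools import product

# Hypothetical targets for one regression flow.
BROWSERS = ["chrome", "firefox", "safari"]
OSES = ["windows", "macos", "android"]

def build_test_matrix(browsers, oses):
    """Return every (browser, OS) permutation of the same test."""
    return list(product(browsers, oses))

matrix = build_test_matrix(BROWSERS, OSES)
print(len(matrix))  # 3 browsers x 3 OSes = 9 runs of one automated test
```

An automation tool can fan those nine runs out in parallel; asking a manual tester to repeat one flow nine times is where the hours pile up.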
Paul: Sometimes it will make sense to automate, and sometimes it won’t. When we find ourselves with new functionality and want to make sure it’s tested to the best of our ability, what questions should we ask to determine the best next steps?
- Does it require more than one tool to automate?
- Does it cross multiple systems? I fill out a form on my phone and then go to a desktop later. Or maybe I need to validate that someone’s received the results of that form on a computer. Crossing multiple systems can sometimes make it hard to automate because you need tooling that can handle those cross-system flows.
- Do I have control over test data? This is a huge one. Automation is typically very deterministic; it goes through the same steps and expects certain things to be in certain states. But if your data is more dynamic and harder to control, then a computer may not know what to look for.
- How complex is the functionality? Forms are moderately static, so that might be a great thing to leverage low-code test automation for. But if (depending on how you answer that form) it shows 37 different potential following pages, that’s a lot of complexity. Then, having someone go in and manually test might make sense.
- Will the functionality change soon? Or often? As a quality engineer, I find myself thinking about this over and over and over again. Are there parts of it that are always going to be the same? I can automate those parts and manually verify the other details. It probably doesn’t make sense to create a full regression suite for that new functionality when you’re still working on it.
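One way to read the checklist above is to treat each question as a signal and automate only when few red flags appear. Here’s a hedged sketch of that heuristic; the scoring and threshold are my own illustrative choices, not something prescribed in the webinar:

```python
def should_automate(needs_multiple_tools, crosses_systems,
                    controls_test_data, is_complex, changes_often):
    """Score a feature against the five questions above.

    Each red flag pushes toward manual testing; stable, data-controlled,
    single-tool flows are the strongest automation candidates.
    """
    red_flags = sum([
        needs_multiple_tools,
        crosses_systems,
        not controls_test_data,   # lack of data control is the red flag
        is_complex,
        changes_often,
    ])
    return red_flags <= 1  # illustrative threshold; tune per team

# A static form with controlled data, one tool, stable requirements:
print(should_automate(False, False, True, False, False))  # True

# A still-changing, complex, cross-system flow with uncontrolled data:
print(should_automate(True, True, False, True, True))  # False
```

The value isn’t the arithmetic; it’s forcing the team to answer each question explicitly before committing to automation.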
Hype aside, what are the real benefits of implementing low-code test automation?
Paul: Coverage. If you’re only manually testing, you have a set group of manual testers to validate new and existing functionality, and a fixed coverage scope. But if you’re trying to release quickly, you can’t keep that balance, because executing those [tests] takes a while. From personal experience: when I switched from manual QA to QA automation, I was writing Ruby Selenium tests. As I added automated tests, coverage increased, so the manual testers didn’t have to retest as much of the same thing. But a lot of my time was spent on maintenance: fixing old broken tests and updating the framework to work with new browser versions.
So why not high-code automated tests like Damon did?
Sawyer: There’s a growing population of highly educated but underemployed workers. We all work with many technically intelligent people, but their skills don’t necessarily align with scripting automation frameworks. Even when you’ve got developers in your own workforce, for instance, they don’t necessarily want to shift into QA development. The path to advancing into these areas without retraining is difficult, and very few of us are in the fortunate position of being able to hand in our day job, retrain, and learn a new discipline from scratch.
Paul: With low-code test automation, you take away a lot of that technical barrier and open test creation up to more individuals, which gives you more coverage because more people can create tests. It also takes less understanding of a framework to create the tests themselves, which makes things easier and faster, so you can add more coverage.
But at the end of the day, nothing beats having manual testers who can cover functionality that’s too complex or time-consuming to automate, on top of a baseline of automation coverage.
What are other challenges facing those who want to implement low-code test automation?
Sawyer: The fallacy of the zero-sum game. Even when automation is introduced to an SDLC or a quality pipeline, the end game isn’t that manual testing and support disappear. It’s not a one-or-the-other scenario. The ideal QA scenario is a well-architected, incremental automation solution with real people behind it: people who determine where, when, and how to automate.
Many people have stepped away from automation because they haven’t had the right resources, or haven’t put enough effort into standing up their solution, and it’s failed as a result. Process and governance, the process being the tools and governance being the people, work together in harmony to produce the best automation offering you can get.
Where do you still need manual QA staff while implementing automation?
Sawyer: Let’s talk about where manual support wraps around the automation framework in a fused testing model. The governance of manual assistance ensures your automated solution has all gaps covered, from dev through to production. We’re talking about backlog verification, localization, accessibility and WCAG 2.1 (and, later this year, 2.2) compliance, IoT testing, and mobile testing. Then, right into production, there’s release validation, payment testing, coupon validation, and sending testers to brick-and-mortar stores or into the field to ensure those products work in the market as intended.
Manual testers investigate common issues like: Can I use my QR code? Can I get into a stadium and find my seat? Does the app let me order concessions? And if not, what’s the feedback? Feedback takes the form of experience reviews and sentiment. Sentiment ties into your app store reviews and the ratings your company gets, and all of that is the outcome of quality. This is where we frame quality from a manual perspective: wrapped around that core discipline of your automated flow, from dev through to production.