In QA, test automation is seen as one of the biggest drivers of speed. It is critical to maintaining quality during fast release cycles: software tools run automated scripts that reduce repetitive tasks for testers and shorten the time it takes to produce quality testing results. The speed and reliability of automated testing have made it an essential DevOps practice that keeps software development processes agile and lean.

New techniques like predictive analytics and self-learning platforms are just getting started. And while new technologies allow teams to reduce the overall number of test cases rather than simply repeat them, many teams still struggle with the earliest capabilities of test automation: repetition and regression testing. This is mainly because writing scripts with longevity, and maintaining them over time, is hard. The sheer number of tools and approaches adds to the challenge; in fact, 50% of IT teams face challenges in applying test automation at appropriate levels.

Here are some essential steps and requirements for developing an automation testing strategy that will increase throughput and free up teams to focus on quality enhancements that drive revenue.

Choosing the right test cases to automate

Writing automated test scripts can be time-consuming. It's impossible to automate everything, so the key to getting maximum ROI from the time and money spent on automation is a test strategy that increases velocity in both the short and long term.

Keyword-based tests let QA engineers boost the ROI on their time. These tests are much faster to write because they run on keywords understood by the app and the automation software instead of relying on a complex scripting language. But because keyword-based tests must still be maintained and updated over time, you have to be just as smart about choosing which test cases to automate:

- Repetitive tests
- Tests that use large data sets
- High-risk tests
- Tests that must run across different browsers or environments

Overall, the test cases that should be automated will depend on the software and the team's capabilities. Still, the one constant is identifying the cases that will improve quality while freeing up time. Writing automated test cases for base functionality allows for a more thorough manual exploration of new features.

Running test automation throughout a sprint

Testing "early and often" is the most central tenet of QA in an agile team. The goal for quick releases in agile is for everyone to finish the sprint at the same time, with QA wrapping up alongside development. Of course, this isn't always possible, even in the leanest of teams. But by planning test automation strategically, it is possible to get close.

Before the sprint

High-level automated tests can be written using keywords that correlate with business requirements. That allows test writing to begin before the new functionality is developed, so long as each keyword corresponds to a known command. Keyword-based tests can be as simple as "login, upload a file, logout." They should be task-oriented rather than focused on implementation details.

During the sprint

As soon as new functionality is ready, unit testing comes into play. Unit testing simply means testing one unit (an even smaller breakdown than a function or feature) at a time, and it is a key method for extracting value from automation.
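To make "one unit at a time" concrete, here is a minimal pytest-style sketch. The validate_upload function, its allowed file types, and its size limit are hypothetical stand-ins for a single unit of new functionality; a real suite would import the unit from the application code rather than defining it alongside the tests.

```python
# A minimal pytest-style sketch of testing one unit at a time.
# validate_upload() is a hypothetical stand-in for a unit delivered during
# the sprint; in practice it would live in the application code.

ALLOWED_EXTENSIONS = {".pdf", ".png", ".csv"}
MAX_SIZE_BYTES = 10 * 1024 * 1024  # 10 MB limit, chosen only for illustration


def validate_upload(filename: str, size_bytes: int) -> bool:
    """Return True if the file type and size are acceptable."""
    extension = filename[filename.rfind("."):].lower() if "." in filename else ""
    return extension in ALLOWED_EXTENSIONS and size_bytes <= MAX_SIZE_BYTES


def test_accepts_supported_file_type():
    # One unit under test: file-type validation; no login or navigation steps.
    assert validate_upload("report.pdf", size_bytes=1_024)


def test_rejects_unsupported_file_type():
    assert not validate_upload("tool.exe", size_bytes=1_024)


def test_rejects_oversized_file():
    # A separate behavior of the same unit: the size limit, checked in isolation.
    assert not validate_upload("report.pdf", size_bytes=MAX_SIZE_BYTES + 1)
```

Each test checks exactly one behavior, so a failure points directly at the piece of functionality that broke.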
Detailed scripts can then be written one unit at a time, as the code becomes available from the development team.

After the sprint

In the event that testing does start to lag a sprint behind development, automated testing can help QA catch up. That is when an external solution can really come to the rescue. Having off-site QA engineers develop scripts for past sprints can bring in-house teams back onto the same sprint, and the importance of that can't be overstated. When teams are working on the same sprint, they speak the same language and can improve the product's testability as it is being built.

Developing automated tests that last

Automated testing can get messy. Tests that were once relevant can become useless, and individual scripts can have so many validation steps that the most critical results get buried. If choosing test cases is the "what" and developing and running scripts continuously is the "when," then creating test cases built to last is the "how."

"Success in automation is not as much a technical challenge as it is a test design challenge." – Hans Buwalda

Writing small test cases

Automated test cases should be small. With scripted manual testing, it's common to write dozens of steps to perform a single critical action. In fact, many manual scripts always start from the top, with logging in, and detail all the navigation steps before stating the core action to be tested. With automated scripts, however, it's necessary to break things down into sequences of steps and test those sequences individually, so navigation is tested separately from interaction or task completion. Underwriting rather than overwriting scripts protects them when the app inevitably changes. The smaller and more broken up individual test cases are, the less likely they are to be tossed out or rewritten; instead, it becomes easier to target the few test cases affected by a change to the app.

Writing test cases independent of UI

The second way to keep automated tests flexible is to avoid making them dependent on the UI. This is much easier with keyword-based tests than with a scripting language like JScript. Whenever possible, tests should be written in action terms supported by backend functions, in a domain-language approach, rather than referencing UI element names or pathways that may change as the creative process moves along. This is particularly important for agile teams writing scripts for the current sprint, because then an app change doesn't make a script unusable at some point in the future; it makes it unusable now.

Integrating automated scripts with manual testing

Today's testing environment must be burstable and scalable, enabling release candidates to move quickly from engineering to points of distribution. Automated software testing is only as revolutionary as it's designed to be, so any time spent strategizing automation efforts is time well spent. Ultimately, there are many situations where manual testing provides a quicker or more cost-effective way to execute test cases. So for a good automation testing strategy, the key is to identify test cases that will stay relevant over time and to write scripts in a way that protects them from inevitable change as much as possible.
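To tie the "built to last" guidance together before comparing automation with manual testing, here is a rough sketch of a small keyword-driven test written in domain actions rather than UI selectors. The keyword names, the FakeBackend stand-in, and the scenario are hypothetical illustrations, not a prescribed implementation.

```python
# A rough sketch of a keyword-driven test expressed in domain actions
# ("login", "upload_file", "logout") rather than UI selectors.
# FakeBackend is a hypothetical stand-in so the sketch runs on its own;
# a real suite would call the app's API or automation framework instead.


class FakeBackend:
    """Stand-in for backend-facing helpers behind each keyword."""

    def __init__(self):
        self.logged_in = False
        self.uploads = []

    def login(self, user, password):
        self.logged_in = True

    def upload(self, path):
        assert self.logged_in, "upload requires an active session"
        self.uploads.append(path)

    def logout(self):
        self.logged_in = False


# Each keyword is small and task-oriented: no element names or navigation
# paths, so a UI change touches the keyword implementation, not every test.
KEYWORDS = {
    "login": lambda backend, ctx: backend.login(ctx["user"], ctx["password"]),
    "upload_file": lambda backend, ctx: backend.upload(ctx["file_path"]),
    "logout": lambda backend, ctx: backend.logout(),
}


def run_keywords(steps, backend, ctx):
    """Execute a sequence of domain-level keywords against shared context."""
    for step in steps:
        KEYWORDS[step](backend, ctx)


def test_upload_happy_path():
    backend = FakeBackend()
    ctx = {"user": "qa_user", "password": "secret", "file_path": "report.pdf"}
    run_keywords(["login", "upload_file", "logout"], backend, ctx)
    assert backend.uploads == ["report.pdf"]
```

Because the test case references only the login, upload_file, and logout actions, a change to navigation or element names is absorbed once in the keyword implementations rather than in every test case that uses them.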
Where Automation Excels

- Unit tests and integration tests when the functionality under test is very stable
- Supporting DevOps with repeatable tests running in parallel to improve results velocity and give development teams fast feedback
- Repetitive and data-intensive tests
- Happy path testing using known inputs and a clearly defined expected output

Where Humans Excel

- UI and UX testing to assess the look and feel of an app
- Thinking of quality as a solution rather than acting like a robot, for example by taking the time to identify negative reviews in the app store and understanding user needs and voice well enough to build thoughtful test plans
- Judgment: if an automated test fails, it's usually up to a human to decide whether the test or the product code is at fault
- Testing combinations not anticipated in the automated test cases, as well as exploratory testing for higher-level assessment of complex business flows and real-life situations such as interruptions and display orientation changes

Testlio provides a scalable, reliable QA solution to enterprises, including test strategy and automation. To learn how we can free up your QA team to stay on sprint with automated testing, talk to one of our QA experts.