
How to Develop an Automated Testing Strategy

In QA, test automation is one of the biggest drivers of speed, and it is critical to maintaining quality during fast release cycles. Software tools run automated scripts that free testers from repetitive tasks and shorten the time it takes to produce quality testing results.

Here are the essential steps and requirements for developing an automated testing strategy that will increase throughput and free up teams to focus on quality enhancements that drive revenue.

Choosing the right test cases to automate

Because automated tests must be maintained and updated over time, you have to be smart about choosing which test cases to automate. Good candidates include:

  • Repetitive tests
  • Tests that run over large data sets (see the sketch after this list)
  • High-risk tests
  • Tests that must run across different browsers or environments
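
For example, tests over large data sets are natural automation candidates because a framework can replay the same small test across many inputs. Below is a minimal sketch using pytest's parametrize decorator; the apply_discount function and its expected values are hypothetical.

    import pytest

    # Hypothetical function under test: applies a 10% discount at 100+.
    def apply_discount(subtotal: float) -> float:
        rate = 0.10 if subtotal >= 100 else 0.0
        return round(subtotal * (1 - rate), 2)

    # One small test replayed across a table of inputs; the table can
    # grow to hundreds of rows without changing the test logic.
    @pytest.mark.parametrize("subtotal, expected", [
        (50.00, 50.00),     # below threshold: no discount
        (100.00, 90.00),    # at threshold: 10% off
        (250.00, 225.00),   # above threshold: 10% off
    ])
    def test_apply_discount(subtotal, expected):
        assert apply_discount(subtotal) == expected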

Running test automation throughout a sprint

Testing “early and often” is the central tenet of QA on an agile team.
The goal with quick releases in agile is for everyone to finish the sprint at the same time, with QA wrapping up alongside development. Of course, this isn’t always possible, even in the leanest of teams.

But by planning test automation strategically, it is possible to approach this goal.

Before the sprint

High-level automated tests can be written using keywords that correlate with business requirements. That allows testing to begin before new functionality is developed, so long as each keyword corresponds to a known command. Keyword-based tests can be as simple as “login, upload a file, logout.” They should be task-oriented, not focused on implementation details.
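
A minimal sketch of this keyword-driven idea in Python follows: each business keyword maps to a known command, so the test can be written before the underlying feature is finished. The three helper functions are hypothetical stubs.

    # Each keyword corresponds to a known command. The implementations
    # are hypothetical stubs, to be filled in as the feature is built.
    def login():
        print("logging in")

    def upload_file():
        print("uploading a file")

    def logout():
        print("logging out")

    KEYWORDS = {
        "login": login,
        "upload a file": upload_file,
        "logout": logout,
    }

    def run_test(steps):
        """Execute a task-oriented test written as business keywords."""
        for step in steps:
            KEYWORDS[step]()

    # The whole test reads like the business requirement itself.
    run_test(["login", "upload a file", "logout"])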

During the sprint

After the sprint

Developing automated tests that last

Automated testing can get messy.

Tests that were once relevant can become useless. Individual scripts can have too many validation steps, obscuring the most critical results. If choosing test cases is the “what” and developing and running scripts continuously is the “when,” then creating test cases built to last is the “how.”

“Success in automation is not as much a technical challenge as it is a test design challenge.” – Hans Buwalda

Writing small test cases

A manual tester can work through a long end-to-end flow in a single pass, but automated scripts work best when things are broken down into sequences of steps that are tested individually. Navigation, for example, would be tested separately from interaction or task completion. Under-writing scripts, rather than over-writing them, protects them against inevitable app changes. The smaller and more modular individual test cases are, the less likely they are to be tossed out or rewritten. Instead, it becomes easy to target the few test cases affected by a change to the app.
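
As a sketch of that split, the three tests below verify navigation, interaction, and task completion separately; a tiny fake page object stands in for the real app so the example runs on its own.

    # A fake page object stands in for the real app; each small test
    # exercises one sequence, so an app change invalidates only the
    # tests that touch the changed behavior.
    class UploadPage:
        def __init__(self):
            self.loaded = True
            self.selected_file = None

        def select_file(self, name):
            self.selected_file = name

        def submit(self):
            return "success" if self.selected_file else "error"

    def goto_upload():
        return UploadPage()  # navigation step, stubbed here

    def test_navigation():
        assert goto_upload().loaded               # navigation only

    def test_interaction():
        page = goto_upload()
        page.select_file("report.csv")            # interaction only
        assert page.selected_file == "report.csv"

    def test_task_completion():
        page = goto_upload()
        page.select_file("report.csv")
        assert page.submit() == "success"         # task completion only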

Writing test cases independent of UI

The second way to keep automated tests flexible is to avoid making them dependent on the UI. This is a lot easier with keyword-based tests than with a scripting language like JScript. Whenever possible, tests should be written in action terms supported by backend functions, a domain-language approach, rather than using the names of UI elements or pathways that may change as the creative process moves along.

This is particularly important for agile teams writing scripts for the current sprint: an app change then doesn’t make the script unusable at some point in the future; it makes it unusable now.
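
The contrast shows up clearly in code. In the sketch below, the test speaks the domain language through a single action function, so a UI change is absorbed in one place; the session object and its methods are hypothetical.

    # A fake session stands in for a real driver so the sketch runs.
    class Session:
        def navigate(self, page):
            self.page = page

        def choose(self, path):
            self.path = path

        def confirm(self):
            return self.page == "upload" and bool(self.path)

    # Brittle alternative: a raw UI pathway such as
    #   driver.find_element("xpath", "//nav/ul/li[3]/a").click()
    # breaks as soon as the layout moves.

    def upload_file(session, path):
        """Domain-language action: names the task, hides the pathway."""
        session.navigate("upload")
        session.choose(path)
        return session.confirm()

    def test_upload_file():
        assert upload_file(Session(), "report.csv")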

Integrating automated scripts with manual testing

Where automation excels:

  • Unit tests and integration tests when the functionality under test is very stable
  • Supporting DevOps with repeatable tests run in parallel to improve results velocity and give development teams fast feedback
  • Repetitive and data-intensive tests
  • Happy path testing using known inputs and a clearly defined expected output

Where humans excel:

  • UI and UX testing that assesses the look and feel of an app
  • Treating quality as a solution rather than acting like a robot, for example, taking the time to identify negative reviews in the app store and understanding user needs and voice well enough to build thoughtful test plans
  • Judgment: when an automated test fails, it usually takes a human to decide whether the test or the product code is at fault
  • Testing combinations not anticipated in the automated test cases, as well as exploratory testing for higher-level assessment of complex business flows and real-life situations such as interruptions and changes in display orientation