Exploratory testing fits into just about any testing strategy. The approach catches more bugs and makes for happier testers.

But it isn’t as easy to manage. Or at least, management isn’t built in.

We’ve covered the difference between scripted and exploratory testing (to sum that up real quick: exploratory testing means coming up with test cases on the fly) and we’ve debunked some common misconceptions about it.

As a quality assurance manager, you want to be flexible and allow your testers to be creative. But you also need structure to ensure coverage and great results.

Here’s how to manage exploratory efforts that strike the balance.

“Any testing approach is manageable when you choose to manage it.”
— Michael Bolton

1. Strategize coverage before the test cycle

Take a look at the app or site, strategize, break it down, and then assign the specifics to your team.

The biggest tip is of course to be proactive.

If you don’t assign specific tasks or charters ahead of time, here’s what could happen: You get back a bunch of issues, but you have no idea what wasn’t tested versus what was tested and had no bugs.

Or maybe you get back a bunch of notes, only to find overlaps and missed features.

Either way is bad news.

Instead, you have to give specific requirements when you assign projects. There are lots of ways to get started, but here are a few ways we handle this at Testlio:

  • Focus points
  • Task lists
  • Pass/fail tests

Focus points: Our test cycles last 24 hours and typically have specific focus points. If a new test cycle is devoted to video capture, then that’s the focus. Other examples are new login integrations or updated user settings. We put these focuses in the cycle details.

Task lists: Another way we guide testers is to give them test suites separated by areas or features. Within each test suite is a task list that covers everything within that function—think of it like a checklist of everything a user can do within that area of the product.

Pass/fail tests: No, tests with predefined steps and only pass/fail results are NOT exploratory. But even when using a primarily exploratory approach, it can be important to have specific, pre-written tests for some areas of an app, like the key functions. While some testers are going through these critical actions, others can cover possible combinations.
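To make the three formats concrete, a cycle assignment can be sketched as structured data. This is a hypothetical illustration — the field names are mine, not Testlio’s actual schema:

```python
# Hypothetical sketch of a 24-hour test cycle that combines focus points,
# task lists, and scripted pass/fail tests. Field names are illustrative,
# not any real platform's schema.
cycle = {
    "duration_hours": 24,
    "focus_points": ["video capture", "new login integration"],
    "test_suites": {
        # Task list: everything a user can do within this area.
        "User settings": [
            "change avatar",
            "update email address",
            "toggle notifications",
        ],
    },
    # Scripted pass/fail tests reserved for critical paths.
    "scripted_tests": [
        {"name": "checkout completes", "steps": ["add item", "pay"], "result": None},
    ],
}

def unfinished_scripted_tests(cycle):
    """Return the scripted tests that still need a pass/fail result."""
    return [t["name"] for t in cycle["scripted_tests"] if t["result"] is None]

print(unfinished_scripted_tests(cycle))  # ['checkout completes']
```

Keeping the assignment in one structure like this makes it easy to see, mid-cycle, which critical scripted checks are still open while the exploratory work continues.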

Each project dictates its own unique testing recipe.

If a client asks that we be 100% certain that one function works, we’ll write scripts for it, and then also allow for exploratory testing of its usage.

Some apps are so simple, we can do a fully exploratory approach with every feature housed in one test suite. Other apps need to be broken down into ten suites with 60 tasks each.

Looking for other ways QAMs can provide a framework? You can also consider giving testers:

  • A goal that they should accomplish
  • A user persona that they should pretend to be

2. Have clear requirements for documentation

Absolutely make sure your testers know how to submit bugs. We ask Testlions to include steps with resulting behavior, a screenshot or video, and a log (unless it’s a UI issue). We want to make it easy for engineers to reproduce errors.

Having clear submission formats means you don’t have to track your testers down for more info at the end of a cycle.
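A submission format like the one above can even be enforced mechanically before a ticket reaches engineers. Here’s a minimal, hypothetical validator — the field names mirror the list above but are made up for illustration:

```python
# Fields every bug report needs before submission (illustrative names).
REQUIRED = ["steps", "resulting_behavior", "screenshot_or_video"]

def missing_fields(report):
    """Return the fields a bug report is still missing.
    A log is also required unless the issue is purely a UI problem."""
    missing = [f for f in REQUIRED if not report.get(f)]
    if not report.get("log") and not report.get("is_ui_issue"):
        missing.append("log")
    return missing

# A UI issue with steps and a screenshot needs no log:
ui_report = {
    "steps": ["open settings", "tap avatar"],
    "resulting_behavior": "avatar overlaps the username",
    "screenshot_or_video": "overlap.png",
    "is_ui_issue": True,
}
print(missing_fields(ui_report))  # []
```

A check like this, run when the tester hits “submit,” is what saves you from chasing people down for a missing log two days after the cycle closed.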

Beyond issue tickets, you need some form of notes. This way, your testers can tell you everything they did.

Jon Bach’s Session-Based Test Management approach requires that testers submit a time log showing how much of their testing matches the charter, a session sheet, an oral report, a debrief, any data files, and the output logs for the entire session.
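For teams that do adopt something SBTM-like, the session record can be modeled very simply. This is a loose sketch, not Jon Bach’s exact template — the fields and names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class SessionSheet:
    """Loose sketch of an SBTM-style session record.
    Fields are illustrative, not the official SBTM session sheet format."""
    charter: str
    duration_minutes: int
    on_charter_minutes: int  # time on the charter vs. off-charter opportunity testing
    notes: list = field(default_factory=list)
    data_files: list = field(default_factory=list)

    def on_charter_pct(self):
        """Rough answer to 'how much of this session matched the charter?'"""
        return round(100 * self.on_charter_minutes / self.duration_minutes)

session = SessionSheet(
    charter="Explore video upload on slow networks",
    duration_minutes=90,
    on_charter_minutes=72,
)
print(session.on_charter_pct())  # 80
```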

For your project, that might or might not be overdoing it.

In our platform, Testlions can write a note for each specific task or test case. They also have a space to comment on their broader exploratory assignments.

Simply talking is another option. When you have a remote team, oral debriefs and reports likely won’t be your go-to, but if you’re located in a central office, you can have a ten-minute conversation with each tester right after their session, while everything is still fresh in their minds.

3. Review submitted notes at the end of the cycle

Now that the testing is done, you get to double-check what has been tested. Quickly verify that each tester explored the focus points or features they were assigned.

Then step back and compare the logs for coverage. If you find that a couple of combinations are missing, write up scripted tests and assign them in the next round.
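At its simplest, this coverage comparison is a set difference between what was assigned and what the logs show was actually touched. A minimal sketch, with hypothetical area names:

```python
def untested_areas(assigned, covered_in_logs):
    """Areas that were assigned but never appear in the session notes —
    candidates for scripted tests in the next cycle."""
    return sorted(set(assigned) - set(covered_in_logs))

# Hypothetical example: what the cycle assigned vs. what the logs mention.
assigned = ["login", "user settings", "video capture", "checkout"]
covered = ["login", "video capture", "checkout"]
print(untested_areas(assigned, covered))  # ['user settings']
```

The output is exactly the list of scripted tests you write up for the next round.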

The best coverage comes from multiple types of testing. You likely missed something in your preliminary strategy session. So just catch it now and make changes to your plan.

This is also your chance to see where the most problematic areas lie. Every site or application has its trouble spots. Clusters of overlapping bugs and issues deserve a note in your final report.

At Testlio, we make this really easy to visualize. Each new issue requires an affected-feature label, which gets added to the list of issues and to the mindmap we create in the Testlio platform.

We make a mindmap for every single project, so that both areas with no issues and areas with lots of issues quickly stand out.

4. Maintain open communication between testers and engineers

Excellent communication skills are a necessity for great testers.

Once they’ve clearly reported all issues, bugs, and session notes, the communication doesn’t stop there. Encourage them to respond to comments from the engineers.

The back-and-forth dialogue will help them make adjustments to future testing cycles or projects. They’ll learn what irks the engineers, gain new insights into the project, and clarify the lingo of the app’s features.

This is a particularly important step for exploratory testing because it’s so fluid. You want your team to continue to adapt.

Dayana is a QA engineer turned technology writer living in Milan, Italy. She's always down for a smoothie.