How to create a good test plan

Software testing is the process of evaluating and verifying a software program or application, with the intent of preventing defects, reducing development costs, improving performance, and delivering a superior product. To ensure adequate product coverage and keep testing in line with project objectives, it is imperative to have a sound test plan.

The blueprint of a good test plan

Test planning should be strategic and to the point. A well-defined test plan starts by understanding what to test and how much testing time is available. It then zeros in on the areas that guide the testing activity, such as:

- The scope of testing: what to include and what to exclude
- Determining the platform(s) and devices to run tests on
- Writing test cases
- Allocating resources for testing
- Creating test data
- Logging and validating bugs
- Delivering feedback

Scope of testing

The scope of a test plan forms the basis for every decision made about testing, so defining a clear testing scope is crucial: it sets the goals and expectations for everyone involved in the software testing project. For a build delivery, feature release, or increment under test, the scope includes the functionality targeted for delivery, the platforms to test on, and the areas of the product impacted by development. Scope in terms of testing platforms can be extended to device types (e.g., phones but not tablets) and minimum OS versions.

At this stage, the items in the testing scope can be prioritized based on specific testing needs, timeframe, and budget. As a good practice, development and testing teams should review the testing scope together to uncover potentially overlooked details and eliminate unnecessary tests.

Platforms for testing

After establishing the scope, determining the testing platform is the next important step.
The platform for testing is often defined by the product itself: mobile apps, for example, are tested on mobile devices running a mobile operating system. For each operating system, app usage data informs which version(s) should be included in tests. For web apps, it is important to determine which browser and desktop operating system combinations must be covered. For example, Chrome is the most popular web browser today, especially on Windows, while macOS users generally lean towards Safari. So at the very least, test cases must run on Windows 10/Chrome and macOS/Safari. A cross-browser test between Chrome and Safari, as well as a smoke pass of any newly developed UI on Chrome in macOS, can provide even greater confidence in the overall user experience.

Testing devices

Good device coverage is critical to maintaining quality. Once the test platforms have been defined, a variety of devices that support each platform must be included in the test run. As a general rule, the devices in a test run should include:

- A variety of user configurations
- The most popular devices in use (e.g., flagship phones)
- Devices that run on different types of networks
- Devices with different hardware capabilities

Ideally, a test plan should curate a list of popular devices, running the latest version of their respective operating systems, across different types of networks.

Resource allocation

Product knowledge is key to quality testing. Testers with strong domain and product knowledge can quickly grasp the software's inner workings and uncover potential issues. For new features, particularly big ones, assigning dedicated expert testers can help product and development teams release faster with confidence. These testers understand how one feature can impact the rest of the product, making them an excellent fit for exploratory and functional testing of new functionality.
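The browser and platform coverage described earlier can be captured as a small test matrix. Below is a minimal sketch in Python; the platform names and the `is_valid` helper are illustrative assumptions, not part of any specific tool, and a real plan would feed such a matrix into its test runner or device lab.

```python
# Sketch of a browser/OS coverage matrix, assuming the baseline from the
# plan: Windows 10/Chrome and macOS/Safari at minimum. Names are illustrative.
from itertools import product

desktop_os = ["Windows 10", "macOS"]
browsers = ["Chrome", "Safari", "Firefox"]

def is_valid(os_name: str, browser: str) -> bool:
    # Safari only ships on macOS, so exclude impossible combinations.
    return not (browser == "Safari" and os_name != "macOS")

matrix = [(o, b) for o, b in product(desktop_os, browsers) if is_valid(o, b)]

# Sanity-check that the minimum coverage required by the plan is present.
assert ("Windows 10", "Chrome") in matrix
assert ("macOS", "Safari") in matrix
```

Generating the matrix from data, rather than hand-listing combinations, makes it easy to widen or narrow coverage as the scope changes.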
Furthermore, since time is an important factor when allocating resources, an experienced test team is more likely to resolve blockers quickly and deliver on pressing product releases.

Writing test cases

Writing test cases is arguably the most time-consuming part of creating a test plan. It is also the most vital piece when it comes to executing a successful test run. Test cases provide a clear, defined path for testers and specify the acceptance criteria for passing a test.

Each feature in the scope of testing must first be broken down into test cases. Multiple test cases can be executed at once, but the test plan should have a separate test case, with its own acceptance criteria, for each individual component. Test cases from older test runs can be recycled and reused; however, if you plan to reuse test cases, review and update them to match the changes made to the product over time. Reusing a test case is not poor practice, but keeping existing test cases up to date is what makes their reuse valuable.

Creating test data

Testers can create test data while they are running tests; however, this cuts into the time available for the test run, so creating test data before the run starts is ideal. In some cases, the test data needed will require special access from a client, i.e., testers cannot create it themselves. When too much test data is missing from the test plan, it can block testers from running test cases or even end a test run prematurely. To prevent this:

- Validate the test data before a test run begins, and make sure everything is accessible by all testers.
- Ensure the test data matches the requirements of the test cases.
- Clearly mark which data set is meant for which test case, and indicate which information can be reused or recycled across test cases.
- Determine the environment for testing and communicate it to the testers. Each tester should confirm they have correctly set up the test environment before testing begins.
- Include tools and guides for setting up the test environment in the test plan.

Logging bugs

The test plan must include all the information needed for logging issues. Testers log bugs and developers fix them. To make this relationship effective, a bug report must contain all the relevant information that helps engineering teams reproduce bugs and understand their underlying cause. The platform for logging bugs should be set up before testing begins, all testers must have access to it, and they should have a thorough understanding of how it works, i.e., how to attach logs, add screencasts and images, attach version numbers, add reproduction steps, and so on.

Validation of bugs

Bug validation, as a testing activity, should be part of the test plan. It does not take as long as a full test run, since it focuses on very specific cases (logged bugs), but it should still be planned. When a bug is logged for a specific device, checking whether it is an actual bug and whether it recurs on similar devices helps prioritize issues.

Delivering meaningful feedback

The end of a successful test run should include quantitative and qualitative feedback. Testers, especially those with broad product knowledge, can provide an additional layer of valuable insights and suggestions for improvement: for example, a feature that is particularly unstable and breaks other components of the product, a feature that is unintuitive in its functionality or design, or redundancy in the flow of a feature.

Testlio's testing management platform and global network of testers provide a holistic testing environment that combines solutions and services to meet variable project needs. Contact us today for strategic test planning that delivers quality results.
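The bug-logging guidance above can be enforced mechanically: before a report is filed, check that it carries everything engineers need to reproduce the issue. The sketch below is a hypothetical Python illustration; the field names and the `missing_fields` helper are assumptions for the example, not part of any particular bug-tracking platform.

```python
# Hypothetical completeness check for a bug report, based on the fields the
# plan calls for (reproduction steps, logs, version number, device info).
REQUIRED_FIELDS = {"title", "reproduction_steps", "app_version", "device", "logs"}

def missing_fields(report: dict) -> set:
    """Return the required fields that are absent or empty in a bug report."""
    return {f for f in REQUIRED_FIELDS if not report.get(f)}

report = {
    "title": "Crash when saving profile",
    "reproduction_steps": ["Open profile", "Tap Save twice"],
    "app_version": "2.4.1",
    "device": "Pixel 7 / Android 14",
    "logs": "",  # empty: should be flagged before the bug is filed
}
print(missing_fields(report))  # prints {'logs'}
```

A check like this, run as a pre-submit hook on the logging platform, keeps the tester-to-developer handoff effective by rejecting reports that would be unreproducible.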