Though often misquoted, we all understand the thought behind this common paraphrase of philosopher George Santayana's famous observation: "those who fail to learn from history are doomed to repeat it." While not typically associated with quality assurance, the quote is apropos here, because understanding QA testing's history and evolution can help software teams adapt to continually changing methodologies rather than staying entrenched in old ways of testing.
A quick history of Quality Assurance
Product testing is an essential task in software development, verifying that a product meets its performance objectives before it reaches users' hands. But the idea of ensuring a minimum level of quality dates back further than the modern software testing we think of today.
- 1950s: Alan Turing proposes the Imitation Game, later known as the Turing Test, to evaluate whether a machine can exhibit behavior indistinguishable from a human's.
- 1970s: Fred Brooks publishes The Mythical Man-Month, arguing that programming goes more smoothly when testing is done before products are released to the public.
- 1980s: Programmers faced increasing pressure to release software that worked on all computers advertised as PC-compatible.
- 1990s: The Linux community shows that complex software can be developed in the open, releasing code publicly and relying on users to help find defects.
- 2000s: Software companies and open-source projects (IBM and Linux among them) verify that releases work under different hardware and software configurations.
The modern evolution of software QA testing
Waterfall software testing
The waterfall development cycle is a linear, sequential life cycle model that divides software development into separate phases. Testing begins only after development is complete, and there is no room for changing requirements along the way.
Agile software testing
In Agile, testing is a part of the development lifecycle from the beginning to ensure that quality is built in from the start. An Agile tester is a part of the development team, looped into all development planning and activities in close coordination with the product owner.
DevOps software testing
A DevOps environment is one of constant communication and iterative improvement. In this cyclical development model, automation is critical for executing test cases as new code is created and deployed. Aligning testers with both development and operations is fundamental.
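As a minimal sketch of what that automation looks like in practice (the `apply_discount` function and its checks are invented purely for illustration), an automated test that a DevOps pipeline might run on every code change could look like this:

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical production function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # Regression checks a CI job would execute automatically on each commit,
    # catching defects long before the code reaches users.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(19.99, 0) == 19.99

test_apply_discount()
print("automated checks passed")
```

In a real team, a test runner such as pytest would discover and execute checks like these automatically, so no human has to remember to run them.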
Continuous Integration (CI) and Continuous Delivery (CD)
The CI/CD pipeline implements a set of operating principles and practices that enable application development teams to deliver code changes more frequently and reliably. Continuous integration and continuous delivery require continuous testing, because the whole point of the pipeline is to put quality applications in front of users.
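A CI/CD pipeline can be thought of as an ordered series of gates, where a failing test stage stops the run before anything is delivered. The toy sketch below (stage names and checks are purely illustrative, not any particular CI product's API) shows that control flow:

```python
# Toy model of a CI/CD gate: stages run in order, and the pipeline
# never reaches "deploy" unless every earlier stage succeeds.
def build() -> bool:
    return True  # stand-in for compiling/packaging the application

def run_tests() -> bool:
    return 2 + 2 == 4  # stand-in for the continuous test suite

def run_pipeline() -> str:
    for name, stage in [("build", build), ("test", run_tests)]:
        if not stage():
            return f"pipeline stopped at {name} stage"
    return "deployed"

print(run_pipeline())
```

Because the test stage sits between every commit and deployment, broken changes are caught continuously instead of piling up before a release.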
Cloud-based performance testing
Outsourcing grew out of the drive to meet business objectives more efficiently. A third-party testing team answers the call of companies seeking to improve their applications' quality, reduce business overhead, and improve on current testing processes.
Meant to replace in-house teams, outsourcing QA testing to offshore firms has its own set of pitfalls, including a lack of control over the process and the outcome. Simply put, traditional outsourcing organizations lack the deep product knowledge needed to yield the best results. In a one-off project, outsourced teams learn just enough about the system to fulfill the contract requirements, then move on. That is simply the nature of the business model, and it should be weighed when deciding to outsource. It has led many companies to look for ways to manage testing more directly and stay hands-on in the testing phase.
With the advent of crowd testing, mobile apps are put directly into the hands of the testers. The crowd QA model harnesses the power of testing communities to find the right demographics needed to test a product.
Crowdsource testing also has its own set of challenges. There are obvious security concerns whenever you give a group access to client systems, but the more significant issue is the one-off feel of the work being done. These testers will not have an in-depth knowledge of the product they’re testing, nor will they have a strong understanding of how that product is used in the wild.
While it remains a decent lower-cost alternative, low-skilled testers combined with incentives that reward non-essential bug finds create extra work and wasted resources, lowering the cost-benefit ratio of working with crowdtesting companies.
The future is networked testing
Learning from the pros and cons of the in-house, outsourced, and crowdsourced methods, networked testing is the next evolution of these best practices. This approach integrates seamlessly into current business practices, with the agility to either become a company's QA team or extend its team's resources.
Networked testing connects development routines with distributed testers through efficient and sustainable processes, ensuring businesses keep up with continuous deployment and discover the real issues that affect users.
But what happens when the pressure is on, and a new release is due? Well, this is where networked testing really shines.
Leveraging the distributed approach of on-demand, remote teams gives companies the flexibility and coverage they need, only when they need it. These real-time swarm teams are dedicated to building and releasing great products at speed, all while reducing costs.
And it doesn't matter what types of testing are required. Expert, remote testers perform functional, usability, localization, and other forms of manual software testing. They also complement automated testing by filling in its gaps, delivering seamless, full-coverage testing.
The evolution of software testing has allowed testing environments to scale as needed, making it possible to execute testing throughout the development lifecycle. Regardless of what advancements come next, one thing is guaranteed to remain the same: companies will need to gain efficiencies to release great products, faster. And Testlio will continue adding to the value proposition by offering unparalleled testing software, services, and testers.