Though often misquoted, we all understand the thought behind this common paraphrase of philosopher George Santayana's famous line: "those who fail to learn from history are doomed to repeat it." While not typically associated with quality assurance, the quote is apropos here, because understanding the history and evolution of QA testing can help software teams adapt to continually changing methodologies rather than staying entrenched in old ways of testing.

A quick history of Quality Assurance

Product testing is an essential task in software development: it verifies that a product meets its performance objectives before it is placed in users' hands. But the idea of ensuring a minimum level of quality dates back further than the modern software testing we think of today.

  • 1950s: Alan Turing proposes the Imitation Game, later known as the Turing Test, to assess a machine's ability to exhibit behavior indistinguishable from a human's.
  • 1970s: Fred Brooks publishes The Mythical Man-Month, arguing that software projects are far more manageable when testing happens before products are released to the public.
  • 1980s: Programmers faced increasing pressure to release software that worked on all computers advertised as PC-compatible.
  • 1990s: The Linux community shows that complex software can be developed in the open, releasing code to the public and asking users to help find defects.
  • 2000s: Software companies like IBM, and open-source projects like Linux, verify that releases work under many different hardware and software configurations.

The modern evolution of software QA testing


Waterfall software testing

The waterfall development cycle is a linear-sequential life cycle model in which the software development process is divided into separate phases. Testing begins only after development is complete, and there is no room for changing the requirements along the way.

Agile software testing

Agile turns testing into a continuous activity: testers work alongside developers within short iterations, so defects are caught as features are built rather than at the end of the project.


Continuous Integration (CI) and Continuous Delivery (CD)

CI/CD pipelines automate building and testing on every code change, so regressions surface quickly and releases can ship continuously rather than in large batches.

Cloud-based performance testing

Cloud infrastructure lets teams generate realistic load on demand, making large-scale performance testing practical without maintaining dedicated hardware.


Outsourced QA testing

Outsourcing was spurred by the goal of achieving better business outcomes. A third-party testing team answers the call of companies seeking to improve application quality, reduce overhead, and improve on their current testing processes.

Meant to replace in-house teams, outsourcing QA testing to offshore firms has its own pitfalls, including a lack of control over the process and the outcome. Simply put, traditional outsourcing organizations lack the deep product knowledge needed to produce the best results. In a one-off project, an outsourced team learns just enough about the system to fulfill the contract requirements and then moves on. That is simply the nature of the business model, and it should be weighed when deciding to outsource. It has led many companies to look for ways to manage testing more closely and stay hands-on during the testing phase.


Crowdtesting

With the advent of crowdtesting, mobile apps are put directly into the hands of testers. The crowdtesting QA model harnesses the power of testing communities to reach the right demographics for a given product.

While crowdtesting remains a decent lower-cost alternative, low-skilled testers combined with incentives that reward non-essential bug finds create extra work and wasted resources, lowering the cost-benefit ratio of working with crowdtesting companies.

The future is networked testing

Networked testing connects development routines with distributed testers through efficient, sustainable processes, ensuring businesses keep pace with continuous deployment and discover the real issues affecting users.

But what happens when the pressure is on, and a new release is due? Well, this is where networked testing really shines. 

