Two years ago, I began a contract position with one of the world’s biggest banks.

The bank has an old application with three million lines of legacy code, which must be kept in use because it is still profitable. The delivery cycle took five months, and the application required additional product support work and consumed a great deal of resources.

Before I joined, testing was done only manually, based on pre-written test cases. The testing team consisted of three people who were not technical enough to automate workflows, but could still conduct quality assurance with excellent coverage and bring value to the project.

At the end of 2015, and especially after Brexit, banks decided to rein in costs. Development teams stayed the same size but had to take on more work. Of course, quality declined.

The best solution was to switch to an agile development approach, but this required rebuilding the whole infrastructure: continuous delivery had to be enabled, an automation framework introduced, and quality checked by everyone at various steps, both manually and automatically.

Behavior-driven testing needed to be introduced, and it was essential to stop running regression tests manually each time.

Let’s consider a simple BDD test for an exchange trading application and try to describe all the functionality a test automation framework requires:

Given: A tradable stock option with a negotiable price
When: The stock option value is sold out
Then: The stock option is removed from the exchange

  1. Given step: find the appropriate stock option in the DB, then search for it in the GUI by its ID, which means we need to drive the GUI. The GUI is not web based, so we cannot simply open it and use Selenium; we need to build a robot that can manipulate our desktop GUI.
  2. When step: we need to be able to send Sell and Buy messages to the RESTful API, so we need some kind of automated HTTP client.
  3. Then step: when an option is removed from the exchange, the application sends an HTTP request to the exchange, so we need to attach a listener to the API and be able to pull this message from the log. The “Then” step is our expected result, so we make assertions here; at the same time, we can add assertions to every step to be sure that our test cases execute everything correctly.
  4. Each framework needs a runner to start testing (invoked from the continuous integration system) and to build an execution report that gives feedback to stakeholders.
  5. The testing framework needs a detailed logging system, so we can work out what went wrong when something fails.
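The When step above can be sketched with a plain HTTP client. The sketch below uses Java 11’s built-in `java.net.http.HttpClient`; the endpoint URL and JSON payload are hypothetical stand-ins, since the real API is internal to the bank:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.util.Locale;

public class SellMessageClient {

    // Hypothetical endpoint; the real API is internal to the bank.
    private static final String ORDERS_URL = "https://exchange.example.com/api/orders";

    // Build a POST request carrying a Sell message for the given option.
    public static HttpRequest buildSellRequest(String optionId, double price) {
        String json = String.format(Locale.ROOT,
                "{\"type\":\"SELL\",\"optionId\":\"%s\",\"price\":%.2f}", optionId, price);
        return HttpRequest.newBuilder(URI.create(ORDERS_URL))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest request = buildSellRequest("OPT-123", 101.50);
        System.out.println(request.method() + " " + request.uri());
        // Actually sending is left out, as the endpoint above is hypothetical:
        // HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
    }
}
```

In the real framework, the response to this request would be checked by the assertions in the Then step.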

Let’s see what the framework looks like:

Everything starts from the feature file, where test engineers store their scenarios.
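For the scenario above, a feature file might look like this (the wording is illustrative):

```gherkin
Feature: Stock option lifecycle on the exchange

  Scenario: Sold-out option is removed from the exchange
    Given a tradable stock option with a negotiable price
    When the stock option value is sold out
    Then the stock option is removed from the exchange
```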

Automation engineers prepare step definitions for the scenarios in Java. Java was selected because the application is written in Java; however, any language could be used.
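A step definition class for that scenario could look roughly like this. It is a sketch assuming a recent Cucumber version with Hamcrest; the helper objects (`Database`, `GuiRobot`, `TradingApi`) are hypothetical names standing in for the framework’s real core libraries:

```java
import io.cucumber.java.en.Given;
import io.cucumber.java.en.When;
import io.cucumber.java.en.Then;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;

public class StockOptionSteps {

    // Hypothetical helpers standing in for the framework's core libraries.
    private final Database db = new Database();
    private final GuiRobot gui = new GuiRobot();
    private final TradingApi api = new TradingApi();

    private String optionId;

    @Given("a tradable stock option with a negotiable price")
    public void findTradableOption() {
        optionId = db.findTradableOptionId();               // JDBC lookup
        gui.searchById(optionId);                           // drive the desktop GUI
        assertThat(gui.isOptionVisible(optionId), is(true)); // assert on every step
    }

    @When("the stock option value is sold out")
    public void sellOutOption() {
        api.sendSellMessage(optionId);                      // HTTP client call
    }

    @Then("the stock option is removed from the exchange")
    public void optionIsRemoved() {
        assertThat(api.removalRequestLogged(optionId), is(true)); // log creeper check
    }
}
```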

Cucumber ties all this together and runs the tests (it offers a few ways to be run). Cucumber also prepares the execution report for us.
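One common way to run Cucumber is through a JUnit runner class. A minimal sketch, in which the paths, glue package, and report location are illustrative:

```java
import org.junit.runner.RunWith;
import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;

// Entry point the CI system invokes; Cucumber discovers the feature files
// and step definitions, runs them, and produces the execution report.
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features",   // where the feature files live
        glue = "com.example.steps",                 // package with step definitions
        plugin = {"pretty", "html:target/cucumber-report"}
)
public class RunCucumberTest {
}
```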

The step definition methods use core libraries for testing actions.

JDBC communicates with the DB. The GUI robot, which we wrote ourselves, lets us manipulate the GUI. The HTTP client handles communication with RESTful web services. The log creeper, also written by us, connects to the remote server and finds the required information in the server logs. The SLF4J library is used for logging, and Hamcrest provides rich assertion functionality.
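The log creeper’s core job, finding the expected message in server logs, boils down to pattern matching over log lines. A minimal, stdlib-only sketch of that matching step (the real tool also handles the remote connection, and the log lines below are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class LogCreeper {

    // Return every log line matching the given regular expression.
    public static List<String> findInLog(List<String> logLines, String regex) {
        Pattern pattern = Pattern.compile(regex);
        List<String> matches = new ArrayList<>();
        for (String line : logLines) {
            if (pattern.matcher(line).find()) {
                matches.add(line);
            }
        }
        return matches;
    }

    public static void main(String[] args) {
        List<String> log = List.of(
                "2017-03-01 10:00:01 INFO  order accepted id=OPT-123",
                "2017-03-01 10:00:02 INFO  option removed from exchange id=OPT-123",
                "2017-03-01 10:00:03 WARN  heartbeat delayed");
        // The Then step asserts that the removal message appeared in the log.
        System.out.println(findInLog(log, "option removed from exchange id=OPT-123"));
    }
}
```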

Let’s see how this is built into the continuous delivery infrastructure and how the whole system works together.

Stakeholders such as QAs, managers, and testers add user stories and test cases to the test management system in feature-file format. Based on these, developers build new features, and automation QA covers them with step definitions in code. At the end of the day, they commit their changes to version control.

TeamCity pulls these changes and builds the project using Maven. JUnit runs the unit tests.
After the build is done, Sonar returns the results of static analysis: whether the code meets development standards and whether unit test coverage meets our expectations.

If everything is correct, Cucumber runs the integration and end-to-end tests and publishes the report to the test management system. We use nightly builds and have found them good enough for our needs; however, builds can also be triggered on each commit or manually. If the build fails at any step, the system immediately notifies the team and reverts to the previous successful build.

This approach saves our resources immensely while decreasing the time it takes to receive feedback about the level of quality.

This way, we are able to release every month with small but stable versions. The QA team still consists of three people, who have strengthened their abilities and are now able to cover testing on seven different applications.


This post by Sasha Lykhodinov is part of a new series written by our community of testers.

Want to join our community of expert testers? Drop us a line: