A Continuous Delivery of Business Value
20 May 2011
The goal of this article is to discuss how improving the automated testing aspects of a continuous delivery project led to dramatic improvements in quality and delivered real business value to a leading bank in Melbourne, Australia.
It will cover how the automated testing was integrated into the continuous delivery process to support Scrum, to empower testers and to shorten testing cycles.
This article is targeted at any CIO or Project Lead/Manager who wants to improve quality and gain maximum business value from their technical teams.
This article will discuss the requirements of the project and the problems with the existing testing processes before detailing the solution in terms of:
- integrating the automated testing into the continuous delivery process
- the custom development of a summary report
- supporting multiple testing targets
- supporting on-demand and scheduled jobs
Finally, the benefits achieved by these improvements are discussed to highlight the business value ultimately provided to the client.
Project Brief
The project was a combination of integration and migration work, where market data in an existing system was to be migrated across into a new third party system.
This was a flagship agile project run using Scrum, one of very few agile projects within the bank.
All builds and deployments were performed via an automated process which was run via Jenkins from a dedicated build server.
Early in the project it was clear that a solid regression test suite would be essential so that each iteration could focus on the data being migrated in that iteration.
Testing tasks for an iteration included updating the regression suite so that it covered all previously migrated data, allowing the testing effort within the iteration to focus on the newly migrated rates.
JMeter was selected as the automated testing tool for the following reasons:
- A Java API was available for the 3rd party software
- The JMeter GUI allowed non-technical testers to configure tests
- Easy to create data-driven tests from CSV files (see the example below)
- Flexibility to reuse functional tests for performance testing
- Proven open source tool
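As an illustration of the CSV-driven approach (the columns below are invented for the example, not the project's actual data), a test plan can contain a JMeter CSV Data Set Config element that reads a file such as:

    instrument,currency,expected_rate
    FX_FORWARD,AUD,0.9215
    SWAP_3M,USD,1.0450

Each row is exposed to the samplers and assertions as variables such as ${instrument} and ${expected_rate}, so extending regression coverage is largely a matter of adding rows rather than editing test plans.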
Some customisations were required to enable JMeter to authenticate against and communicate with the 3rd party software. Once the customisations were complete, testers could download the customised version of JMeter and run the automated tests from their machines.
As part of each iteration, tests were developed by BAs and developers working with testers, executed by testers from their local machines, and the results updated in the Quality Centre defect management tool.
The Problem
This was seen to be a good setup. Builds, deployments and testing were all automated and were being actively maintained across iterations.
However, the testing cycles were far too slow.
Testing tasks were spilling across iterations, the team was becoming frustrated at missing promised deliverables, and testing became rushed. As a result more defects were missed and quality went down. This was particularly evident during the UAT process for each release, when customers were quickly finding issues.
Not all of these were caught, and production issues were becoming more frequent. Confidence, both internally and externally, was being seriously affected and something had to be done.
Diagnosis
The problem at this point was that there was still a separation in the continuous delivery process.
Only the build and deployment aspects had been integrated into the build server; the JMeter tests were still being run locally on the testers’ desktops.
The problems with this were as follows:
- Automated tests had to be run manually!
- Only one test was being run at a time
- Tests had to be run on the testers’ PCs, which limited their ability to do other tasks
- Results had to be saved and manually archived
- Versioning of tests was not adequate
This all added up to lengthy test cycles and an extended lead time in getting a tester to an issue.
Solution Overview
To address these problems, JMeter needed to be integrated into the Jenkins continuous build server. This would allow automated tests to be scheduled and run in parallel, and would provide automatic archiving of test results, which could then be referenced from Quality Centre.
This made it much easier to run the full regression suite in an overnight job. However, it introduced another problem: the volume of test results was proving difficult to navigate. Testers were trawling through a long list of passed tests to get to the failures before diagnosis could even begin.
A simple custom summary report was developed which allowed for quick and easy navigation of a large number of test results. As part of this process, failure reports were generated from the test results so that a tester could be brought to an issue as quickly as possible.
Finally, support was added to allow tests to be developed and run against user story, release or production targets.
Solution Details
In this section the solution is discussed at a lower level by explaining the architecture used, the algorithm employed and how the testers’ processes were improved.
To integrate the automated JMeter tests with Jenkins, the final solution architecture involved a Source Code Management (SCM) tool, a bash script and Ant.
This architecture is shown below, highlighting the use of the build server to co-ordinate the activities. Jenkins runs on the build server and checks out the source code to compile, build and package up a deployable software package. This package is then deployed remotely to the target machine. The JMeter tests are run on the build server, making calls over the network to the 3rd party application.
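In outline, the pieces described above interact roughly like this (a simplified textual sketch, not the original diagram):

    Jenkins (build server)
      |- checks out the source and test repository from SCM
      |- compiles, packages and remotely deploys the application to the target machine
      '- runs bin/runAutomatedTests.sh
           '- Ant targets launch JMeter, which calls the 3rd party application
              over the network and generates the HTML reports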
The solution algorithm at a high level is as follows (a bash sketch of the flow appears after the list).
- Jenkins job is run prompting the user for test options
- Directory structure is checked out/updated from the SCM repository
- Any temporary files from prior jobs are deleted
- A summary report XML file is opened
- For each test to be run:
  - Build the CSV path to the CSV target using the input parameters
  - Run the JMeter test
  - Generate the standard and failure HTML reports
  - Add an entry to the summary report XML file
- Close the summary report XML file
- Generate summary report HTML file
- Return success or fail to Jenkins
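A minimal bash sketch of this flow is shown below. It is illustrative rather than the project's actual script: the parameter names and Ant target names are assumptions, and it presumes jmeter and ant are on the PATH with the repository already checked out by Jenkins.

    #!/bin/bash
    # runAutomatedTests.sh -- a minimal sketch of the orchestration script.
    # Expects TARGET_HOST, TEST_TARGET and TEST_LIST in the environment
    # (Jenkins exposes job parameters as environment variables).
    set -e

    # Delete any temporary files left behind by prior jobs
    rm -f results/*.jtl results/*.html

    # Open the summary report XML file
    echo '<summary>' > results/summary.xml

    for TEST in $TEST_LIST; do
        # Build up the path to the CSV target from the input parameters
        CSV="${TEST_TARGET}/${TEST}.csv"

        # Run the JMeter test in non-GUI mode, passing parameters as properties
        jmeter -n -t "${TEST}.jmx" -Jhost="$TARGET_HOST" -JcsvPath="$CSV" \
               -l "results/${TEST}.jtl"

        # Generate the standard and failure HTML reports (an Ant target
        # wrapping the XSL transforms; the target name is illustrative)
        ant report -Dtest.name="$TEST"

        # Add an entry for this test to the summary report XML file
        echo "  <test name=\"${TEST}\" report=\"${TEST}.html\"/>" >> results/summary.xml
    done

    # Close the summary XML file and render the summary HTML page from it
    echo '</summary>' >> results/summary.xml
    ant summary-report

    # Return success or failure to Jenkins: JMeter's XML result logs record
    # assertion failures as <failure>true</failure> elements
    if grep -q '<failure>true</failure>' results/*.jtl; then
        exit 1
    fi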
The process once the job is run is illustrated at a high level in the diagram below:
This process is kicked off by running the Jenkins job, which asks the tester to set various test values via a Jenkins HTML form. The tester must specify the machine to run the tests against, choose whether they are running production, release or user story tests, and then specify which tests are actually to be run from this job.
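In a parameterised Jenkins job, those form values arrive in the build step as environment variables, so the job's "Execute shell" step can be as small as the following (parameter names are illustrative):

    # Jenkins "Execute shell" build step
    cd "$WORKSPACE"
    # TARGET_HOST, TEST_TARGET and TEST_LIST come from the job's form parameters
    bin/runAutomatedTests.sh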
The repository is used to store the various files used in the testing and has the following structure:
The root directory holds the Ant build.xml file and the JMeter JMX files, the bin directory holds the driver script, the cfg directory holds the configuration files used to run nightly jobs, and the results directory holds the XSL and CSS files used to generate the HTML reports.
The CSV files which drive the tests are stored in the same structure in the Production, Release_X.X or User Story folders.
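A plausible layout consistent with that description is shown below (the release and story numbers, and the subfolders beneath each target, are illustrative):

    build.xml                      Ant build file
    *.jmx                          JMeter test plans
    bin/runAutomatedTests.sh       driver script
    cfg/                           configuration files for nightly jobs
    results/                       XSL and CSS files for the HTML reports
    Production/rates/*.csv         production baseline tests
    Release_1.2/rates/*.csv        one folder per release (Release_X.X)
    UserStory_42/rates/*.csv       one folder per in-flight user story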
Once the Jenkins job has checked out and updated this repository, it calls the bin/runAutomatedTests.sh bash script. This script calls a number of Ant tasks to run the specified tests and create the necessary reports: a summary report, failure reports, and a standard report containing both success and failure details.
These reports are created in the results directory of the checked out repository.
When the job is complete, Jenkins archives these reports by copying them to a directory tied to the Jenkins build number. The number of builds to keep archived is configurable in Jenkins.
A tester need only open the summary report, which summarises the tests performed and provides links to the failure reports. This easy-to-navigate report gets testers to failures as quickly as possible.
This summary report is based on the XSL file which is distributed with the JMeter Ant Task available on Programmer Planet.
That is the solution explained at a very high level. There are a couple of details worthy of a closer look.
The following sections will discuss
- supporting testing of user story, release and production level code bases
- how to run scheduled jobs without any tester interaction to support nightly jobs and test runs over the weekend.
User Story, Release and Production level tests
As the project was using Scrum, the testers would begin each iteration with a set of user stories. Once the tests for a user story were complete, they were merged from the user story folder into the release regression suite. This prevented testers from committing incomplete tests into the release regression suite: once a test was merged, it should be a valid test, and any failure from that test should indicate an introduced defect rather than an incomplete or incorrectly configured test.
There was also value in running the regression suite against a build of production every day, because the builds imported a data dump from production every night. If the production tests failed then something had changed in production and would need to be investigated or accommodated in the new code base.
This led to three identifiable testing targets: user story, release and production.
To support this, the repository was structured so that each target had its own folder. The same structure was followed beneath each target, which allowed for easy merging (sketched below).
Once a user story test was complete and merged, the user story folder would be disregarded and the tests were then maintained at the release level.
Once a release went live, that release directory became the production baseline until the next release. This structure added very little overhead.
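As a sketch of the promotion step, assuming a Subversion repository (the article does not name the SCM tool) and illustrative file names:

    # Promote a completed user story test into the release regression suite
    svn copy UserStory_42/rates/fx_forwards.csv Release_1.2/rates/
    svn commit -m "Merge user story 42 tests into the release 1.2 regression suite"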
When running the tests through Jenkins or directly with JMeter, the tester has to specify which target they are running the tests against, as well as the path to the CSV file. In the Jenkins job front end, this choice is provided as a drop-down list. If the User Story option is chosen, the corresponding story number must also be provided to drill down to the correct folder to find and run the test, as sketched below.
To make this even quicker, a UserStory_XX.zip file was added to the repository which held an empty testing directory structure. When a tester started a new user story, they just unzipped the file, renamed the folder with their story number and started adding tests.
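A sketch of how the script might resolve the CSV location from those selections (the variable names are assumptions, matching the earlier script sketch):

    # Map the chosen target onto its repository folder before locating the CSV
    case "$TEST_TARGET" in
        Production)  CSV_DIR="Production" ;;
        Release)     CSV_DIR="Release_${RELEASE_VERSION}" ;;
        UserStory)   CSV_DIR="UserStory_${STORY_NUMBER}" ;;  # story number drills into the right folder
    esac
    CSV_FILE="${CSV_DIR}/${CSV_FILE_NAME}"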
Configuration file driven tests
Once the automated tests were running with Jenkins, the next step was to take advantage of Jenkins’s scheduling capabilities by running scheduled tests overnight or over weekends.
This is slightly different to the solution discussed above as no tester is present to interact with the GUI to start the job.
There are two options here.
The first is to create a new Jenkins job with no GUI components and set all the required variables in the Jenkins job just before calling the bash script.
The alternative is to create a simple configuration file in the repository which exports all the required variables before running the runAutomatedTests.sh script.
The latter was the preferred solution, as it kept all changes made by a tester in the repository, where they could be audited and versioned, and left the Jenkins job configuration (which is not audited or versioned) relatively stable.
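A sketch of such a configuration file and of the build step that consumes it follows (file and variable names are illustrative). The scheduled job itself is triggered by a cron-style expression in the Jenkins "Build periodically" setting, for example 0 2 * * * for a nightly 2am run:

    # cfg/nightly_uat.cfg -- exports everything the interactive form would
    # otherwise prompt the tester for
    export TARGET_HOST="uat-server"
    export TEST_TARGET="Release"
    export RELEASE_VERSION="1.2"
    export TEST_LIST="fx_forwards swap_curves money_market"

    # The scheduled job's build step simply sources the file and runs the script
    source cfg/nightly_uat.cfg
    bin/runAutomatedTests.sh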
With this process in place it was easy to set up various nightly and weekend tests for all environments. Soon the dev, test, clustered, staging and UAT environments were all running nightly tests against different targets.
Benefits Achieved
Significant business value was delivered by an increase in the quality of the software delivered to the business customer. This is the greatest measure of the improvements made as it relates directly to customer satisfaction and return on investment.
Choice to run single or multiple tests
Testers could now run more than one test at a time, either by creating new Jenkins jobs and running them in parallel, or by grouping many tests into one job. Such efficiency improvements maximise the ability to deliver more business requirements in the same time frame.
Summary report: the fastest route to an issue
The Summary Report made it very quick for a tester to navigate to test failures which was a significant benefit given the number of tests that were being run. The cost of fixing a defect rises every day from the day that it was introduced. Getting testers to errors as soon as possible results in defects being identified and fixed as quickly as possible. Quick resolution of such issues minimises their cost to the business.
Acceptance testing
Having the automated tests in Jenkins allowed any build to be tested, on demand or on a schedule. This was used for automated acceptance testing of deployments, and of production imports before use. Such simple validations prevent effort being wasted on invalid platforms. As a result, the business funds only meaningful development that works towards its business goals.
Quicker test execution and more productive testers
The tests ran quicker because the build server was a more powerful machine. Testers suddenly had all the resources of their own machines back, which meant they could be more productive while tests were executing.
Testers empowered
The testers were happier with this setup and, interestingly, were running their test development through Jenkins instead of locally. The turnaround time to commit a CSV and run a job through Jenkins was slightly longer than working locally, but the tests ran quicker on the build server, so there was still an increase in productivity. It was also notable that as the project matured, test development moved away from changing JMeter files and towards using JMeter to drive tests defined in the CSV files. This led to more tests being driven by the CSVs, which had a much quicker learning curve and a lower barrier to entry.
Confidence improved
Confidence improved to the point where the continuous delivery process was extended all the way to production. This was a first for the client and provided massive savings in deployment time and in the engagement of operations staff. A subset of the automated tests was reused for acceptance testing, saving further time in verifying each deployment. These improved efficiencies reduced the cost of a weekend production deployment and saved the business money.
Summary
This article has discussed how improving the automated testing aspects of a continuous delivery project led to dramatic improvements in quality and delivered real business value to a leading bank in Melbourne.
It has covered the initial problems, a breakdown of the changes made to the continuous delivery process, and the business value that these changes delivered to the business customer.
A drastic improvement in quality was seen once this work was put in place, achieving the ultimate goal of providing business value to the customer.
Shine Technologies has gone on to set up other projects at this client with the same infrastructure and processes that were put in place on this project.