Everyone talks about unit and functional tests when adopting DevOps practices, but how do you know if one of your changes has introduced a major performance degradation? Performance testing is the missing link in a truly Continuous Delivery pipeline.
In this blog post, you will see how performance tests can be integrated into your Continuous Delivery pipeline. We will first go over why performance tests belong in the pipeline. Next, we will look at how to design your tests to get the maximum benefit from them and how to run them. Once the tests are built and running, we will show how to put it all together.
Why do I need to do performance tests?
If you are following the DevOps approach to software development, you probably have unit tests and functional tests to ensure all the different parts of your application work as they should. But do you know how your application will react when 10 concurrent users are trying to access the same page? 100? 1000? While your application may seem to work well with just a couple of users, it is important to know how it performs under load.
Enter load tests! Load testing an application generally means measuring its performance under a specific, predetermined amount of load. By incorporating load tests early in the development cycle, you can make sure your application stays performant, with the added benefit of understanding its performance characteristics from the start. Even introducing performance testing late in the development cycle helps protect you from future changes that impact performance. In the event that a change causes your application to slow down significantly, or even crash, your pipeline will automatically roll back the change. Developers get feedback earlier and iterate more quickly, closing the feedback loop.
Creating the Test Plan
When designing the test plan, the goal is not to test every single page of the application; that would make the tests very long-running and expensive. Instead, our goal is to identify the parts of the application that are most likely to cause problems and test those on every build. As the application grows, more tests are added.
For this post, we will take a simple Ruby on Rails application that has a basic sign-in page. We will use the ‘ruby-jmeter’ gem to construct our load tests. ruby-jmeter comes with an easy-to-use domain-specific language (DSL) that makes writing tests much simpler. We will piggyback on Flood.io’s API to carry out the load tests; you will need to create a free account and get the API key they provide you.
You can find the repository with all the code examples and templates for setting up the pipeline at https://github.com/stelligent/load-testing-example
We start by creating a test file, ‘load_test.rb’, and setting a few options. See the gist on GitHub.
The test is then broken up into two separate transactions that simulate how a user would use the site.
- The first transaction simply issues a GET request to the home page and, using the ‘assert’ method, checks that the home page contains certain text.
- The second transaction uses the ‘submit’ method to send the login form via a POST request, using the ‘fill_in’ method to populate the required fields.
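Assuming the ‘ruby-jmeter’ gem is installed, the two transactions above might be sketched as follows. The URLs, assertion text, form field names, and thread counts are illustrative placeholders, not the values from the original gist:

```ruby
# load_test.rb -- illustrative sketch; requires the 'ruby-jmeter' gem.
require 'ruby-jmeter'

test do
  # Simulate 10 concurrent users ramping up over 30 seconds (illustrative)
  threads count: 10, rampup: 30 do
    # Transaction 1: GET the home page and assert it contains certain text
    transaction 'home_page' do
      visit name: 'home', url: 'https://staging.example.com/' do
        assert contains: 'Sign in'
      end
    end

    # Transaction 2: submit the sign-in form via a POST request
    transaction 'sign_in' do
      submit name: 'login', url: 'https://staging.example.com/login',
        fill_in: {
          'user[email]'    => 'test@example.com',
          'user[password]' => 'password'
        }
    end
  end
end.flood(ENV['FLOOD_API_TOKEN']) # hand the generated plan to Flood.io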
Integrating into a Continuous Delivery Pipeline
When integrating performance tests into a pipeline, the goal is to ensure that any issues are caught before they get released into a production environment. Since our tests are designed to determine whether our application can handle requests under load, we can’t run them against the live production environment. Our solution, then, is to create a staging environment that is a copy of our production environment. This way we can test under the exact same conditions without adversely affecting production.
To accomplish this, we will add a new stage to our CodePipeline called ‘Staging’. Within this stage we add a CodeBuild step, provisioned to match production, with the added step of running our load test.
We then add a shell script that runs on each build; it programmatically runs the load test and either passes or fails the build based on predefined criteria.
We launch the load test and take note of the flood UUID.
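As a rough illustration, the launch step might look like the Ruby sketch below. The `https://api.flood.io/floods` endpoint, the form parameter names, and the `build_launch_request` helper are assumptions for illustration only; consult Flood’s current API documentation for the real contract:

```ruby
require 'net/http'
require 'uri'

# Hypothetical launcher for a Flood.io load test. Endpoint and parameter
# names are assumptions -- verify against Flood's API docs before use.
FLOOD_API = URI('https://api.flood.io/floods')

def build_launch_request(api_token, plan_name)
  req = Net::HTTP::Post.new(FLOOD_API)
  # Flood authenticates with the API token as the basic-auth username
  req.basic_auth(api_token, '')
  req.set_form_data(
    'flood[tool]' => 'jmeter',
    'flood[name]' => plan_name
    # The real call also uploads the test plan as multipart form data
    # (omitted here for brevity).
  )
  req
end

# In the pipeline we would send the request and capture the flood UUID
# from the JSON response, e.g.:
#   res  = Net::HTTP.start(FLOOD_API.host, FLOOD_API.port, use_ssl: true) { |h| h.request(req) }
#   uuid = JSON.parse(res.body)['uuid']
```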
See the gist on GitHub. We then poll the test status and wait for the test to finish.
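That wait loop can be sketched generically in Ruby. Here the block stands in for the hypothetical HTTP call that fetches the flood’s status; the interval, timeout, and status names are illustrative assumptions:

```ruby
# Poll until the load test reports a terminal status, or give up.
# The block stands in for an HTTP call that fetches the flood's status
# (e.g. a GET against the Flood API using the flood UUID).
def wait_for_flood(timeout: 1800, interval: 30)
  deadline = Time.now + timeout
  loop do
    status = yield                    # e.g. 'queued', 'running', 'finished'
    return status if %w[finished stopped].include?(status)
    raise 'load test timed out' if Time.now >= deadline
    sleep interval
  end
end
```

In the pipeline script this would be called as something like `wait_for_flood { fetch_status(uuid) }`, where `fetch_status` is a hypothetical helper wrapping the API call.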
See the gist on GitHub. After the test completes, we measure key metrics and can fail the build if the error rate or response time exceeds a set threshold.
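The pass/fail decision can be sketched as a small Ruby predicate. The 5% error rate and 500 ms response-time thresholds below are illustrative, not values from the original script:

```ruby
# Decide whether the build passes based on key metrics from the load
# test summary. Thresholds are illustrative assumptions.
MAX_ERROR_RATE    = 5.0    # percent
MAX_RESPONSE_TIME = 500.0  # mean response time, milliseconds

def build_passed?(error_rate:, mean_response_time:)
  error_rate <= MAX_ERROR_RATE && mean_response_time <= MAX_RESPONSE_TIME
end

# In the pipeline script, a failure exits non-zero so CodeBuild fails the
# stage and the release candidate is not promoted:
#   exit 1 unless build_passed?(error_rate: rate, mean_response_time: mrt)
```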
See the gist on GitHub. With this pipeline, we were able to create an exact copy of our production environment, call an external API to simulate load against our application, and either promote the release candidate to production or fail the build, based on whether it passed the load test.
Gotchas and Other Things to Consider
One of the main reasons performance testing is so often neglected is that performance tests can take a long time and be resource-intensive. Instead of testing every single part of an application, we test the most important parts; what we lose in completeness, we make up for in practicality and speed. One of the most common ways to speed up your performance tests is to run them in parallel with your unit and integration tests, so you are not forced to wait for the other tests to finish before starting your performance tests.
Conclusion
As we have shown, performance testing doesn’t need to be hard or expensive. By integrating these tests early and running them on every build, we can catch problems sooner and improve the reliability and stability of our releases.