The capacity stage usually gets triggered after the software has successfully completed the acceptance stage, and it often runs in parallel with the exploratory stage. While the reference implementation calls an empty shell script, typically you’d have capacity-related steps to include in your pipeline. Some of the steps you might implement in a capacity stage include:

  • Automatically launching an environment based on the AMI(s) generated in the acceptance stage. This environment should be similar in scale to a production environment
  • Adding post-configuration steps, such as loading a database (from versioned code) that is similar in scale and content to the production database
  • Running automated load and performance tests against the application and environment
  • Applying chaos and other stress tests against the environment. If you haven’t heard of Netflix’s Chaos Monkey, it is a tool that randomly terminates AWS resources in order to emulate real-world conditions. Netflix actually runs Chaos Monkey on its production systems during engineers’ business hours. I don’t recommend doing this kind of testing on day one, but I do recommend including this kind of testing and making architectural changes to enable a more resilient architecture. At Stelligent, we also provide a SaaS-based service, called Havoc, that wreaks this kind of havoc on AWS resources
  • Running automated dynamic security analysis, using tools such as Veracode, AppScan, and Fortify
  • Finally, once all these tests have run, automatically terminating the environment
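The steps above can be sketched as a single stage driver. This is a minimal Python sketch, not the reference implementation (which uses shell scripts): the step names are invented for illustration, and the actual AWS interactions are indicated only as comments referencing boto3, so the sketch runs without credentials.

```python
# Hypothetical sketch of a capacity stage. It only sequences the steps;
# the real AWS calls (launching from an AMI, terminating instances) are
# shown as comments and would be wired in per your pipeline tooling.

def capacity_stage(ami_id):
    steps = []
    # 1. Launch a production-scale environment from the acceptance-stage AMI
    #    e.g. with boto3: ec2.run_instances(ImageId=ami_id, ...)
    steps.append(f"launch-environment:{ami_id}")
    # 2. Load a database (from versioned code) at production-like scale
    steps.append("load-database")
    # 3. Run automated load and performance tests
    steps.append("load-and-performance-tests")
    # 4. Apply chaos/stress tests, e.g. randomly terminating instances
    #    in the spirit of Netflix's Chaos Monkey
    steps.append("chaos-tests")
    # 5. Run automated dynamic security analysis (Veracode, AppScan, Fortify)
    steps.append("dynamic-security-analysis")
    # 6. Tear the environment down once all tests have run
    #    e.g. with boto3: ec2.terminate_instances(InstanceIds=[...])
    steps.append("terminate-environment")
    return steps
```

The ordering matters: the environment launch comes first so every test runs against production-like infrastructure, and the teardown is always the final step so the temporary environment does not accrue cost.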

Scorecard (Source: