We routinely get asked which tools we use at Stelligent in delivering our solutions. Sometimes it's a company interested in our services. Other times, it's someone going through our hiring process (yes, we are hiring!). So, I've put together a list of the tools we use in implementing Continuous Delivery in Amazon Web Services (AWS). All of our work is in AWS, so if AWS provides a service in a particular category of tools, we tend to use their service. Furthermore, we help our customers embrace the DevOps mindset. So, while what we provide our customers is much more sweeping than "just" automating software infrastructure and systems, the focus of this particular article is only on the tools we use and how we typically use them on engagements. As with most tools, all of this work is a means to an end. For our customers, the end is the ability to deliver new features to users whenever they choose to do so, without anxiety that something will go wrong.
We are very customer driven so what I’m describing is what we might do on a typical engagement.
On a typical engagement, we use something like 20+ AWS services. We typically interface with these services through AWS' Developer & Management tools such as CloudFormation, OpsWorks, the SDKs, and the CLI, along with some non-AWS tools. We've avoided "lift and shift" efforts, as everything we do is about fully automating our customers' infrastructure and their software delivery process, not moving from one opaque configuration to another. Here's a rundown of the tools we typically use on engagements (in no particular order):
- Jenkins – Jenkins has the largest and most thriving community. We treat Jenkins the same way as other system assets and automate its installation and configuration. So, we often use CloudFormation (and, sometimes, OpsWorks) along with Chef to make changes to the Jenkins server(s) that are configured for our customers.
- Git/GitHub – When we use Git, it’s often along with GitHub. GitHub is a hosted source control system.
- AWS CodePipeline and AWS CodeCommit – We've done extensive experimentation on using CodePipeline for orchestrating Continuous Delivery, along with its integration with CodeCommit to securely store anything from source code to binaries. CodeCommit works seamlessly with your existing Git tools.
- AWS CodeDeploy – CodeDeploy helps automate the deployment workflow. It’s particularly useful for rolling out deployments across a fleet of AWS resources. We used CodeDeploy during its beta stage and plan to begin using it with customers in the near future. CodeDeploy is part of a new breed of application-lifecycle management (ALM) tools provided by AWS that includes CodePipeline and CodeCommit.
- AWS CloudFormation – We use CloudFormation to provision most AWS resources. It provides a JSON-based DSL for provisioning AWS resources. While it’s possible to perform things like node configuration using CloudFormation, we don’t recommend it. Instead, we use automated configuration management (e.g. Chef, Ansible, Puppet) for configuration, deployment and so forth. With CloudFormation, we provision AWS resources such as EC2, OpsWorks, Route 53, VPC, security groups, ELBs, RDS, S3, Auto Scaling, IAM, etc. using a JSON template. We commit these templates to source control with the rest of the software system assets.
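For illustration, here's a minimal, hypothetical CloudFormation template of the kind we commit to source control. The AMI ID, resource names, and properties are placeholders, not from any real engagement:

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Illustrative template: one EC2 instance behind a security group",
  "Parameters": {
    "KeyName": {
      "Type": "AWS::EC2::KeyPair::KeyName",
      "Description": "Existing EC2 key pair for SSH access"
    }
  },
  "Resources": {
    "WebSecurityGroup": {
      "Type": "AWS::EC2::SecurityGroup",
      "Properties": {
        "GroupDescription": "Allow HTTP from anywhere",
        "SecurityGroupIngress": [
          { "IpProtocol": "tcp", "FromPort": "80", "ToPort": "80", "CidrIp": "0.0.0.0/0" }
        ]
      }
    },
    "WebServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "ami-12345678",
        "InstanceType": "t2.micro",
        "KeyName": { "Ref": "KeyName" },
        "SecurityGroups": [ { "Ref": "WebSecurityGroup" } ]
      }
    }
  },
  "Outputs": {
    "InstanceId": { "Value": { "Ref": "WebServer" } }
  }
}
```

Node configuration (packages, services, application code) would then be handled by Chef or a similar tool, not by the template itself.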
- AWS OpsWorks – On certain engagements, we've used OpsWorks for infrastructure automation, configuration and deployment. OpsWorks doesn't provide the same amount of flexibility as CloudFormation, but it has built-in models for event-driven actions and deployment that make our customers' deployment processes much more standardized and repeatable. OpsWorks interfaces with Chef.
- Amazon CloudWatch (& CloudWatch Logs) – CloudWatch provides monitoring and logging for AWS resources. In particular, you can configure it to take action based upon certain events, such as scaling resources up or down based upon usage. We configure CloudWatch in CloudFormation.
- AWS Support – AWS Support is an invaluable resource. We recommend all of our customers sign up for at least Business-level support (provides 24/7 access). You can get an answer to your AWS questions day or night. There’s even an API that we’ve used, minimally.
- Janitor Monkey – We've used Janitor Monkey to terminate AWS resources based on configurable rules. We've also used CloudPatrol, an open-source tool we developed that has a GUI. Janitor Monkey is part of NetflixOSS.
- New Relic – Provides application and system monitoring. There are many tools in this space, so the specific tool often varies; lately, we've seen a lot of requests for New Relic.
- AWS CloudTrail – We use CloudTrail for AWS usage auditing.
- Cloudability – We’ve used both Cloudability and AWS (w/ CloudWatch) for monitoring our AWS usage costs.
- AWS SDKs – We’ve used various AWS SDKs, but tend to predominantly use the Ruby SDK and to a lesser extent, the Python SDK. We use the SDKs and custom code (i.e. in Ruby or Python) to orchestrate the deployment pipeline.
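As a hedged sketch of what such orchestration code might look like (shown here in Python; the function, the injected client parameter, and the status list are illustrative, not our actual scripts), a pipeline step might poll a CloudFormation stack until provisioning finishes:

```python
import time

# Statuses at which a stack has stopped transitioning (illustrative subset).
TERMINAL_STATES = {
    "CREATE_COMPLETE", "UPDATE_COMPLETE",
    "CREATE_FAILED", "ROLLBACK_COMPLETE", "UPDATE_ROLLBACK_COMPLETE",
}

def wait_for_stack(cf, stack_name, poll_seconds=10):
    """Return the stack's final status once it reaches a terminal state.

    `cf` is a boto3-style CloudFormation client, passed in so the
    function can be exercised with a stub instead of live credentials.
    """
    while True:
        response = cf.describe_stacks(StackName=stack_name)
        status = response["Stacks"][0]["StackStatus"]
        if status in TERMINAL_STATES:
            return status
        time.sleep(poll_seconds)
```

With the real SDK, `cf` would be a CloudFormation client (e.g. `boto3.client("cloudformation")` in Python); the Ruby SDK exposes an equivalent `describe_stacks` call. Production code would also add a timeout and error handling.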
- AWS CLI – We use the CLI to make calls to CloudFormation and other AWS resources. Typically, this occurs from the CI environment.
- Others – Based on our customers' existing toolsets, we've also used Continuous Integration (CI) and build management tools such as IBM's UrbanCode suite, TeamCity, ElectricCloud's suite and others.
- Amazon EC2 – EC2 is the cornerstone of all the compute capacity we deliver for customers. We use CloudFormation, SDK and CLI to automate the management of EC2 instances.
- AWS Auto Scaling – We use Auto Scaling to scale our customers’ AWS infrastructure out and in based on usage. We configure Auto Scaling in CloudFormation.
- AWS Elastic Load Balancing (ELB) – We use ELB as a way to balance load across multiple instances, provide health checks and as part of a high-availability strategy.
- Docker – We've used Docker on several engagements, including with AWS' EC2 Container Service (ECS), which runs Docker containers. Docker provides the capability to automate the creation of a container and move that container through a pipeline on its way to production. It also increases the speed of infrastructure development, as it can often take only 10-15 seconds to launch a base Linux environment.
- Packer – We use Packer to standardize environment configuration. We’ve used it on a couple of engagements and anticipate using it more often.
- Vagrant – We use Vagrant to run local development environments to speed up the time it takes to write and test infrastructure code.
Storage & Content Delivery
- Amazon Simple Storage Service (S3) – S3 is our default object storage for files and build artifacts. We typically provision S3 usage and configuration using CloudFormation. We also use S3 server-side encryption to encrypt certain files at rest.
- Amazon CloudFront – CloudFront is a content-delivery network for increasing the performance of web applications. We configure CloudFront usage in CloudFormation.
- AWS ElastiCache – Our customers use ElastiCache to increase performance, so we automate its provisioning as part of a deployment pipeline. Furthermore, we use ElastiCache to manage session state. Keeping session state out of the instances is particularly useful when the resources behind an endpoint get switched, since sessions survive the swap.
- Amazon Relational Database Service (RDS) – We've used RDS for PostgreSQL and MySQL. RDS provides a managed relational database so that you don't have to maintain security patches, backups, etc. We provision RDS usage using CloudFormation.
- Amazon DynamoDB – We've used DynamoDB for various solutions, but it tends to be our choice these days for storing mostly transient deployment pipeline data (e.g. endpoints, instance IDs, source control 'tags', etc.) that helps with change management.
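As a hedged sketch of that pipeline-metadata pattern (the table name, key schema, and attributes below are hypothetical, and the client is injected so the logic is testable without AWS), a pipeline run might record and later look up the endpoint it produced:

```python
# Record and look up transient pipeline metadata keyed by pipeline run.
# `ddb` is a boto3-style low-level DynamoDB client; the attribute
# shapes ({"S": ...}) follow its wire format.
TABLE = "pipeline-metadata"  # hypothetical table name

def record_run(ddb, run_id, endpoint, stack_id):
    """Store the endpoint and stack ID produced by a pipeline run."""
    ddb.put_item(
        TableName=TABLE,
        Item={
            "run_id": {"S": run_id},
            "endpoint": {"S": endpoint},
            "stack_id": {"S": stack_id},
        },
    )

def lookup_endpoint(ddb, run_id):
    """Return the recorded endpoint for a run, or None if absent."""
    item = ddb.get_item(
        TableName=TABLE,
        Key={"run_id": {"S": run_id}},
    ).get("Item", {})
    return item.get("endpoint", {}).get("S")
```

A later pipeline stage (e.g. a Route 53 endpoint switch) can then read the endpoint by run ID rather than passing it through environment variables.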
- Amazon Virtual Private Cloud (VPC) – We use VPCs to define the configuration of networks. With VPCs, we define subnets, security groups, NACLs, routers, connection to VPN gateways, etc. Typically, we use CloudFormation to automate the provisioning of the different VPC configurations.
- Amazon Route 53 – Route 53 is a managed DNS. It also provides domain registration. We use Route 53 to automatically modify endpoints (e.g. ELB) and to assist in failover strategies.
- AWS Direct Connect – Direct Connect is a way to connect your office(s) and/or data center(s) with AWS' data centers using a dedicated connection. Currently, we do not configure this in an automated manner, and for many organizations this connection can take weeks to establish. We are considering different ways to streamline this activity with customers, though.
Security & Identity
- AWS Identity and Access Management (IAM) – With IAM, we’re able to create users, groups, access keys and other resources for controlling access to AWS resources. We typically define IAM resources within CloudFormation. Often, the IAM users we create might only last for the life of the CloudFormation stack.
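As a hedged sketch of such a stack-scoped IAM resource (the policy name and bucket ARN are hypothetical), a template's Resources section might declare a pipeline user limited to an artifact bucket:

```json
{
  "PipelineUser": {
    "Type": "AWS::IAM::User",
    "Properties": {
      "Policies": [
        {
          "PolicyName": "s3-artifact-access",
          "PolicyDocument": {
            "Statement": [
              {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": "arn:aws:s3:::my-artifact-bucket/*"
              }
            ]
          }
        }
      ]
    }
  }
}
```

Because the user is declared in the template, deleting the stack deletes the user and its access along with it.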
- AWS Trusted Advisor – Trusted Advisor provides best-practices monitoring. For example, it’ll provide warnings on things like security group usage, balancing resources across Availability Zones, etc. There’s an API that we’ve used in creating dashboards for customers. We tend to use Trusted Advisor passively as a reporting tool to assess our customers’ AWS infrastructure.
- AWS Config and Config Rules – “AWS Config is a fully managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance. Config Rules enables you to create rules that automatically check the configuration of AWS resources recorded by AWS Config.” 
- Amazon Inspector – “Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for vulnerabilities or deviations from best practices.” 
- AWS WAF – “AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources.” 
Application Services & Testing
- Amazon Simple Email Service (SES) – We use SES to send email notifications from our CI server. We configure SES in our workflow scripts.
- Others – SNS, SQS, SWF – We use these AWS application services on a case-by-case basis for notifications, asynchronous processing and orchestrating workflows.
- Configuration Management – Chef, Ansible, and Puppet – Many of our engagements use Chef, some use Ansible and others use Puppet for automating the configuration of nodes (e.g. Operating System, web/app/database servers and other configuration). So, we often use a combination on engagements.
- Infrastructure Testing – Serverspec, Test Kitchen and Cucumber – We tend to use Serverspec for test-driven infrastructure, along with Cucumber and, in some cases, Test Kitchen.
- Stress Testing – Chaos Monkey – We use Netflix’s Chaos Monkey to automatically terminate instances to test system availability. Netflix uses Chaos Monkey during business hours for their production environments to verify continued system availability against the real resources.
- Load Testing – JMeter – We've used JMeter for automated load & performance testing.
- XUnit – Many of our customers’ application development teams use some XUnit variation as part of their development activities. We ensure that these tests get run as part of a deployment pipeline. We also provide advice on how to properly apply test-driven development.
- Build – Tools might include Maven, Ant, Rake, Gradle, Gulp, etc.
- Static Analysis – Since we've provided solutions in Java, Ruby/Rails, Node and, to a much lesser extent, .NET, the list of static analysis tools is varied. We've used tools like CodeClimate and Sonar. These tools get automatically configured as part of the build scripts and the CI server.
In conclusion, and to be clear, there are many more tools we have used on engagements, so I've limited this article to what we might commonly use, and I've probably missed a few obvious tools that we do use regularly. Each customer has different needs and tool constraints.