AWS CodeBuild is Here

At re:Invent 2016 AWS introduced AWS CodeBuild, a new service that compiles source code, runs tests, and produces ready-to-deploy software packages. AWS CodeBuild handles the provisioning, management, and scaling of your build servers. You can either use pre-packaged build environments to get started quickly, or create custom build environments using your own build tools. CodeBuild charges by the minute for compute resources, so you aren’t paying for a build environment when it is not in use.
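
Under the hood, CodeBuild drives each build from a buildspec.yml file at the root of your source. Here’s a minimal sketch for a Maven-based Java project; the build command and artifact path are placeholder assumptions:

version: 0.1
phases:
  build:
    commands:
      - mvn install
artifacts:
  files:
    - target/my-app-1.0.jar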

AWS CodeBuild Introduction

Stelligent engineer Harlen Bains has posted An Introduction to AWS CodeBuild on the AWS Partner Network (APN) Blog. In the post he explores the basics of AWS CodeBuild and then demonstrates how to use the service to build a Java application.

Integrating AWS CodeBuild with AWS Developer Tools

In the follow-up post, Deploy to Production using AWS CodeBuild and the AWS Developer Tools Suite, Stelligent CTO and AWS Community Hero Paul Duvall expands on how to integrate and automate the orchestration of CodeBuild with the rest of the AWS Developer Tools suite (including AWS CodeDeploy, AWS CodeCommit, and AWS CodePipeline) using AWS’ provisioning tool, AWS CloudFormation. He covers the benefits of automating all the actions and stages into a deployment pipeline, and provides an example with a detailed screencast.

In the Future

Look to the Stelligent Blog for announcements, evaluations, and guides on new AWS products.  We are always looking for engineers who love to make things work better, faster, and just get a kick out of automating everything.  If you live and breathe DevOps, continuous delivery, and AWS, we want to hear from you.

Provision a hosted Git repo with AWS CodeCommit using CloudFormation

Recently, AWS announced that you can now automate the provisioning of a hosted Git repository with AWS CodeCommit using CloudFormation. This means that in addition to the console, CLI, and SDK, you can use declarative code to provision a new CodeCommit repository – providing greater flexibility in versioning, testing, and integration.

In this post, I’ll describe how engineers can provision a CodeCommit Git repository in a CloudFormation template. Furthermore, you’ll learn how to automate the provisioning of a deployment pipeline that uses this repository as its Source action to deploy an application using CodeDeploy to an EC2 instance. You’ll see examples, patterns, and a short video that walks you through the process.

Prerequisites

The prerequisites for this solution are explained in greater detail in the Deployment Steps section below.

Architecture and Implementation

In the figure below, you see the architecture for launching a pipeline that deploys software to an EC2 instance from code stored in a CodeCommit repository. You can click on the image to launch the template in CloudFormation Designer.

  • CloudFormation – All of the resources in this solution are provisioned with CloudFormation, a declarative code language that can be written in JSON or YAML.
  • CodeCommit – With the addition of the AWS::CodeCommit::Repository resource, you can define your CodeCommit Git repositories in CloudFormation.
  • CodeDeploy – CodeDeploy automates the deployment to the EC2 instance that was provisioned by the nested stack.
  • CodePipeline – I’m defining CodePipeline’s stages and actions in CloudFormation code which includes using CodeCommit as a Source action and CodeDeploy for a Deploy action (For more information, see Action Structure Requirements in AWS CodePipeline).
  • EC2 – A nested CloudFormation stack is launched to provision a single EC2 instance on which the CodeDeploy agent is installed. The CloudFormation template called through the nested stack is provided by AWS.
  • IAM – An Identity and Access Management (IAM) Role is provisioned via CloudFormation which defines the resources that the pipeline can access.
  • SNS – A Simple Notification Service (SNS) Topic is provisioned via CloudFormation. The SNS topic is used by the CodeCommit repository for notifications.

CloudFormation Template

In this section, I’ll show code snippets from the CloudFormation template that provisions the entire solution. The focus of my samples is on the CodeCommit resources. There are several other resources defined in this template including EC2, IAM, SNS, CodePipeline, and CodeDeploy. You can find a link to the template at the bottom of  this post.

CodeCommit

In the code snippet below, you see that I’m using the AWS::CodeCommit::Repository CloudFormation resource. The repository name is provided as a parameter to the template. I created a trigger that sends notifications for repository events to an SNS Topic created as a dependent resource in the same CloudFormation template. This is based on the sample code provided by AWS.

    "MyRepo":{
      "Type":"AWS::CodeCommit::Repository",
      "DependsOn":"MySNSTopic",
      "Properties":{
        "RepositoryName":{
          "Ref":"RepoName"
        },
        "RepositoryDescription":"CodeCommit Repository",
        "Triggers":[
          {
            "Name":"MasterTrigger",
            "CustomData":{
              "Ref":"AWS::StackName"
            },
            "DestinationArn":{
              "Ref":"MySNSTopic"
            },
            "Events":[
              "all"
            ]
          }
        ]
      }
    },
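
Once the stack is created, you can confirm the trigger configuration from the AWS CLI; the repository name below is whatever value you passed for the RepoName parameter:

aws codecommit get-repository-triggers --repository-name {RepoName}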

CodePipeline

In this CodePipeline snippet, you see how I’m using the CodeCommit repository resource as the input for the Source action in CodePipeline. CodePipeline polls the CodeCommit repository for changes; when it discovers changes, it initiates a new execution of the deployment pipeline.

        "Stages":[
          {
            "Name":"Source",
            "Actions":[
              {
                "InputArtifacts":[

                ],
                "Name":"Source",
                "ActionTypeId":{
                  "Category":"Source",
                  "Owner":"AWS",
                  "Version":"1",
                  "Provider":"CodeCommit"
                },
                "OutputArtifacts":[
                  {
                    "Name":"MyApp"
                  }
                ],
                "Configuration":{
                  "BranchName":{
                    "Ref":"RepositoryBranch"
                  },
                  "RepositoryName":{
                    "Ref":"RepoName"
                  }
                },
                "RunOrder":1
              }
            ]
          },
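
To see where a revision is in the pipeline, including the status of this Source action, you can query the pipeline state from the AWS CLI:

aws codepipeline get-pipeline-state --name {PipelineName}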

You can see an illustration of this pipeline in the figure below.

[Figure: deployment pipeline with a CodeCommit Source action and a CodeDeploy Deploy action]

Costs

Since costs can vary widely in using certain AWS services and other tools, I’ve provided a cost breakdown and some sample scenarios to give you an idea of what your monthly spend might look like. The AWS Cost Calculator can assist in establishing cost projections.

  • CloudFormation – No additional cost
  • CodeCommit – If you’re using it on a small project with fewer than six users, there’s no additional cost. See AWS CodeCommit Pricing for more information.
  • CodeDeploy – No additional cost
  • CodePipeline – $1 a month per pipeline unless you’re using it as part of the free tier. For more information, see AWS CodePipeline pricing.
  • EC2 – Approximately $15/month if you’re running one t1.micro instance 24/7. See AWS EC2 Pricing for more information.
  • IAM – No additional cost
  • SNS – Considering you probably won’t have over 1 million Amazon SNS requests for this particular solution, there’s no cost. For more information, see AWS SNS Pricing.

So, for this particular sample solution, you’ll spend around $16/month if you run the EC2 instance for an entire month. If you just run it once and terminate it, you’ll spend a little over $1.

Patterns

Here are some patterns to consider when using CodeCommit with CloudFormation.

  • CodeCommit Template – While this solution embeds the CodeCommit creation in a single CloudFormation template, it’s unlikely you’ll update the repository definition with every application change. Consider creating a template that focuses on the CodeCommit creation and running it as part of an infrastructure pipeline that gets updated when new CloudFormation code is committed to it.
  • Centralized Repos – Most likely, you’ll want to host your CodeCommit repositories in a single AWS account and use cross-account IAM roles to share access across accounts in your organization. While you can create CodeCommit repos in any AWS account, it’ll likely lead to unnecessary complexity when engineers want to know where the code is located.

The last is more of a conundrum than a pattern. As one of my colleagues posted in Slack:

I’m stuck in a recursive loop…where do I store my CloudFormation template for my CodeCommit repo?

Good question. I don’t have a good answer for that one just yet. Anyone have thoughts on this one? It gets very “meta”.

Deployment Steps

There are three main steps in launching this solution: preparing an AWS account, launching the stack, and testing the deployment. Each is described in more detail in this section.

Step 1. Prepare an AWS Account

  1. If you don’t already have an AWS account, create one at http://aws.amazon.com by following the on-screen instructions. Part of the sign-up process involves receiving a phone call and entering a PIN using the phone keypad. Be sure you’ve signed up for the CloudFormation service.
  2. Use the region selector in the navigation bar of the console to choose the Northern Virginia (us-east-1) region.
  3. Create a key pair. To do this, in the navigation pane of the Amazon EC2 console, choose Key Pairs, Create Key Pair, type a name, and then choose Create (or use the CLI commands shown below).
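
If you prefer the command line, the key pair can also be created with the AWS CLI; the key name here is a placeholder:

aws ec2 create-key-pair --key-name codecommit-demo-key --query 'KeyMaterial' --output text > codecommit-demo-key.pem
chmod 400 codecommit-demo-key.pem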

Step 2. Launch the Stack

Click on the Launch Stack button below to launch the CloudFormation stack. Before you launch the stack, review the architecture, configuration, security, and other considerations discussed in this post. To download the template, click here.

Time to deploy: Approximately 7 minutes

The template includes default settings that you can customize by following the instructions in this post.

Create Details

Here’s a listing of the key AWS resources that are created when this stack is launched:

  • IAM – InstanceProfile, Policy, and Role
  • CodeCommit Repository – Hosts the versioned code
  • EC2 instance – with CodeDeploy agent installed
  • CodeDeploy – application and deployment
  • CodePipeline – deployment pipeline with CodeCommit Integration

CLI Example

Alternatively, you can launch the same stack from the command line as shown in the samples below.

Base Command

From an instance that has the AWS CLI installed, you can use the following snippet as a base command prepended to one of two options described in the Parameters section below.

aws cloudformation create-stack --profile {AWS Profile Name} --stack-name {Stack Name} --capabilities CAPABILITY_IAM --template-url "https://s3.amazonaws.com/stelligent-public/cloudformation-templates/github/labs/codecommit/codecommit-cpl-cfn.json"
Parameters

I’ve provided two ways to run the command – from a custom parameters file or from the CLI.

Option 1 – Custom Parameters JSON File

By attaching the command below to the base command, you can pass parameters from a file as shown in the sample below.

--parameters file:///localpath/to/example-parameters-cpl-cfn.json
Option 2 – Pass Parameters on CLI

Another way to launch the stack from the command line is to provide custom parameters populated with parameter values as shown in the sample below.

--parameters ParameterKey=EC2KeyPairName,ParameterValue=stelligent-dev ParameterKey=EmailAddress,ParameterValue=jsmith@example.com ParameterKey=RepoName,ParameterValue=my-cc-repo
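
Whichever option you choose, you can wait for stack completion and retrieve the stack outputs (such as CloneUrlSsh, used in Step 3) from the CLI as well:

aws cloudformation wait stack-create-complete --stack-name {Stack Name}
aws cloudformation describe-stacks --stack-name {Stack Name} --query "Stacks[0].Outputs"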

Step 3. Test the Deployment

Click on the CodePipelineURL Output in your CloudFormation stack. You’ll see that the pipeline has failed on the Source action. This is because the Source action expects a populated repository, and the newly created repository is empty. The way to resolve this is to commit the application files to the new CodeCommit repository. First, you’ll need to clone the repository locally. To do this, get the CloneUrlSsh Output from the CloudFormation stack you launched in Step 2. A sample command is shown below; replace {CloneUrlSsh} with the value from the CloudFormation stack output. For more information on using SSH to interact with CodeCommit, see the Connect to the CodeCommit Repository section at: Create and Connect to an AWS CodeCommit Repository.

git clone {CloneUrlSsh}
cd {localdirectory}
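
If this is your first time using SSH with CodeCommit, you’ll also need an entry in ~/.ssh/config that maps your IAM SSH key ID to the CodeCommit endpoint. This is a sketch, with the key ID and key file as placeholders:

Host git-codecommit.*.amazonaws.com
  User APKAEIBAERJR2EXAMPLE
  IdentityFile ~/.ssh/codecommit_rsa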

Once you’ve cloned the repository locally, download the sample application files from SampleApp_Linux.zip and place the files directly into your local repository. Do not include the SampleApp_Linux folder. Go to the local directory and type the following to commit and push the new files to the CodeCommit repository:

git add .
git commit -am "add all files from the AWS sample linux codedeploy application"
git push

Once these files have been committed, the pipeline will discover the changes in CodeCommit and run a new pipeline execution, and both stages and their actions should succeed as a result of this change.

Access the Application

Once the CloudFormation stack has successfully completed, go to CodeDeploy and select Deployments. For example, if you’re in the us-east-1 region, the URL might look like: https://console.aws.amazon.com/codedeploy/home?region=us-east-1#/deployments (you can also find this link in the CodeDeployURL Output of the CloudFormation stack you launched). Next, click on the link for the Deployment Id of the deployment you just launched from CloudFormation. Then, click on the link for the Instance Id. From the EC2 instance, copy the Public IP value, paste it into your browser, and hit enter. You should see a page like the one below.

[Figure: sample application page before the change]
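
If you prefer the CLI to the console, you can look up the instance’s public IP directly; {InstanceId} is the instance id from the deployment details:

aws ec2 describe-instances --instance-ids {InstanceId} --query "Reservations[0].Instances[0].PublicIpAddress" --output text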

Commit Changes to CodeCommit

Make some visual changes to index.html (look for background-color) and commit these changes to your CodeCommit repository to see them deployed through your pipeline. You perform these actions from the directory where you cloned the local version of your CodeCommit repo (the directory created by your git clone command). To push these changes to the remote repository, see the commands below.

git commit -am "change bg color to burnt orange"
git push

Once these changes have been committed, CodePipeline will discover the changes made to your CodeCommit repo and initiate a new pipeline execution. After the pipeline successfully completes, follow the same instructions for launching the application from your browser. You’ll see that the color of the application’s index page has changed.

[Figure: sample application page after the background-color change]

How-To Video

In this video, I walk through the deployment steps described above.

Additional Resources

Here are some additional resources you might find useful:

Summary

In this post, you learned how to define and launch a CloudFormation stack that provisions a CodeCommit Git repository in code. Additionally, the example automated a CodePipeline deployment pipeline (including the CodeCommit integration) along with creating and running the deployment on an EC2 instance using CodeDeploy.

Furthermore, I described the prerequisites, architecture, implementation, costs, patterns and deployment steps of the solution.

Sample Code

The code for the examples demonstrated in this post is located at https://github.com/stelligent/cloudformation_templates/blob/master/labs/codecommit/. Let us know if you have any comments or questions @stelligent or @paulduvall.

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “one-button everything” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

Microservices Platform with ECS

Architecting applications with microservices is all the rage with developers right now, but running them at scale with cost efficiency and high availability can be a real challenge. In this post, we will address this challenge by looking at an approach to building microservices with Spring Boot and deploying them with CloudFormation on AWS EC2 Container Service (ECS) and Application Load Balancers (ALB). We will start with describing the steps to build the microservice, then walk through the platform for running the microservices, and finally deploy our microservice on the platform.

Spring Boot was chosen for the microservice development as it is a very popular framework in the Java community for building “stand-alone, production-grade Spring based Applications” quickly and easily. However, since ECS just runs Docker containers, you can substitute your preferred development framework for Spring Boot and the platform described in this post will still be able to run your microservice.

This post builds upon a prior post called Automating ECS: Provisioning in CloudFormation that does an awesome job of explaining how to use ECS. If you are new to ECS, I’d highly recommend you review that before proceeding. This post will expand upon that by using the new Application Load Balancer that provides two huge features to improve the ECS experience:

  • Target Groups: Previously in a “Classic” Elastic Load Balancer (ELB), all targets had to be able to handle all possible types of requests that the ELB received. Now with target groups, you can route different URLs to different target groups, allowing heterogeneous deployments. Specifically, you can have two target groups that handle different URLs (e.g., /bananas and /apples) and use the ALB to route traffic appropriately.
  • Per Target Ports: Previously in an ELB, all targets had to listen on the same port for traffic from the ELB. In ECS, this meant that you had to manage the ports that each container listened on. Additionally, you couldn’t run multiple instances of a given container on a single ECS container instance since they would have different ports. Now, each container can use an ephemeral port (next available assigned by ECS) making port management and scaling up on a single ECS container instance a non-issue.

The infrastructure we create will look like the diagram below. Notice that there is a single shared ECS cluster and a single shared ALB with a target group, EC2 Container Registry (ECR) and ECS Service for each microservice deployed to the platform. This approach enables a cost efficient solution by using a single pool of compute resources for all the services. Additionally, high availability is accomplished via an Auto Scaling Group (ASG) for the ECS container instances that spans multiple Availability Zones (AZ).

[Figure: microservices platform architecture with a shared ECS cluster and ALB]
Setup Your Development Environment

You will need to install the Spring Boot CLI to get started. The recommended way is to use SDKMAN! for the installation. First install SDKMAN! with:

 $ curl -s "https://get.sdkman.io" | bash

Then, install Spring Boot with:

$ sdk install springboot

Alternatively, you could install with Homebrew:

$ brew tap pivotal/tap
$ brew install springboot

Scaffold Your Microservice Project

For this example, we will be creating a microservice to manage bananas. Use the Spring Boot CLI to create a project:

$ spring init --build=gradle --package-name=com.stelligent --dependencies=web,actuator,hateoas -n Banana banana-service

This will create a new subdirectory named banana-service with the skeleton of a microservice in src/main/java/com/stelligent and a build.gradle file.

Develop the Microservice

Development of the microservice is a topic for an entire post of its own, but let’s look at a few important bits. First, the application is defined in BananaApplication:

@SpringBootApplication
public class BananaApplication {

  public static void main(String[] args) {
    SpringApplication.run(BananaApplication.class, args);
  }
}

The @SpringBootApplication annotation marks the location to start component scanning and enables configuration of the context within the class.

Next, we have the controller class, which contains the declaration of the REST routes.

@RequestMapping("/bananas")
@RestController
public class BananaController {

  @RequestMapping(method = RequestMethod.POST)
  public @ResponseBody BananaResource create(@RequestBody Banana banana)
  {
    // create a banana...
  }

  @RequestMapping(path = "/{id}", method = RequestMethod.GET)
  public @ResponseBody BananaResource retrieve(@PathVariable long id)
  {
    // get a banana by its id
  }

}

These sample routes handle a POST of JSON banana data to /bananas for creating a new banana, and a GET from /bananas/1234 for retrieving a banana by its id. To view a complete implementation of the controller including support for POST, PUT, GET, PATCH, and DELETE as well as HATEOAS for links between resources, check out BananaController.java.
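
As a quick local smoke test of these routes (with the app running via gradle bootRun on its default port), you could use curl; the empty JSON body is a placeholder, since the fields of the Banana model aren’t shown here:

$ curl -s -X POST -H 'Content-Type: application/json' -d '{}' http://localhost:8080/bananas
$ curl -s http://localhost:8080/bananas/1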

Additionally, to look at how to accomplish unit testing of the services, check out the tests created in BananaControllerTest.java using WebMvcTest, MockMvc and Mockito.

Create Microservice Platform

The platform will consist of a separate CloudFormation stack that contains the following resources:

  • VPC – To provide the network infrastructure to launch the ECS container instances into.
  • ECS Cluster – The cluster that the services will be deployed into.
  • Auto Scaling Group – To manage the ECS container instances that contain the compute resources for running the containers.
  • Application Load Balancer – To provide load balancing for the microservices running in containers. Additionally, this provides service discovery for the microservices.

[Figure: microservice platform infrastructure]

The template is available at platform.template. The AMIs used by the Launch Configuration for the EC2 Container Instances must be the ECS optimized AMIs:

Mappings:
  AWSRegionToAMI:
    us-east-1:
      AMIID: ami-2b3b6041
    us-west-2:
      AMIID: ami-ac6872cd
    eu-west-1:
      AMIID: ami-03238b70
    ap-northeast-1:
      AMIID: ami-fb2f1295
    ap-southeast-2:
      AMIID: ami-43547120
    us-west-1:
      AMIID: ami-bfe095df
    ap-southeast-1:
      AMIID: ami-c78f43a4
    eu-central-1:
      AMIID: ami-e1e6f88d

Additionally, the EC2 Container Instances must have the ECS Agent configured to register with the newly created ECS Cluster:

  ContainerInstances:
    Type: AWS::AutoScaling::LaunchConfiguration
    Metadata:
      AWS::CloudFormation::Init:
        config:
          commands:
            01_add_instance_to_cluster:
              command: !Sub |
                #!/bin/bash
                echo ECS_CLUSTER=${EcsCluster}  >> /etc/ecs/ecs.config

Next, an Application Load Balancer is created for the later stacks to register with:

 EcsElb:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Subnets:
      - !Ref PublicSubnetAZ1
      - !Ref PublicSubnetAZ2
      - !Ref PublicSubnetAZ3
  EcsElbListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref EcsElb
      DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref EcsElbDefaultTargetGroup
      Port: '80'
      Protocol: HTTP

Finally, we have a Gradle task in our build.gradle for upserting the platform CloudFormation stack, based on a custom task type named StackUpTask defined in buildSrc.

task platformUp(type: StackUpTask) {
    region project.region
    stackName "${project.stackBaseName}-platform"
    template file("ecs-resources/platform.template")
    waitForComplete true
    capabilityIam true
    if(project.hasProperty('keyName')) {
        stackParams['KeyName'] = project.keyName
    }
}

Simply run the following to create/update the platform stack:

$ gradle platformUp

Deploy Microservice

Once the platform stack has been created, there are two additional stacks to create for each microservice. First, there is a repo stack that creates the EC2 Container Registry (ECR) for the microservice. This stack also creates a target group for the microservice and adds the target group to the ALB with a rule for which URL path patterns should be routed to the target group.

The second stack is for the service and creates the ECS task definition based on the version of the docker image that should be run, as well as the ECS service which specifies how many tasks to run and the ALB to associate with.

The reason for the two stacks is that you must have the ECR provisioned before you can push a docker image to it, and you must have a docker image in ECR before creating the ECS service. Ideally, you would create the repo stack once, then configure a CodePipeline job to continuously push code changes to ECR as new images and update the service stack to reference the newly pushed image.

[Figure: repo and service stacks for each microservice]
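
For reference, the image build and push that the build automates (via the dockerPushImage task shown later) looks roughly like the following shell session; the account id and repository name are placeholders, and aws ecr get-login reflects the CLI of the time this was written:

$ $(aws ecr get-login --region us-east-1)
$ docker build -t banana-service .
$ docker tag banana-service 123456789012.dkr.ecr.us-east-1.amazonaws.com/banana-service:latest
$ docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/banana-service:latest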

The entire repo template is available at repo.template. An important new resource to check out is the ALB Listener Rule, which provides the URL patterns that should be handled by the new target group:

EcsElbListenerRule:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Properties:
      Actions:
      - Type: forward
        TargetGroupArn: !Ref EcsElbTargetGroup
      Conditions:
      - Field: path-pattern
        Values: ["/bananas"]
      ListenerArn: !Ref EcsElbListenerArn
      Priority: 1

The entire service template is available at service.template. Notice that the ECS Task Definition uses port 0 for HostPort, which allows ECS to assign ephemeral ports and removes the requirement for us to manage container ports:

 MicroserviceTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      ContainerDefinitions:
      - Name: banana-service
        Cpu: '10'
        Essential: 'true'
        Image: !Ref ImageUrl
        Memory: '300'
        PortMappings:
        - HostPort: 0
          ContainerPort: 8080
      Volumes: []

Next, notice how the ECS Service is created and associated with the newly created Target Group:

 EcsService:
    Type: AWS::ECS::Service
    Properties:
      Cluster: !Ref EcsCluster
      DesiredCount: 6
      DeploymentConfiguration:
        MaximumPercent: 100
        MinimumHealthyPercent: 0
      LoadBalancers:
      - ContainerName: banana-service # must match the container name in the task definition above
        ContainerPort: '8080'
        TargetGroupArn: !Ref EcsElbTargetGroupArn
      Role: !Ref EcsServiceRole
      TaskDefinition: !Ref MicroserviceTaskDefinition

Finally, we have a Gradle task in our service build.gradle for upserting the repo CloudFormation stack:

task repoUp(type: StackUpTask) {
 region project.region
 stackName "${project.stackBaseName}-repo-${project.name}"
 template file("../ecs-resources/repo.template")
 waitForComplete true
 capabilityIam true

 stackParams['PathPattern'] ='/bananas'
 stackParams['RepoName'] = project.name
}

And then another to upsert the service CloudFormation stack:

task serviceUp(type: StackUpTask) {
 region project.region
 stackName "${project.stackBaseName}-service-${project.name}"
 template file("../ecs-resources/service.template")
 waitForComplete true
 capabilityIam true

 stackParams['ServiceDesiredCount'] = project.serviceDesiredCount
 stackParams['ImageUrl'] = "${project.repoUrl}:${project.revision}"

 mustRunAfter dockerPushImage
}

And finally, a task to coordinate the management of the stacks and the build/push of the image:

task deploy(dependsOn: ['dockerPushImage', 'serviceUp']) {
  description "Upserts the repo stack, pushes a docker image, then upserts the service stack"
}

dockerPushImage.dependsOn repoUp

This then provides a simple command to deploy new or update existing microservices:

$ gradle deploy

Define a similar build.gradle file in other microservices to deploy them to the same platform.

Blue/Green Deployment

When you run gradle deploy, the existing service stack is updated to use a new task definition that references a new docker image in ECR. This CloudFormation update causes ECS to do a rolling replacement of the containers, launching new containers with the new image and killing containers running the old image.

However, if you are looking for a more traditional blue/green deployment, this could be accomplished by creating a new service stack (the green stack) with the new docker image, rather than updating the existing one. The new stack would attach to the existing ALB target group, at which point you could update the existing service stack (the blue stack) to no longer reference the ALB target group, taking it out of service without killing the containers.

Next Steps

Stay tuned for future blog posts that build on this platform by accomplishing service discovery in a more decoupled manner through the use of Eureka as a service registry, Ribbon as a service client, and Zuul as an edge router.

Additionally, this solution isn’t complete since there is no Continuous Delivery pipeline defined. Look for an additional post showing how to use CodePipeline to orchestrate the movement of changes to the microservice source code into production.

The code for the examples demonstrated in this post is located at https://github.com/stelligent/microservice-exemplar. Let us know if you have any comments or questions @stelligent.

Are you interested in building resilient applications in AWS? Stelligent is hiring!

DevOps in AWS Radio: Orchestrating Docker containers with AWS ECS, ECR and CodePipeline (Episode 4)

In this episode, Paul Duvall and Brian Jakovich from Stelligent cover recent DevOps in AWS news and speak about the AWS EC2 Container Service (ECS), AWS EC2 Container Registry (ECR), HashiCorp Consul, AWS CodePipeline, and other tools in providing Docker-based solutions for customers. Here are the show notes:

DevOps in AWS News

Episode Topics

  1. Benefits of using ECS, ECR, Docker, etc.
  2. Components of ECS, ECR and Service Discovery
  3. Orchestrating and automating the deployment pipeline using CloudFormation, CodePipeline, Jenkins, etc. 

Blog Posts

  1. Automating ECS: Provisioning in CloudFormation (Part 1)
  2. Automating ECS: Orchestrating in CodePipeline and CloudFormation (Part 2)

About DevOps in AWS Radio

On DevOps in AWS Radio, we cover topics around applying DevOps principles and practices such as Continuous Delivery in the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners into our roundtables and speak with engineers who’ve recently published on our blog and we’ll also be reaching out to the wider DevOps in AWS community to get their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, Containers, Serverless, etc.)
    • Storage (e.g. S3, EBS, etc.)
    • Database and Data (RDS, DynamoDB, etc.)
  • Organizational and Team Structures and Practices
  • Team and Organization Communication and Collaboration
  • Cultural Indicators
  • Version control systems and processes
  • Deployment Pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service Architectures – e.g. Microservices
  • Automation of Build and deployment processes
  • Automation of testing and other verification approaches, tools and systems
  • Automation of security practices and approaches
  • Continuous Feedback systems
  • Many other Topics…

Automating Habitat with AWS CodePipeline

This article outlines a proof-of-concept (POC) for automating Habitat operations from AWS CodePipeline. Habitat is Chef’s new application automation platform that provides a packaging system that results in apps that are “immutable and atomically deployed, with self-organizing peer relationships.”  Habitat is an innovative technology for packaging applications, but a Continuous Delivery pipeline is still required to automate deployments.  For this exercise I’ve opted to build a lightweight pipeline using CodePipeline and Lambda.

An in-depth analysis of how to use Habitat is beyond the scope for this post, but you can get a good introduction by following their tutorial. This POC essentially builds a CD pipeline to automate the steps described in the tutorial, and builds the same demo app (mytutorialapp). It covers the “pre-artifact” stages of the pipeline (Source, Commit, Acceptance), but keep an eye out for a future post which will flesh out the rest.

Also be sure to read the article “Continuous deployment with Habitat” which provides a good overview of how the developers of Habitat intend it to be used in a pipeline, including links to some repos to help implement that vision using Chef Automate.

Technology Overview

Application

The application we’re automating is called mytutorialapp. It is a simple “hello world” web app that runs on nginx. The application code can be found in the hab-demo repository.

Pipeline

The pipeline is provisioned by a CloudFormation stack and implemented with CodePipeline. The pipeline uses a Lambda function as an Action executor. This Lambda function delegates command execution to  an EC2 instance via an SSM Run Command: aws:runShellScript. The pipeline code can be found in the hab-demo-pipeline repository. Here is a simplified diagram of the execution mechanics:

[Figure: Habitat pipeline execution mechanics]
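
To make the execution mechanics concrete, here’s a sketch of the equivalent AWS CLI call that the Lambda function makes through the SDK; the instance id and script name are placeholders:

aws ssm send-command --document-name "AWS-RunShellScript" --instance-ids {InstanceId} --parameters commands="./run-pipeline-action.sh"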

Stack

The CloudFormation stack that provisions the pipeline also creates several supporting resources.  Check out the pipeline.json template for details, but here is a screenshot to show what’s included:

[Figure: resources created by the pipeline CloudFormation stack]

Pipeline Stages

Here’s an overview of the pipeline structure. For the purpose of this article I’ve only implemented the Source, Commit, and Acceptance stages. This portion of the pipeline will get the source code from a git repo, build a Habitat package, build a Docker test environment, deploy the Habitat package to the test environment, run tests on it and then publish it to the Habitat Depot. All downstream pipeline stages can then source the package from the Depot.

  • Source
    • Clone the app repo
  • Commit
    • Stage-SourceCode
    • Initialize-Habitat
    • Test-StaticAnalysis
    • Build-HabitatPackage
  • Acceptance
    • Create-TestEnvironment
    • Test-HabitatPackage
    • Publish-HabitatPackage

Action Details

Here are the details for the various pipeline actions. These action implementations are defined in a “pipeline-runner” Lambda function and invoked by CodePipeline. Upon invocation, the scripts are executed on an EC2 instance that gets provisioned at the same time as the pipeline.

Commit Stage

Stage-SourceCode

Pulls down the source code artifact from S3 and unzips it.

Initialize-Habitat

Sets Habitat environment variables and generates/uploads a key to access my Origin on the Habitat Depot.

Test-StaticAnalysis

Runs static analysis on plan.sh using bash -n.

Build-HabitatPackage

Builds the Habitat package.
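
Roughly, this amounts to the following commands from the directory containing the plan; a sketch based on the Habitat tutorial:

$ hab studio enter
# then, from inside the studio:
$ build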

Acceptance Stage

Create-TestEnvironment

Creates a Docker test environment by running a Habitat package export command inside the Habitat Studio.
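
The export itself looks roughly like this from within the studio; the origin name is a placeholder:

$ hab pkg export docker myorigin/mytutorialapp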

Test-HabitatPackage

Runs a Bats test suite which verifies that the webserver is running and the “hello world” page is displayed.

Publish-HabitatPackage

Uploads the Habitat package to the Depot. In a later pipeline stage, a package deployment can be sourced directly from the Depot.
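
The upload is roughly the following, assuming a freshly built .hart artifact in the results directory (the file name is a placeholder):

$ hab pkg upload ./results/myorigin-mytutorialapp-0.2.0-20161222120000-x86_64-linux.hart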

Wrapping up

This post provided an early look at a mechanism for automating Habitat deployments from AWS CodePipeline. There is still a lot of work to be done on this POC project so keep an eye out for later posts that describe the mechanics of the rest of the pipeline.

Do you love Chef and Habitat? Do you love AWS? Do you love automating software development workflows to create CI/CD pipelines? If you answered “Yes!” to any of these questions then you should come work at Stelligent. Check out our Careers page to learn more.

Automate CodePipeline Manual Approvals in CloudFormation

Recently, AWS announced that it added manual approval actions to AWS CodePipeline. With this addition, you can now model your entire software delivery process, whether it’s entirely manual or a hybrid of automated and manual approval actions.

In this post, I describe how you can add manual approvals to an existing pipeline – manually or via CloudFormation – to minimize your CodePipeline costs.

Pricing

The AWS CodePipeline pricing model is structured to incentivize two things:

  • Frequent Code Commits
  • Long-lived Pipelines

This is because AWS charges $1 per active pipeline per month. Therefore, if you were to treat these pipelines as ephemeral, launching and terminating them regularly, you’d likely pay more than you actually consume. While in experimentation mode you might regularly launch and terminate pipelines as you determine the appropriate stages and actions for an application or service, once you’ve established the pipeline, its rate of change is likely to be much lower.

Since CodePipeline uses compute resources, AWS had to decide whether to incentivize frequent code commits or to treat pipelines ephemerally, as they do with other resources like EC2. If they’d chosen to charge by frequency of activity, it could result in paying more when committing more code, which would be a very bad thing since you want developers to commit code many times a day.

Immutability

While we tend to prefer an immutable approach for most of our infrastructure, the fact is that different parts of your system change at varying frequencies. This is the case with your pipelines: once they’ve been established, you might add, edit, or remove some stages and actions, but probably not every day.

Our “workaround” is to use CloudFormation’s update capability to modify our pipeline’s stages and actions without incurring the additional $1 we’d be charged for launching a new active pipeline.

The best way to apply these changes is to make the minimum required changes in the template so that any errors that do occur are easy to isolate.

Manual Approvals

There are many reasons your software delivery workflow might require manual approvals including exploratory testing, visual inspection, change advisory boards, code reviews, etc.

Some other reasons for manual approvals include canary and blue/green deployments – where you might make final deployment decisions once some user or deployment testing is complete.

With manual approvals in CodePipeline, you can now make the approval process a part of a fully automated software delivery process.

Create and Connect to a CodeCommit Repository

Follow these instructions for creating and connecting to an AWS CodeCommit repository: Create and Connect to an AWS CodeCommit Repository. Take note of the repository name, as you’ll use it as a CloudFormation parameter later. The default I use in the lab is called codecommit-demo, but you can modify this CloudFormation parameter.

Launch a Pipeline

Click the button below to launch a CloudFormation stack that provisions AWS CodePipeline with some default Lambda Invoke actions.

Once the CloudFormation stack has launched successfully, click on the link next to the PipelineUrl Output of your CloudFormation stack. This opens your pipeline. You should see a pipeline similar to the one in the figure below.

[Figure: pipeline before the update]

Update a Pipeline

To update your pipeline, click on the Edit button at the top of the pipeline in CodePipeline. Then, click the (+) Stage link between the Staging and Production stages. Enter ExploratoryTesting for the stage name, then click the (+) Action link. The add action window displays. Choose the new Approval action category from the drop-down, enter the other required and optional fields as appropriate, and click the Add action button.

[Figure: adding a manual approval action in the CodePipeline console]

Once you’ve done this, click on the Release change button. After the run goes through the pipeline’s stages and actions and transitions to the Exploratory Testing stage, your pipeline should look similar to the figure below.

[Figure: pipeline with the Exploratory Testing stage added]

At this time, if your SNS Topic registered with the pipeline is linked to an email address, you’ll receive an email message that looks similar to the one below.

[Figure: manual approval notification email]

As you can see, you can click on the link to be brought to the same pipeline where you can approve or reject the “stage”.

Applying Changes in CloudFormation

You can apply the same updates to CodePipeline that you previously performed manually, this time in code using CloudFormation update-stack. We recommend minimizing the incremental changes you apply with CloudFormation so that they’re specific to the CodePipeline changes; limiting your change sets limits the time you spend troubleshooting any problems.

Once you’ve manually added the new manual approval stage and action, you can use your AWS CLI to get the JSON configuration that you can use in your CloudFormation update template. To do this, run the following command substituting {YOURPIPELINENAME} with the name of your pipeline.

aws codepipeline get-pipeline --name {YOURPIPELINENAME} >pipeline.json

You’ll notice that this command pipes the output to a file, which you can copy from and format into the stage and action configuration of your CloudFormation template. For example, the difference between the initial pipeline and the updated pipeline is the JSON configuration shown below.

          {
            "Name":"ExploratoryTesting",
            "Actions":[
              {
                "InputArtifacts":[

                ],
                "Name":"QA",
                "ActionTypeId":{
                  "Category":"Approval",
                  "Owner":"AWS",
                  "Version":"1",
                  "Provider":"Manual"
                },
                "OutputArtifacts":[

                ],
                "Configuration":{
                  "NotificationArn":{
                    "Fn::Join":[
                      "",
                      [
                        "arn:aws:sns:",
                        {
                          "Ref":"AWS::Region"
                        },
                        ":",
                        {
                          "Ref":"AWS::AccountId"
                        },
                        ":",
                        {
                          "Ref":"SNSTopic"
                        }
                      ]
                    ]
                  },
                  "CustomData":"Approval or Reject this change after running Exploratory Tests"
                },
                "RunOrder":1
              }
            ]
          },

You can take this code and add it to a new CloudFormation template so that it sits between the Staging and Production stages. Once you’ve done this, go back to your command line and run the update-stack command from the AWS CLI. An example is shown below; replace {CFNSTACKNAME} with your stack name. If you want to make additional changes to the new stack, you can download the CloudFormation template and upload it to an S3 location you control.

aws cloudformation update-stack --stack-name {CFNSTACKNAME} --template-url https://s3.amazonaws.com/stelligent-public/cloudformation-templates/github/labs/codepipeline/codepipeline-updates-after.json --region us-east-1 --capabilities="CAPABILITY_IAM" --parameters ParameterKey=RepositoryBranch,UsePreviousValue=true ParameterKey=RepositoryName,UsePreviousValue=true ParameterKey=S3BucketLambdaFunction,UsePreviousValue=true ParameterKey=SNSTopic,UsePreviousValue=true

By running this command against the initial stack, you’ll see the same updates you’d previously defined manually. The difference is that they’re defined in code, which means you can version, test, and deploy changes.

An alternative approach is to apply the changes using Update Stack from the CloudFormation console. You’ll provide the new CloudFormation template as an input, and CloudFormation will determine which changes to apply to your infrastructure. You can see a screenshot of the change preview below.

[Figure: previewing changes in the CloudFormation console]

Summary

By incorporating manual approvals into CodePipeline, you can model your entire software delivery workflow, including the parts that aren’t automated. You also learned how to apply changes to your pipeline using CloudFormation as a way of minimizing your costs while providing a repeatable, reliable update process through code.

Sample Code

The code for the examples demonstrated in this post is located at https://github.com/stelligent/cloudformation_templates/tree/master/labs/codepipeline. Let us know if you have any comments or questions @stelligent or @paulduvall.

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “everything-as-code” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

Acknowledgements

My colleagues at Stelligent including Eric Kascic and Casey Lee provided some use cases for manual approvals.

DevOps in AWS Radio: Serverless Delivery with Casey Lee (Episode 2)

In this episode, Paul Duvall and Brian Jakovich from Stelligent cover recent DevOps in AWS news and speak with Casey Lee about his three-part series on Serverless Delivery:

About DevOps in AWS Radio

On DevOps in AWS Radio, we’ll be covering topics around applying DevOps principles and practices such as Continuous Delivery in the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners into our roundtables and speak with engineers who’ve recently published on our blog and we’ll also be reaching out to the wider DevOps in AWS community to get their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, Containers, Serverless, etc.)
    • Storage (e.g. S3, EBS, etc.)
    • Database and Data (RDS, DynamoDB, etc.)
  • Organizational and Team Structures and Practices
  • Team and Organization Communication and Collaboration
  • Cultural Indicators
  • Version control systems and processes
  • Deployment Pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service Architectures – e.g. Microservices
  • Automation of Build and deployment processes
  • Automation of testing and other verification approaches, tools and systems
  • Automation of security practices and approaches
  • Continuous Feedback systems
  • Many other Topics…

Automating and Orchestrating OpsWorks in CloudFormation and CodePipeline

In this post, you’ll learn how to provision, configure, and orchestrate a PHP application using the AWS OpsWorks application management service in a deployment pipeline built with AWS CodePipeline, one that’s capable of deploying new infrastructure and code changes when developers commit changes to the AWS CodeCommit version-control repository. This way, team members can release new changes to users whenever they choose to do so: aka, Continuous Delivery.

Recently, AWS announced the integration of OpsWorks into AWS CodePipeline so I’ll be describing various components and services that support this solution including CodePipeline along with codifying the entire infrastructure in AWS CloudFormation. As part of the announcement, AWS provided a step-by-step tutorial of integrating OpsWorks with CodePipeline that I used as a reference in automating the entire infrastructure and workflow.

This post describes how to automate all the steps using CloudFormation so that you can click on a Launch Stack button to instantiate all of your infrastructure resources.

OpsWorks

“AWS OpsWorks is a configuration management service that helps you configure and operate applications of all shapes and sizes using Chef. You can define the application’s architecture and the specification of each component including package installation, software configuration and resources such as storage. Start from templates for common technologies like application servers and databases or build your own to perform any task that can be scripted. AWS OpsWorks includes automation to scale your application based on time or load and dynamic configuration to orchestrate changes as your environment scales.” [1]

OpsWorks provides a structured way to automate the operations of your AWS infrastructure and deployments with lifecycle events and the Chef configuration management tool. OpsWorks provides more flexibility than Elastic Beanstalk and more structure and constraints than CloudFormation. There are several key constructs that compose OpsWorks. They are:

  • Stack – An OpsWorks stack is the logical container defining OpsWorks layers, instances, apps and deployments.
  • Layer – There are built-in layers provided by OpsWorks such as Static Web Servers, Rails, Node.js, etc. But, you can also define your own custom layers as well.
  • Instances – These are EC2 instances on which the OpsWorks agent has been installed. There are only certain Linux and Windows operating systems supported by OpsWorks instances.
  • App – “Each application is represented by an app, which specifies the application type and contains the information that is needed to deploy the application from the repository to your instances.” [2]
  • Deployment – Runs Chef recipes to deploy the application onto instances based on the defined layer in the stack.

There are also lifecycle events that get executed for each deployment. Lifecycle events are linked to one or more Chef recipes. The five lifecycle events are setup, configure, deploy, undeploy, and shutdown. Events get triggered based upon certain conditions, and some events can be triggered multiple times. They are described in more detail below, with a CloudFormation sketch for wiring custom recipes to these events after the list:

  • setup – When an instance finishes booting as part of the initial setup
  • configure – This event runs on all instances in all layers whenever a new instance comes into service, an EIP changes, or an ELB is attached
  • deploy – When running a deployment on an instance, this event is run
  • undeploy – When an app gets deleted, this event is run
  • shutdown – Before an instance is terminated, this event is run
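
In CloudFormation, you can wire custom Chef recipes to these lifecycle events through the CustomRecipes property of the AWS::OpsWorks::Layer resource; this fragment is a sketch with placeholder cookbook and recipe names:

    "CustomRecipes":{
      "Setup":["mycookbook::setup"],
      "Configure":["mycookbook::configure"],
      "Deploy":["mycookbook::deploy"]
    },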

Solution Architecture and Components

In Figure 2, you see the deployment pipeline and infrastructure architecture for the OpsWorks/CodePipeline integration.

[Figure 2: Deployment Pipeline Architecture for OpsWorks]

Both OpsWorks and CodePipeline are defined in a single CloudFormation stack, which is described in more detail later in this post. Here are the key services and tools that make up the solution:

  • OpsWorks – Configures the operation of your infrastructure in this stack using lifecycle events and Chef
  • CodePipeline – Orchestrate all actions in your software delivery process. In this solution, I provision a CodePipeline pipeline with two stages and one action per stage in CloudFormation
  • CloudFormation – Automates the provisioning of all AWS resources. In this solution, I’m using CloudFormation to automate the provisioning for OpsWorks, CodePipeline,  IAM, and S3
  • CodeCommit – A Git repo used to host the sample application code from this solution
  • PHP – In this solution, I leverage AWS’ OpsWorks sample application written in PHP.
  • IAM – The CloudFormation stack defines an IAM Instance Profile and Roles for controlled access to AWS resources
  • EC2 – A single compute instance is launched as part of the configuration of the OpsWorks stack
  • S3 – Hosts the deployment artifacts used by CodePipeline.

Create and Connect to a CodeCommit Repository

While you can store your software code in any version-control repository, in this solution, I’ll be using the AWS CodeCommit Git repository. I’ll be integrating CodeCommit with CodePipeline. I’m basing the code off of the Amazon OpsWorks PHP Simple Demo App located at https://github.com/awslabs/opsworks-demo-php-simple-app.

To create your own CodeCommit repo, follow these instructions: Create and Connect to an AWS CodeCommit Repository. I called my CodeCommit repository opsworks-php-demo. You can call it the same but if you do name it something different, be sure to replace the samples with your repo name.

After you create your CodeCommit repo, copy the contents from the AWS PHP OpsWorks Demo app and commit all of the files.

Implementation

I created this sample solution by stitching together several available resources including the CloudFormation template provided by the Step-by-Step Tutorial from AWS on integrating OpsWorks with CodePipeline and existing templates we use at Stelligent for CodePipeline. Finally, I manually created the pipeline in CodePipeline using the same step-by-step tutorial and then obtained the configuration of the pipeline using the get-pipeline command as shown in the command snippet below.

aws codepipeline get-pipeline --name OpsWorksPipeline > pipeline.json

This section describes the various resources of the CloudFormation solution in greater detail including IAM Instance Profiles and Roles, the OpsWorks resources, and CodePipeline.

Security Group

Here, you see the CloudFormation definition for the security group that the OpsWorks instance uses. The definition restricts the ingress port to 80 so that only web traffic is accepted on the instance.

    "CPOpsDeploySecGroup":{
      "Type":"AWS::EC2::SecurityGroup",
      "Properties":{
        "GroupDescription":"Lets you manage OpsWorks instances deployed to by CodePipeline"
      }
    },
    "CPOpsDeploySecGroupIngressHTTP":{
      "Type":"AWS::EC2::SecurityGroupIngress",
      "Properties":{
        "IpProtocol":"tcp",
        "FromPort":"80",
        "ToPort":"80",
        "CidrIp":"0.0.0.0/0",
        "GroupId":{
          "Fn::GetAtt":[
            "CPOpsDeploySecGroup",
            "GroupId"
          ]
        }
      }
    },

IAM Role

Here, you see the CloudFormation definition for the OpsWorks instance role. In the same CloudFormation template, there’s a definition for an IAM service role and an instance profile. The instance profile refers to OpsWorksInstanceRole defined in the snippet below.

The roles, policies, and profiles restrict the services and resources to the essential permissions they need to perform their functions.

    "OpsWorksInstanceRole":{
      "Type":"AWS::IAM::Role",
      "Properties":{
        "AssumeRolePolicyDocument":{
          "Statement":[
            {
              "Effect":"Allow",
              "Principal":{
                "Service":[
                  {
                    "Fn::FindInMap":[
                      "Region2Principal",
                      {
                        "Ref":"AWS::Region"
                      },
                      "EC2Principal"
                    ]
                  }
                ]
              },
              "Action":[
                "sts:AssumeRole"
              ]
            }
          ]
        },
        "Path":"/",
        "Policies":[
          {
            "PolicyName":"s3-get",
            "PolicyDocument":{
              "Version":"2012-10-17",
              "Statement":[
                {
                  "Effect":"Allow",
                  "Action":[
                    "s3:GetObject"
                  ],
                  "Resource":"*"
                }
              ]
            }
          }
        ]
      }
    },

Stack

The snippet below shows the CloudFormation definition for the OpsWorks Stack. It makes references to the IAM service role and instance profile, using Chef 11.10 for its configuration, and using Amazon Linux 2016.03 for its operating system. This stack is used as the basis for defining the layer, app, instance, and deployment that are described later in this section.

    "MyStack":{
      "Type":"AWS::OpsWorks::Stack",
      "Properties":{
        "Name":{
          "Ref":"AWS::StackName"
        },
        "ServiceRoleArn":{
          "Fn::GetAtt":[
            "OpsWorksServiceRole",
            "Arn"
          ]
        },
        "ConfigurationManager":{
          "Name":"Chef",
          "Version":"11.10"
        },
        "DefaultOs":"Amazon Linux 2016.03",
        "DefaultInstanceProfileArn":{
          "Fn::GetAtt":[
            "OpsWorksInstanceProfile",
            "Arn"
          ]
        }
      }
    },

Layer

The OpsWorks PHP layer is described in the CloudFormation definition below. It references the OpsWorks stack that was previously created in the same template. It also uses the php-app layer type. For a list of valid types, see CreateLayer in the AWS API documentation. This resource also enables auto healing, assigns public IPs and references the previously-created security group.

    "MyLayer":{
      "Type":"AWS::OpsWorks::Layer",
      "Properties":{
        "StackId":{
          "Ref":"MyStack"
        },
        "Name":"MyLayer",
        "Type":"php-app",
        "Shortname":"mylayer",
        "EnableAutoHealing":"true",
        "AutoAssignElasticIps":"false",
        "AutoAssignPublicIps":"true",
        "CustomSecurityGroupIds":[
          {
            "Fn::GetAtt":[
              "CPOpsDeploySecGroup",
              "GroupId"
            ]
          }
        ]
      },
      "DependsOn":[
        "MyStack",
        "CPOpsDeploySecGroup"
      ]
    },

OpsWorks Instance

In the snippet below, you see the CloudFormation definition for the OpsWorks instance. It references the OpsWorks layer and stack that are created in the same template. It defines the instance type as c3.large and refers to the EC2 Key Pair that you will provide as an input parameter when launching the stack.

    "MyInstance":{
      "Type":"AWS::OpsWorks::Instance",
      "Properties":{
        "LayerIds":[
          {
            "Ref":"MyLayer"
          }
        ],
        "StackId":{
          "Ref":"MyStack"
        },
        "InstanceType":"c3.large",
        "SshKeyName":{
          "Ref":"KeyName"
        }
      }
    },

OpsWorks App

In the snippet below, you see the CloudFormation definition for the OpsWorks app. It refers to the previously created OpsWorks stack and uses the current stack name for the app name – making it unique. In the OpsWorks type, I’m using php. For other supported types, see CreateApp.

I’m using other for the AppSource type because my source type is CodeCommit, which isn’t currently an option in OpsWorks. (OpsWorks doesn’t make the supported AppSource types obvious in the documentation, so I resorted to using the OpsWorks console to determine the possibilities.)

    "MyOpsWorksApp":{
      "Type":"AWS::OpsWorks::App",
      "Properties":{
        "StackId":{
          "Ref":"MyStack"
        },
        "Type":"php",
        "Shortname":"phptestapp",
        "Name":{
          "Ref":"AWS::StackName"
        },
        "AppSource":{
          "Type":"other"
        }
      }
    },

CodePipeline

In the snippet below, you see the CodePipeline definition for the Deploy stage and the DeployPHPApp action in CloudFormation. It takes MyApp as an Input Artifact – which is an Output Artifact of the Source stage and action that obtains code assets from CodeCommit.

The action uses a Deploy category and OpsWorks as the Provider. It takes four inputs for the configuration: StackId, AppId, DeploymentType, LayerId. With the exception of DeploymentType, these values are obtained as references from previously created AWS resources in this CloudFormation template.

For more information, see CodePipeline Concepts.

         {
            "Name":"Deploy",
            "Actions":[
              {
                "InputArtifacts":[
                  {
                    "Name":"MyApp"
                  }
                ],
                "Name":"DeployPHPApp",
                "ActionTypeId":{
                  "Category":"Deploy",
                  "Owner":"AWS",
                  "Version":"1",
                  "Provider":"OpsWorks"
                },
                "OutputArtifacts":[

                ],
                "Configuration":{
                  "StackId":{
                    "Ref":"MyStack"
                  },
                  "AppId":{
                    "Ref":"MyOpsWorksApp"
                  },
                  "DeploymentType":"deploy_app",
                  "LayerId":{
                    "Ref":"MyLayer"
                  }
                },
                "RunOrder":1
              }
            ]
          }

Launch the Stack

Click the button below to launch a CloudFormation stack that provisions the OpsWorks environment including all the resources previously described such as CodePipeline, OpsWorks, IAM Roles, etc.

When launching the stack, you’ll select a value for the KeyName parameter from the drop-down. Optionally, you can enter values for your CodeCommit repository name and branch if they differ from the default values.

opsworks_pipeline_cfn
Figure 3- Parameters for Launching the CloudFormation Stack

You will be charged for your AWS usage – particularly EC2, CodePipeline, and S3.

To launch the same stack from your AWS CLI, type the following (while modifying the same parameter values described above):

aws cloudformation create-stack --stack-name OpsWorksPipelineStack --template-url https://s3.amazonaws.com/stelligent-training-public/public/codepipeline/codepipeline-opsworks.json --region us-east-1 --disable-rollback --capabilities="CAPABILITY_IAM" --parameters  ParameterKey=KeyName,ParameterValue=YOURKEYNAME
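If you’d like the CLI to block until provisioning finishes, you can follow up with a wait command – a small convenience, assuming the same stack name as above:

aws cloudformation wait stack-create-complete --stack-name OpsWorksPipelineStack --region us-east-1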

Outputs

Once the CloudFormation stack successfully launches, there’s an output for the CodePipelineURL. You can click on this value to open the pipeline, which gets the source assets from CodeCommit and launches an OpsWorks stack and its associated resources. See the screenshot below.

cfn_opsworks_pipeline_outputs
Figure 4 – CloudFormation Outputs for CodePipeline/OpsWorks stack
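You can also read the same outputs from the CLI rather than the console; a query along these lines should return the CodePipelineURL value:

aws cloudformation describe-stacks --stack-name OpsWorksPipelineStack \
  --query "Stacks[0].Outputs" --region us-east-1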

Once the pipeline is complete, you can access the OpsWorks stack and click on the Public IP link for one of the instances to launch the PHP application that was deployed using OpsWorks as shown in Figures 5 and 6 below.

opsworks_public_ip.jpg
Figure 5 – Public IP for the OpsWorks instance

 

opsworks_app_before.jpg
Figure 6 – OpsWorks PHP app once initially deployed

Commit Changes to CodeCommit

Make some visual changes to the code (e.g. your local CodeCommit version of index.php) and commit these changes to your CodeCommit repository to see the changes get deployed through your pipeline. You perform these actions from the directory where you cloned a local version of your CodeCommit repo (in the directory created by your git clone command). Some example command-line operations are shown below.

git commit -am "change color to rust orange"
git push

Once these changes have been committed, CodePipeline will discover the changes made to your CodeCommit repo and initiate a new pipeline. After the pipeline is successfully completed, follow the same instructions for launching the application from your browser – as shown in Figure 7.

opsworks_app_after.jpg
Figure 7 – Application after code changes committed to CodeCommit, orchestrated by CodePipeline and deployed by OpsWorks
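If you prefer watching the pipeline from the CLI instead of the console, something like the following works (the pipeline name is generated by the stack, so list pipelines first to find it):

aws codepipeline list-pipelines --region us-east-1
aws codepipeline get-pipeline-state --name YOUR_PIPELINE_NAME --region us-east-1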

Sample Code

The code for the examples demonstrated in this post are located at https://github.com/stelligent/cloudformation_templates/tree/master/labs/opsworks. Let us know if you have any comments or questions @stelligent or @paulduvall.

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “everything-as-code” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

Useful Resources and References

OpsWorks Reference

Below, I’ve documented some additional information that might be useful on the OpsWorks service itself including its available integrations, supported versions and features.

  • OpsWorks supports three application source types: GitHub, S3, and HTTP.
  • You can store up to five versions of an OpsWorks application: the current revision plus four more for rollbacks.
  • When using the create-deployment method, you can target the OpsWorks stack, app, or instance
  • OpsWorks instances require internet access to reach the OpsWorks endpoint
  • Chef supports Windows in version 12
  • You cannot mix Windows and Linux instances in an OpsWorks stack
  • To change the default OS in OpsWorks, you need to change the OS and reprovision the instances
  • You cannot change the VPC for an OpsWorks instance
  • You can add ELB, EIPs, Volumes and RDS to an OpsWorks stack
  • OpsWorks autoheals at the layer level
  • You can assign multiple Chef recipes to an OpsWorks layer event
  • The three instance types in OpsWorks are: 24/7, time-based, load-based
  • To initiate a rollback in OpsWorks, you use the create-deployment command
  • The following commands are available when using OpsWorks create-deployment, along with possible use cases (see the CLI sketch after this list):
    • install_dependencies
    • update_dependencies – Patches to the Operating System. Not available after Chef 12.
    • update_custom_cookbooks – pulling down changes in your Chef cookbooks
    • execute_recipes – manually run specific Chef recipes that are defined in your layers
    • configure – service discovery or whenever endpoints change
    • setup
    • deploy
    • rollback
    • start
    • stop
    • restart
    • undeploy
  • To use multiple custom cookbook repositories in OpsWorks, enable custom cookbooks at the stack level and then create a cookbook with a Berksfile that lists multiple sources. Before Chef 11.10, you couldn’t use multiple cookbook repositories.
  • You can define Chef data bags in OpsWorks for Users, Stacks, Layers, Apps and Instances
  • OpsWorks Auto Healing is triggered when the OpsWorks agent detects a loss of communication; OpsWorks then stops and restarts the instance. If that fails, it requires manual intervention
  • OpsWorks will not auto heal an upgrade to the OS
  • OpsWorks does not auto heal by monitoring performance, only failures.
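As referenced in the list above, here’s a sketch of running two of these commands through the create-deployment API with the AWS CLI. The stack and app IDs are placeholders you’d copy from your own OpsWorks resources:

# Deploy the latest app revision
aws opsworks create-deployment --stack-id YOUR_STACK_ID \
  --app-id YOUR_APP_ID --command '{"Name":"deploy"}' --region us-east-1

# Roll back to the previous app revision
aws opsworks create-deployment --stack-id YOUR_STACK_ID \
  --app-id YOUR_APP_ID --command '{"Name":"rollback"}' --region us-east-1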

Acknowledgements

My colleague Casey Lee provided some of the background information on OpsWorks features. I also used several resources from AWS including the PHP sample app and the step-by-step tutorial on the OpsWorks/CodePipeline integration.

Automating ECS: Orchestrating in CodePipeline and CloudFormation (Part 2)

In my first post on automating the EC2 Container Service (ECS), I described how I automated the provisioning of ECS in AWS CloudFormation using its JSON-based DSL.

In this second and last part of the series, I will demonstrate how to create a deployment pipeline in AWS CodePipeline to deploy changes to ECS Docker images in the EC2 Container Registry (ECR).

In doing this, you’ll not only see how to automate the creation of the infrastructure but also automate the deployment of the application and its infrastructure via Docker containers. This way you can commit infrastructure, application and deployment changes as code to your version-control repository and have these changes automatically deployed to production or production-like environments.

The benefit is the customer responsiveness this embodies: you can deploy new features or fixes to users in minutes, not days or weeks.

Pipeline Architecture

In the figure below, you see the high-level architecture for the deployment pipeline.

Deployment Pipeline Architecture for ECS

With the exception of the CodeCommit repository creation, most of the architecture is implemented in a CloudFormation template. Some of this is the result of not requiring a traditional configuration management tool to perform configuration on compute instances.

CodePipeline is a Continuous Delivery service that enables you to orchestrate every step of your software delivery process in a workflow that consists of a series of stages and actions. These actions perform the steps of your software delivery process.

In CodePipeline, I’ve defined two stages: Source and Build. The Source stage retrieves code artifacts via a CodeCommit repository whenever someone commits a new change. This initiates the pipeline. CodePipeline is integrated with the Jenkins Continuous Integration server. The Build stage updates the ECS Docker image (which runs a small PHP web application) within ECR and makes the new application available through an ELB endpoint.

Jenkins is installed and configured on an Amazon EC2 instance within an Amazon Virtual Private Cloud (VPC). The CloudFormation template runs commands to install and configure the Jenkins server, install and configure Docker, install and configure the CodePipeline plugin and configure the job that’s run as part of the CodePipeline build action. The Jenkins job is configured to run a bash script that’s committed to the CodeCommit repository. This bash script updates the ECS service and task definition by running a Docker build, tag and push to the ECR repository. I describe the implementation of this architecture in more detail in this post.

Jenkins

In this example, CodePipeline manages the orchestration of the software delivery workflow. Since CodePipeline doesn’t actually execute the actions, you need to integrate it with an execution platform. To perform the execution of the actions, I’m using the Jenkins Continuous Integration server. I’ll configure a CodePipeline plugin for Jenkins so that Jenkins executes certain CodePipeline actions.

In particular, I have an action to update an ECS service. I do this by running a CloudFormation update on the stack. CloudFormation looks for any differences in the templates and applies those changes to the existing stack.

To orchestrate and execute this CloudFormation update, I configure a CodePipeline custom action that calls a Jenkins job. In this Jenkins job, I call a shell script passing several arguments.

Provision Jenkins in CloudFormation

In the CloudFormation template, I create an EC2 instance on which I will install and configure the Jenkins server. This CloudFormation script is based on the CodePipeline starter kit.

To launch a Jenkins server in CloudFormation, you will use the AWS::EC2::Instance resource. Before doing this, you’ll create an IAM role and an EC2 security group in the already provisioned VPC (the VPC provisioning is part of the CloudFormation script).

Within the Metadata attribute of the resource (i.e. the EC2 instance on which Jenkins will run), you use the AWS::CloudFormation::Init resource to define the user data configuration. To apply your changes, you call cfn-init to run commands on the EC2 instance like this:

"/opt/aws/bin/cfn-init -v -s ",

Then, you can install and configure Docker:

"# Install Docker\n",
"cd /tmp/\n",
"yum install -y docker\n",

On this same instance, you will install and configure the Jenkins server:

"# Install Jenkins\n",
...
"yum install -y jenkins-1.658-1.1\n",
"service jenkins start\n",

And, apply the dynamic Jenkins configuration for the job so that it updates the CloudFormation stack based on arguments passed to the shell script.

"/bin/sed -i \"s/MY_STACK/",
{
"Ref":"AWS::StackName"
},
"/g\" /tmp/config-template.xml\n",

In the config-template.xml, I added tokens that get replaced as part of the commands run from the CloudFormation template. You can see a snippet of this below in which the command for the Jenkins job makes a call to the configure-ecs.sh bash script with some tokenized parameters.

<command>bash ./configure-ecs.sh MY_STACK MY_ACCTID MY_ECR</command>

All of the commands for installing and configuring the Jenkins Server, Docker, the CodePipeline plugin and Jenkins jobs are described in the CloudFormation template that is hosted in the version-control repository.

Jenkins Job Configuration Template

In the previous code snippets from CloudFormation, you see that I’m using sed to update a file called  config-template.xml. This is a Jenkins job configuration file for which I’m updating some token variables with dynamic information that gets passed to it from CloudFormation. This information is used to run a bash script to update the CloudFormation stack – which is described in the next section.

ECS Service Script to Update CloudFormation Stack

The code snippet below shows how the bash script captures the arguments passed by the Jenkins job into bash variables. Later in the script, it uses these variables to call the update-stack command of the CloudFormation API and apply a new ECS Docker image to the endpoint.

MY_STACK=$1
MY_ACCTID=$2
MY_ECR=$3

uuid=$(date +%s)
awsacctid="$MY_ACCTID"
ecr_repo="$MY_ECR"
ecs_stack_name="$MY_STACK"
ecs_template_url="$MY_URL"

In the code snippet below of the configure-ecs.sh script, I’m building, tagging and pushing to the Docker repository in my EC2 Container Registry repository using the dynamic values passed to this script from Jenkins (which were initially passed from the parameters and resources of my CloudFormation script).

In doing this, it creates a new Docker image for each commit and tags it with a unique id based on date and time. Finally, it uses the AWS CLI to call the update-stack command of the CloudFormation API using the variable information.

eval $(aws --region us-east-1 ecr get-login)

# Build, Tag and Deploy Docker
docker build -t $ecr_repo:$uuid .
docker tag $ecr_repo:$uuid $awsacctid.dkr.ecr.us-east-1.amazonaws.com/$ecr_repo:$uuid
docker push $awsacctid.dkr.ecr.us-east-1.amazonaws.com/$ecr_repo:$uuid

aws cloudformation update-stack --stack-name $ecs_stack_name \
  --template-url $ecs_template_url --region us-east-1 \
  --capabilities="CAPABILITY_IAM" --parameters \
  ParameterKey=AppName,UsePreviousValue=true \
  ParameterKey=ECSRepoName,UsePreviousValue=true \
  ParameterKey=DesiredCapacity,UsePreviousValue=true \
  ParameterKey=KeyName,UsePreviousValue=true \
  ParameterKey=RepositoryBranch,UsePreviousValue=true \
  ParameterKey=RepositoryName,UsePreviousValue=true \
  ParameterKey=InstanceType,UsePreviousValue=true \
  ParameterKey=MaxSize,UsePreviousValue=true \
  ParameterKey=S3ArtifactBucket,UsePreviousValue=true \
  ParameterKey=S3ArtifactObject,UsePreviousValue=true \
  ParameterKey=SSHLocation,UsePreviousValue=true \
  ParameterKey=YourIP,UsePreviousValue=true \
  ParameterKey=ImageTag,ParameterValue=$uuid

Now that you’ve seen the basics of installing and configuring Jenkins in CloudFormation and what happens when the Jenkins job is run through the CodePipeline orchestration, let’s look at the steps for configuring the CodePipeline part of the CodePipeline/Jenkins configuration.

Create a Pipeline using AWS CodePipeline

Before I create a working pipeline, I prefer to model the stages and actions in CodePipeline using Lambda so that I can think through the workflow. To do this I refer to my blog post on Mocking AWS CodePipeline pipelines with Lambda. I’m going to create a two-stage pipeline consisting of a Source and a Build stage. These stages and the actions in these stages are described in more detail below.

Define a Custom Action

There are five types of action categories in CodePipeline: Source, Build, Deploy, Invoke and Test. Each action has four attributes: category, owner, provider and version. There are three types of action owners: AWS, ThirdParty and Custom. AWS refers to built-in actions provided by AWS. Currently, there are four built-in action providers from AWS: S3, CodeCommit, CodeDeploy and ElasticBeanstalk. Examples of ThirdParty action providers include RunScope and GitHub. If none of the action providers suit your needs, you can define custom actions in CodePipeline. In my case, I wanted to run a script from a Jenkins job, so I used the CloudFormation sample configuration from the CodePipeline starter kit for the configuration of the custom build action that I use to integrate Jenkins with CodePipeline. See the snippet below.

    "CustomJenkinsActionType":{
      "Type":"AWS::CodePipeline::CustomActionType",
      "DependsOn":"JenkinsHostWaitCondition",
      "Properties":{
        "Category":"Build",
        "Provider":{
          "Fn::Join":[
            "",
            [
              {
                "Ref":"AppName"
              },
              "-Jenkins"
            ]
          ]
        },
        "Version":"1",
        "ConfigurationProperties":[
          {
            "Key":"true",
            "Name":"ProjectName",
            "Queryable":"true",
            "Required":"true",
            "Secret":"false",
            "Type":"String"
          }
        ],
        "InputArtifactDetails":{
          "MaximumCount":5,
          "MinimumCount":0
        },
        "OutputArtifactDetails":{
          "MaximumCount":5,
          "MinimumCount":0
        },
        "Settings":{
          "EntityUrlTemplate":{
            "Fn::Join":[
              "",
              [
                "http://",
                {
                  "Fn::GetAtt":[
                    "JenkinsServer",
                    "PublicIp"
                  ]
                },
                "/job/{Config:ProjectName}"
              ]
            ]
          },
          "ExecutionUrlTemplate":{
            "Fn::Join":[
              "",
              [
                "http://",
                {
                  "Fn::GetAtt":[
                    "JenkinsServer",
                    "PublicIp"
                  ]
                },
                "/job/{Config:ProjectName}/{ExternalExecutionId}"
              ]
            ]
          }
        }
      }
    },

The example pipeline that I’ve defined in CodePipeline (and described as code in CloudFormation) uses the above custom action in the Build stage of the pipeline, which is described in more detail in the Build Stage section later.
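For reference, you can also register this kind of custom action outside of CloudFormation. A minimal sketch, assuming you’ve written the same category, provider, version, settings, configuration properties, and artifact details from the template above into a JSON file (the file name here is hypothetical):

# MyCustomAction.json mirrors the CustomJenkinsActionType properties above
aws codepipeline create-custom-action-type --cli-input-json file://MyCustomAction.json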

Source Stage

The Source stage has a single action that looks for any changes to a CodeCommit repository. If it discovers any new commits, it retrieves the artifacts from the CodeCommit repository and stores them in an encrypted form in an S3 bucket. If it’s successful, it transitions to the next stage: Build. A snippet from the CodePipeline resource definition for the Source stage in CloudFormation is shown below.

        "Stages":[
          {
            "Name":"Source",
            "Actions":[
              {
                "InputArtifacts":[

                ],
                "Name":"Source",
                "ActionTypeId":{
                  "Category":"Source",
                  "Owner":"AWS",
                  "Version":"1",
                  "Provider":"CodeCommit"
                },
                "OutputArtifacts":[
                  {
                    "Name":{
                      "Fn::Join":[
                        "",
                        [
                          {
                            "Ref":"AWS::StackName"
                          },
                          "-SourceArtifact"
                        ]
                      ]
                    }
                  }
                ],
                "Configuration":{
                  "BranchName":{
                    "Ref":"RepositoryBranch"
                  },
                  "RepositoryName":{
                    "Ref":"RepositoryName"
                  }
                },
                "RunOrder":1
              }
            ]
          },

Build Stage

The Build stage invokes actions to create a new ECS repository if one doesn’t exist, builds and tags a Docker image and makes a call to a CloudFormation template to launch the rest of the ECS environment – including creating an ECS cluster, task definition, ECS services, ELB, Security Groups and IAM resources. It does this using the custom CodePipeline action for Jenkins that I described earlier. A snippet from the CodePipeline resource definition in CloudFormation for the Build stage is shown below.

          {
            "Name":"Build",
            "Actions":[
              {
                "Name":"DeployPHPApp",
                "InputArtifacts":[
                  {
                    "Name":{
                      "Fn::Join":[
                        "",
                        [
                          {
                            "Ref":"AWS::StackName"
                          },
                          "-SourceArtifact"
                        ]
                      ]
                    }
                  }
                ],
                "ActionTypeId":{
                  "Category":"Build",
                  "Owner":"Custom",
                  "Version":"1",
                  "Provider":{
                    "Fn::Join":[
                      "",
                      [
                        {
                          "Ref":"AWS::StackName"
                        },
                        "-Jenkins"
                      ]
                    ]
                  }
                },
                "OutputArtifacts":[
                  {
                    "Name":{
                      "Fn::Join":[
                        "",
                        [
                          {
                            "Ref":"AWS::StackName"
                          },
                          "-BuiltArtifact"
                        ]
                      ]
                    }
                  }
                ],
                "Configuration":{
                  "ProjectName":{
                    "Ref":"AWS::StackName"
                  }
                },
                "RunOrder":1
              }
            ]
          }

The custom action for Jenkins (via the CodePipeline plugin) is looking for work from CodePipeline. When it finds work, it performs the task associated with the CodePipeline action. In this case, it runs the Jenkins job that calls the configure-ecs.sh script. This bash script makes an update-stack call to the original CloudFormation template, passing in the new image via the ImageTag parameter, which is the new tag generated for the Docker image created as part of this script.

CloudFormation applies the minimum necessary changes to the infrastructure based on the stack update. In this case, I’m only providing a new image tag, but this results in creating a new ECS task definition for the service. In your CloudFormation events console, you’ll see a message similar to the one below:

AWS::ECS::TaskDefinition Requested update requires the creation of a new physical resource; hence creating one.

As I mentioned in part 1 of this series, I defined a DeploymentConfiguration type with a MinimumHealthyPercent property of 0 since I’m only using one EC2 instance while running through the earlier stages of the pipeline. This means the application experiences a few seconds of downtime during the update. Like most applications/services these days, if I needed continual uptime, I’d increase the number of instances in my Auto Scaling Group and increase the MinimumHealthyPercent property.

Other Stages

In the example I provided, I stop at the Build stage. If you were to take this to production, you might include other stages as well. Perhaps you might have a “Staging” stage in which you might include actions to deploy the application to the ECS containers using a production-like configuration which might include more instances in the Auto Scaling Group.

Once Staging is complete, the pipeline would automatically transition to the Production stage where it might make Lambda calls to test the application running in ECS containers. If everything looks ok, it switches the Route 53 hosted zone endpoint to the new container.

Launch the ECS Stack and Pipeline

In this section, you’ll launch the CloudFormation stack that creates the ECS and Pipeline resources.

Prerequisites

You need to have already created an ECR repository and a CodeCommit repository to successfully launch this stack. For instructions on creating an ECR repository, see part 1 of this series (or to directly launch the CloudFormation stack to create this ECR repository, click this button: .) For creating a CodeCommit repository, you can either see part 1 or use the instructions described at: Create and Connect to an AWS CodeCommit Repository.
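If you’d rather create the ECR repository from the CLI than via the button or the part 1 instructions, a one-liner along these lines works (the repository name is a placeholder):

aws ecr create-repository --repository-name YOUR_ECR_REPO --region us-east-1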

Launch the Stack

Click the button below to launch a CloudFormation stack that provisions the ECS environment including all the resources previously described such as CodePipeline, ECS Cluster, ECS Task Definition, ECS Service, ELB, VPC resources, IAM Roles, etc.

You’ll enter values for the following parameters: RepositoryName, YourIP, KeyName, and ECSRepoName.

To launch the same stack from your AWS CLI, type the following (while modifying the same parameter values described above):

aws cloudformation create-stack --stack-name ecs-stack-1648 --template-url https://s3.amazonaws.com/stelligent-training-public/public/codepipeline/ecs-pipeline.json --region us-east-1 --disable-rollback --capabilities="CAPABILITY_IAM" --parameters ParameterKey=RepositoryName,ParameterValue=YOURCCREPO ParameterKey=RepositoryBranch,ParameterValue=master ParameterKey=KeyName,ParameterValue=YOUREC2KEYPAIR ParameterKey=YourIP,ParameterValue=YOURIP/32 ParameterKey=ECSRepoName,ParameterValue=YOURECRREPO ParameterKey=ECSCFNURL,ParameterValue=NOURL ParameterKey=AppName,ParameterValue=app-name-1648

Outputs

Once the CloudFormation stack successfully launches, there are several outputs but the two most relevant are AppURL and CodePipelineURL. You can click on the AppURL value to launch the PHP application running on ECS from the ELB endpoint. The CodePipelineURL output value launches the generated pipeline from the CodePipeline console. See the screenshot below.

codepipeline_beanstalk_cfn_outputs  

Access the Application

Once the stack successfully completes, go to the Outputs tab for the CloudFormation stack and click on the AppURL value to launch the application.

codepipeline_ecs_php_app_before

Commit Changes to CodeCommit

Make some visual changes to the code and commit these changes to your CodeCommit repository to see these changes get deployed through your pipeline. You perform these actions from the directory where you cloned a local version of your CodeCommit repo (in the directory created by your git clone command). Some example command-line operations are shown below.

git commit -am "change color to pink"
git push

Once these changes have been committed, CodePipeline will discover the changes made to your CodeCommit repo and initiate a new pipeline. After the pipeline is successfully completed, follow the same instructions for launching the application from your browser.

codepipeline_ecs_php_app_after

Making Modifications

While the solution can work “straight out of the box”, if you’d like to make some changes, I’ve included a few sections of the code that you’ll need to modify.

configure-ecs.sh

The purpose of the configure-ecs.sh Bash script is to run the Docker commands to build, tag, and push the image, and to update the existing CloudFormation stack so that the ECS service and task are updated. The source for this bash script is here: https://github.com/stelligent/cloudformation_templates/blob/master/labs/ecs/configure-ecs.sh. I hard-coded the ecs_template_url variable to a specific S3 location. You can download the source file from either GitHub or S3, make your desired modifications, and then point the ecs_template_url variable at the new location (presumably in S3), as sketched below.
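For example, after uploading your modified template, the one-line change in your copy of the script might look like this (the bucket and key are placeholders):

ecs_template_url="https://s3.amazonaws.com/YOUR_BUCKET/YOUR_PATH/ecs-environment.json"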

config-template.xml

The config-template.xml file is the Jenkins job configuration for the update-ECS action. This XML file contains tokens that get replaced from the ecs-pipeline.json CloudFormation template with dynamic information like the CloudFormation stack name, account id, etc. The file is obtained via a wget command from within the template and is stored in S3 at https://s3.amazonaws.com/stelligent-training-public/public/jenkins/config-template.xml, so you can copy it to an S3 location in your own account and update the CloudFormation template to point to the new location. In doing this, you can modify any of the behavior of the updates to the file when used by Jenkins.
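A minimal sketch of hosting your own copy, assuming a bucket you control and that the object needs to be publicly readable for the instance’s wget to fetch it:

aws s3 cp config-template.xml s3://YOUR_BUCKET/jenkins/config-template.xml --acl public-read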

Summary

In this series, you learned how to use CloudFormation to fully automate the provisioning of the Elastic Container Service along with a CodePipeline pipeline that uses CodeCommit as its version-control repository so that whenever a change is made to the Git repo, the changes are automatically applied to a PHP application hosted on ECS images.

By modeling your pipeline in CodePipeline, you can add even more stages and actions as part of your Continuous Delivery process so that it runs through all the tests and other checks, enabling you to deliver changes to production whenever there’s a business need to do so.

Sample Code

The code for the examples demonstrated in this post are located at https://github.com/stelligent/cloudformation_templates/tree/master/labs/ecs. Let us know if you have any comments or questions @stelligent or @paulduvall.

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “everything-as-code” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

Notes

The sample solution currently only works in the us-east-1 AWS region. You will be charged for your AWS usage – including EC2, S3, CodePipeline and other services.

Resources

Here’s a list of some of the resources described in or that influenced this post:

 

Automating ECS: Provisioning in CloudFormation (Part 1)

In this two-part series, you’ll learn how to provision, configure, and orchestrate EC2 Container Service (ECS) applications into a deployment pipeline that’s capable of deploying new infrastructure and code changes when developers commit changes to a version-control repository – so that team members can release new changes to users whenever they choose to do so: Continuous Delivery.

While the primary AWS service described in this solution is ECS, I’ll also be covering the various components and services that support this solution including AWS CloudFormation, EC2 Container Registry (ECR), Docker, Identity and Access Management (IAM), VPC and Auto Scaling Services – to name a few. In part 2, I’ll be covering the integration of CodePipeline, Jenkins and CodeCommit in greater detail.

ECS allows you to run Docker containers on AWS. The benefits of ECS and Docker include the following:

  • Portability – You can build on one Linux operating system and have it work on others without modification. It’s also portable across environment types so you can build it in development and use the same image in production.
  • Scalability – You can run multiple images on the same EC2 instance to scale thousands of tasks across a cluster.
  • Speed – Increase your speed of development and speed of runtime execution.

“ECS is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. Amazon ECS eliminates the need for you to install, operate, and scale your own cluster management infrastructure.” [1]

The reason you might use Docker-based containers over traditional virtual machine-based application deployments is that it allows a faster, more flexible, and still very robust immutable deployment pattern in comparison with services such as traditional Elastic Beanstalk, OpsWorks, or native EC2 instances.

While you can very effectively integrate Docker into Elastic Beanstalk, ECS provides greater overall flexibility.

The reason you might use ECS or Elastic Beanstalk containers with EC2 Container Registry over similar offerings such as Docker Hub or Docker Trusted Registry is higher performance, better availability, and lower pricing. In addition, ECR utilizes other AWS services such as IAM and S3, allowing you to compose more secure or robust patterns to meet your needs.

Based on the current implementation of Lambda, the reasons you might choose to utilize ECS instead of serverless architectures include:

  • Lower latency in request response time
  • Flexibility in the underlying language stack to use
  • Elimination of AWS Lambda service limits (requests per second, code size, total code runtime)
  • Greater control of the application runtime environment
  • The ability to link modules in ways not possible with Lambda functions

I’ll be using a sample PHP application provided by AWS to demonstrate a Continuous Delivery pipeline using ECS, CloudFormation and, in part 2, AWS CodePipeline.

Create and Connect to a CodeCommit Repository

While you can store your application code in any version-control repository, in this example, I’ll be using the AWS CodeCommit Git repository. I’ll be integrating CodeCommit with CodePipeline. I’m basing the code from the Amazon ECS PHP Simple Demo App located at https://github.com/awslabs/ecs-demo-php-simple-app.

To create your own CodeCommit repo,  follow these instructions: Create and Connect to an AWS CodeCommit Repository. Take note of the repository name as you’ll be using it as a CloudFormation user parameter in part 2. I called my CodeCommit repository ecs-demo. You can call it the same but if you do name it something different, be sure to replace the samples with your repo name.

After you create your CodeCommit repo, copy the contents from the AWS PHP ECS Demo app and commit all of the files.

CodeCommit provides the following features and benefits[2]:

  • Highly available, Secure and Private Git repositories
  • Use your existing Git tools
  • Automatically encrypts all files in transit and at rest
  • Provides Webhooks – to trigger Lambda functions or push notifications in response to events
  • Integrated with other AWS services like IAM so you can define user-specific permissions

Create a Private Image Repository in ECS using ECR

codepipeline_ecr_arch
You can create private Docker repositories using EC2 Container Registry (ECR) to store your Docker images. Follow these instructions to manually create an ECR repository: Create a Repository.

A snippet of the CloudFormation template for provisioning an ECR repo is listed below.

    "MyRepository":{
      "Type":"AWS::ECR::Repository",
      "Properties":{
        "RepositoryName":{
          "Ref":"AWS::StackName"
        },
        "RepositoryPolicyText":{
          "Version":"2008-10-17",
          "Statement":[
            {
              "Sid":"AllowPushPull",
              "Effect":"Allow",
              "Principal":{
                "AWS":[
                  {
                    "Fn::Join":[
                      "",
                      [
                        "arn:aws:iam::",
                        {
                          "Ref":"AWS::AccountId"
                        },
                        ":user/",
                        {
                          "Ref":"IAMUsername"
                        }
                      ]
                    ]
                  }
                ]
              },
              "Action":[
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage",
                "ecr:BatchCheckLayerAvailability",
                "ecr:PutImage",
                "ecr:InitiateLayerUpload",
                "ecr:UploadLayerPart",
                "ecr:CompleteLayerUpload"
              ]
            }
          ]
        }
      }
    }

In defining an ECR, you can securely store your Docker images and refer to them when building, tagging and pushing these Docker images.

To launch the CloudFormation stack to create an ECR repository, click this button: . Your IAM username is a parameter to this CloudFormation template. You only need to enter the IAM username (and not the entire ARN) as the input value. Make note of the ECSRepository Output from the stack as you’ll be using this as an input to the ECS Environment Stack in part 2.

Docker

“Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.” [3] In this demonstration, you’ll build, tag and push a PHP application as a Docker image into an ECR repository.

Build Docker Image and Upload to ECR Locally

Prerequisites

  • You’re running these commands from an Amazon Linux EC2 instance. If you’re not, you’ll need to adapt the instructions according to your OS flavor.
  • You’ve created an ECR repo (see the “Create a Private Image Repository in ECS using ECR” section above)
  • You’ve created a CodeCommit repository and committed the PHP code from the AWS PHP app in GitHub (see the “Create and Connect to a CodeCommit Repository” section above)

Steps

  1. Install Docker on an Amazon Linux EC2 instance for which your AWS CLI has been configured (you can find detailed instructions at Install Docker)
    sudo yum update -y
    sudo yum install -y docker
    sudo service docker start
    sudo usermod -a -G docker ec2-user
  2. Logout and log back in and type:
    docker info
  3. Install Git:
    sudo yum -y install git*
  4. Clone the ECS PHP example application (if you used a different repo name, be sure to update the sample command here):
    git clone ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/ecs-demo
  5. Change your directory:
    cd ecs-demo
  6. Configure your AWS account by running the command below and following the prompts to enter your credentials, region and output format.
    aws configure
  7. Run the command below to login to ECR.
    eval $(aws --region us-east-1 ecr get-login)
  8. Build the image using Docker. Replace REPOSITORY_NAME with the ECSRepository Output from the ECR stack you launched and TAG with a unique value. Make note of the image tag you’re using when creating the Docker image, as you’ll be using it as an input parameter to a CloudFormation stack later. If you want to use the default value, just name it latest.
    docker build -t REPOSITORY_NAME:TAG .
  9. Tag the image (replace REPOSITORY_NAME, TAG and AWS_ACCOUNT_ID):
    docker tag REPOSITORY_NAME:TAG AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/REPOSITORY_NAME:TAG
  10. Push the tagged image to ECR (replace REPOSITORY_NAME, AWS_ACCOUNT_ID and TAG):
    docker push AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/REPOSITORY_NAME:TAG
  11. Verify the image was uploaded to your ECS Repository by going to your AWS ECS Console, clicking on Repositories, and selecting the repository you created when you launched the ECS Stack (or use the CLI check shown after this list).
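As mentioned in step 11, you can also verify the push from the command line; a describe call along these lines should list the image and its tag (replace REPOSITORY_NAME as before):

aws ecr describe-images --repository-name REPOSITORY_NAME --region us-east-1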

Dockerfile

“A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build users can create an automated build that executes several command-line instructions in succession.” [4] The snippet you see below is the Dockerfile to run the PHP sample application. You can see that it runs OS updates, installs the required packages including Apache and PHP, and then configures the HTTP server and port. While these are the types of steps you might run in any automated build and deployment script, the difference is that they run within a container, which means they run very quickly, work the same across operating systems, and can be run across multiple tasks in a cluster.

FROM ubuntu:12.04

# Install dependencies
RUN apt-get update -y
RUN apt-get install -y git curl apache2 php5 libapache2-mod-php5 php5-mcrypt php5-mysql

# Install app
RUN rm -rf /var/www/*
ADD src /var/www

# Configure apache
RUN a2enmod rewrite
RUN chown -R www-data:www-data /var/www
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2

EXPOSE 80

CMD ["/usr/sbin/apache2", "-D",  "FOREGROUND"]

This Dockerfile gets run when you run the docker build command. This file has been committed to my CodeCommit repo as you can see in the figure below.

ecs_codecommit
AWS CodeCommit repository for a PHP application illustrating Dockerfile location

Create an ECS Environment in CloudFormation

In this section, I’m describing how to configure the entire ECS stack in CloudFormation. This includes the architecture, its dependencies, and the key CloudFormation resources that make up the stack.

Architecture

The overall solution architecture is illustrated in the CloudFormation diagram below.

codepipeline_ecs_arch.jpg
Provisioning, Configuring and Orchestrating an EC2 Container Service Architecture
  • Auto Scaling Group – I’m using an auto scaling group to scale the underlying EC2 infrastructure in the ECS Cluster. It’s used in conjunction with the Launch Configuration.
  • Auto Scaling Launch Configuration – I’m using a launch configuration to scale the underlying EC2 infrastructure in the ECS Cluster. It’s used in conjunction with the  Auto Scaling Group.
  • CodeCommit – I’m using CodeCommit as my Git repo to store the application and infrastructure code.
  • CodePipeline – CodePipeline describes my Continuous Delivery workflow. In particular, it integrates with CodeCommit and Jenkins to run actions every time someone commits new code to the CodeCommit repo. This will be covered in more detail in part 2.
  • ECS Cluster – “An ECS cluster is a logical grouping of container instances that you can place tasks on.”[6]
  • ECS Service – With an ECS service, you can run a specific number of instances of a task definition simultaneously in an ECS cluster [5]
  • ECS Task Definition – A task definition is the core resource within ECS. This is where you define which Docker images to run, CPU/Memory, ports, commands and so on. Everything else in ECS is based upon the task definition
  • Elastic Load Balancer – The ELB provides the endpoint for the application. The ELB dynamically determines which EC2 instance in the cluster is serving the running ECS tasks at any given time.
  • IAM Instance Profile – “An instance profile is a container for an IAM role that you can use to pass role information to an EC2 instance when the instance starts.” [7] In the sample, I’m using the instance profile to pass the role to the launch configuration for the underlying EC2 instances on which the ECS cluster runs
  • IAM Roles – I’m describing roles that have access to certain AWS resources for the EC2 instances (for ECS), Jenkins and CodePipeline
  • Jenkins – I’m using Jenkins to execute the actions that I’ve defined in CodePipeline. For example, I have a bash script that updates the CloudFormation stack when an ECS Service is updated. This action is orchestrated via CodePipeline and then executed on the Jenkins server via one of its configured jobs. This will be covered in more detail in part 2.
  • Virtual Private Cloud (VPC) – In the CloudFormation template, I’m using a VPC template that we developed to define VPC resources such as: VPCGatewayAttachment, SecurityGroup, SecurityGroupIngress, SecurityGroupEgress, SubnetNetworkAclAssociation, NetworkAclEntry, NetworkAcl, SubnetRouteTableAssociation, Route, RouteTable, InternetGateway, and Subnet

Dependencies

There are four core dependencies in this solution: an EC2 key pair, a CodeCommit repo, a VPC, and an ECR repo with a Docker image.

  • EC2 Key Pair – A key pair for which you have access. See Create a Key Pair.
  • CodeCommit – In this demo, I’m using an AWS CodeCommit Git repo to store the PHP application code along with my Docker configuration. See the instructions for configuring a Git repo in CodeCommit above
  • VPC – This template requires an existing AWS Virtual Private Cloud has been created
  • ECR repo and image – You should have created an EC2 Container Registry (ECR) repository using the CloudFormation template from the previous section. You should have also built, tagged, and pushed a Docker image to ECR using the instructions described in Create a Private Image Repository in ECS using ECR above

ECS Cluster

With an ECS Cluster, you can manage multiple services. An ECS Container Instance runs an ECS agent that is registered to the ECS Cluster. To define an ECS Cluster in CloudFormation, use the Cluster resource: AWS::ECS::Cluster as shown below.

    "EcsCluster":{
      "Type":"AWS::ECS::Cluster",
      "DependsOn":[
        "MyVPC"
      ]
    },
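Once the stack is up, a quick way to confirm that container instances registered with the cluster is a CLI call like this (the cluster name is generated by CloudFormation, so copy it from the stack resources):

aws ecs list-container-instances --cluster YOUR_CLUSTER_NAME --region us-east-1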

ECS Service

An ECS Service defines a task definition and a desired number of task instances. A service manages tasks of a specified task definition.

In the context of ECS, an ELB distributes load between the different EC2 instances hosting your tasks, so you can optionally create a new ELB when creating a service.

To define an ECS Service in CloudFormation, use the Service resource: AWS::ECS::Service.

    "EcsService":{
      "Type":"AWS::ECS::Service",
      "DependsOn":[
        "MyVPC",
        "ECSAutoScalingGroup"
      ],
      "Properties":{
        "Cluster":{
          "Ref":"EcsCluster"
        },
        "DesiredCount":"1",
        "DeploymentConfiguration":{
          "MaximumPercent":100,
          "MinimumHealthyPercent":0
        },
        "LoadBalancers":[
          {
            "ContainerName":"php-simple-app",
            "ContainerPort":"80",
            "LoadBalancerName":{
              "Ref":"EcsElb"
            }
          }
        ],
        "Role":{
          "Ref":"EcsServiceRole"
        },
        "TaskDefinition":{
          "Ref":"PhpTaskDefinition"
        }
      }
    },

Notice that I defined a DeploymentConfiguration with a MinimumHealthyPercent of 0. Since I’m only using one EC2 instance in development, the ECS service would fail during a CloudFormation update unless MinimumHealthyPercent is set to zero; with that setting, the application experiences only a few seconds of downtime during the update. Like most applications/services these days, if I needed continual uptime, I’d increase the number of instances in my Auto Scaling Group and increase the MinimumHealthyPercent property.

Task Definition

With an ECS Task Definition, you can define multiple Container Definitions and volumes. With a Container Definition, you define port mappings, environment variables, CPU Units and Memory. An ECS Volume is a persistent volume to mount and map to container volumes.

To define an ECS Task Definition, use the ECS Task Definition resource: AWS::ECS::TaskDefinition.

    "PhpTaskDefinition":{
      "Type":"AWS::ECS::TaskDefinition",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "ContainerDefinitions":[
          {
            "Name":"php-simple-app",
            "Cpu":"10",
            "Essential":"true",
            "Image":{
              "Fn::Join":[
                "",
                [
                  {
                    "Ref":"AWS::AccountId"
                  },
                  ".dkr.ecr.us-east-1.amazonaws.com/",
                  {
                    "Ref":"ECSRepoName"
                  },
                  ":",
                  {
                    "Ref":"ImageTag"
                  }
                ]
              ]
            },
            "Memory":"300",
            "PortMappings":[
              {
                "HostPort":80,
                "ContainerPort":80
              }
            ]
          }
        ],
        "Volumes":[
          {
            "Name":"my-vol"
          }
        ]
      }
    },

Auto Scaling

To define an Auto Scaling Group, use the Auto Scaling Group resource in CloudFormation: AWS::AutoScaling::AutoScalingGroup.

    "ECSAutoScalingGroup":{
      "Type":"AWS::AutoScaling::AutoScalingGroup",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "VPCZoneIdentifier":[
          {
            "Ref":"publicSubnet01"
          },
          {
            "Ref":"publicSubnet02"
          }
        ],
        "LaunchConfigurationName":{
          "Ref":"ContainerInstances"
        },
        "MinSize":"1",
        "MaxSize":{
          "Ref":"MaxSize"
        },
        "DesiredCapacity":{
          "Ref":"DesiredCapacity"
        }
      },
      "CreationPolicy":{
        "ResourceSignal":{
          "Timeout":"PT15M"
        }
      },
      "UpdatePolicy":{
        "AutoScalingRollingUpdate":{
          "MinInstancesInService":"1",
          "MaxBatchSize":"1",
          "PauseTime":"PT15M",
          "WaitOnResourceSignals":"true"
        }
      }
    },

To define a Launch Configuration, use the Launch Configuration resource in CloudFormation: AWS::AutoScaling::LaunchConfiguration.

    "ContainerInstances":{
      "Type":"AWS::AutoScaling::LaunchConfiguration",
      "DependsOn":[
        "MyVPC"
      ],
      "Metadata":{
        "AWS::CloudFormation::Init":{
          "config":{
            "commands":{
              "01_add_instance_to_cluster":{
                "command":{
                  "Fn::Join":[
                    "",
                    [
                      "#!/bin/bash\n",
                      "echo ECS_CLUSTER=",
                      {
                        "Ref":"EcsCluster"
                      },
                      " >> /etc/ecs/ecs.config"
                    ]
                  ]
                }
              }
            },
            "files":{
              "/etc/cfn/cfn-hup.conf":{
                "content":{
                  "Fn::Join":[
                    "",
                    [
                      "[main]\n",
                      "stack=",
                      {
                        "Ref":"AWS::StackId"
                      },
                      "\n",
                      "region=",
                      {
                        "Ref":"AWS::Region"
                      },
                      "\n"
                    ]
                  ]
                },
                "mode":"000400",
                "owner":"root",
                "group":"root"
              },
              "/etc/cfn/hooks.d/cfn-auto-reloader.conf":{
                "content":{
                  "Fn::Join":[
                    "",
                    [
                      "[cfn-auto-reloader-hook]\n",
                      "triggers=post.update\n",
                      "path=Resources.ContainerInstances.Metadata.AWS::CloudFormation::Init\n",
                      "action=/opt/aws/bin/cfn-init -v ",
                      "         --stack ",
                      {
                        "Ref":"AWS::StackName"
                      },
                      "         --resource ContainerInstances ",
                      "         --region ",
                      {
                        "Ref":"AWS::Region"
                      },
                      "\n",
                      "runas=root\n"
                    ]
                  ]
                }
              }
            },


IAM

To define an IAM Instance Profile, use the InstanceProfile resource in CloudFormation: AWS::IAM::InstanceProfile.

    "EC2InstanceProfile":{
      "Type":"AWS::IAM::InstanceProfile",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "Path":"/",
        "Roles":[
          {
            "Ref":"EC2Role"
          }
        ]
      }
    },
    "JenkinsRole":{
      "Type":"AWS::IAM::Role",
      "Properties":{
        "AssumeRolePolicyDocument":{
          "Statement":[
            {
              "Sid":"",
              "Effect":"Allow",
              "Principal":{
                "Service":"ec2.amazonaws.com"
              },
              "Action":"sts:AssumeRole"
            }
          ]
        },
        "Path":"/"
      }
    },

To define an IAM Role, use the IAM Role resource in CloudFormation: AWS::IAM::Role. The snippet below is for the EC2 role.

    "EC2Role":{
      "Type":"AWS::IAM::Role",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "AssumeRolePolicyDocument":{
          "Statement":[
            {
              "Effect":"Allow",
              "Principal":{
                "Service":[
                  "ec2.amazonaws.com"
                ]
              },
              "Action":[
                "sts:AssumeRole"
              ]
            }
          ]
        },
        "Path":"/",
        "Policies":[
          {
            "PolicyName":"ecs-service",
            "PolicyDocument":{
              "Statement":[
                {
                  "Effect":"Allow",
                  "Action":[
                    "ecs:CreateCluster",
                    "ecs:RegisterContainerInstance",
                    "ecs:DeregisterContainerInstance",
                    "ecs:DiscoverPollEndpoint",
                    "ecs:Submit*",
                    "ecr:*",
                    "ecs:Poll"
                  ],
                  "Resource":"*"
                }
              ]
            }
          }
        ]
      }
    },

The snippet below defines the ECS service role, which lets ECS register and deregister container instances with the load balancer and manage security group ingress on your behalf.

    "EcsServiceRole":{
      "Type":"AWS::IAM::Role",
      "Properties":{
        "AssumeRolePolicyDocument":{
          "Statement":[
            {
              "Effect":"Allow",
              "Principal":{
                "Service":[
                  "ecs.amazonaws.com"
                ]
              },
              "Action":[
                "sts:AssumeRole"
              ]
            }
          ]
        },
        "Path":"/",
        "Policies":[
          {
            "PolicyName":"ecs-service",
            "PolicyDocument":{
              "Statement":[
                {
                  "Effect":"Allow",
                  "Action":[
                    "elasticloadbalancing:Describe*",
                    "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
                    "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
                    "ec2:Describe*",
                    "ec2:AuthorizeSecurityGroupIngress"
                  ],
                  "Resource":"*"
                }
              ]
            }
          }
        ]
      }
    },
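
ECS assumes this role when it registers and deregisters your container instances with the load balancer. An illustrative sketch of an ECS service that references it; the TaskDefinition resource and the php-app container name are placeholders:

    "EcsService":{
      "Type":"AWS::ECS::Service",
      "Properties":{
        "Cluster":{
          "Ref":"EcsCluster"
        },
        "DesiredCount":"1",
        "Role":{
          "Ref":"EcsServiceRole"
        },
        "TaskDefinition":{
          "Ref":"TaskDefinition"
        },
        "LoadBalancers":[
          {
            "ContainerName":"php-app",
            "ContainerPort":"80",
            "LoadBalancerName":{
              "Ref":"EcsElb"
            }
          }
        ]
      }
    },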

EC2

To define security group ingress within a VPC, use the SecurityGroupIngress resource in CloudFormation: AWS::EC2::SecurityGroupIngress. The rule below references TargetSG as both the group and the traffic source, allowing instances in that group to communicate with one another on any TCP port.

    "InboundRule":{
      "Type":"AWS::EC2::SecurityGroupIngress",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "IpProtocol":"tcp",
        "FromPort":"0",
        "ToPort":"65535",
        "SourceSecurityGroupId":{
          "Fn::GetAtt":[
            "TargetSG",
            "GroupId"
          ]
        },
        "GroupId":{
          "Fn::GetAtt":[
            "TargetSG",
            "GroupId"
          ]
        }
      }
    },

To define security group egress within a VPC, use the SecurityGroupEgress resource in CloudFormation: AWS::EC2::SecurityGroupEgress. The rule below allows outbound TCP traffic from SourceSG to TargetSG on any port.

    "OutboundRule":{
      "Type":"AWS::EC2::SecurityGroupEgress",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "IpProtocol":"tcp",
        "FromPort":"0",
        "ToPort":"65535",
        "DestinationSecurityGroupId":{
          "Fn::GetAtt":[
            "TargetSG",
            "GroupId"
          ]
        },
        "GroupId":{
          "Fn::GetAtt":[
            "SourceSG",
            "GroupId"
          ]
        }
      }
    },

To define a security group within a VPC, use the SecurityGroup resource in CloudFormation: AWS::EC2::SecurityGroup. The SourceSG below allows inbound HTTP (port 80) from anywhere.

    "SourceSG":{
      "Type":"AWS::EC2::SecurityGroup",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "VpcId":{
          "Ref":"MyVPC"
        },
        "GroupDescription":"Sample source security group",
        "SecurityGroupIngress":[
          {
            "IpProtocol":"tcp",
            "FromPort":"80",
            "ToPort":"80",
            "CidrIp":"0.0.0.0/0"
          }
        ],
        "Tags":[
          {
            "Key":"Name",
            "Value":{
              "Fn::Join":[
                "",
                [
                  {
                    "Ref":"AWS::StackName"
                  },
                  "-SourceSG"
                ]
              ]
            }
          }
        ]
      }
    },
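
The TargetSG referenced by the ingress and egress rules above is defined the same way. A minimal sketch, assuming it lives in the same VPC (the description is illustrative):

    "TargetSG":{
      "Type":"AWS::EC2::SecurityGroup",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "VpcId":{
          "Ref":"MyVPC"
        },
        "GroupDescription":"Sample target security group"
      }
    },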

ELB

To define the ELB, use the LoadBalancer resource in CloudFormation: AWS::ElasticLoadBalancing::LoadBalancer. The definition below spans both public subnets and health-checks instances over HTTP on port 80.

    "EcsElb":{
      "Type":"AWS::ElasticLoadBalancing::LoadBalancer",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "Subnets":[
          {
            "Ref":"publicSubnet01"
          },
          {
            "Ref":"publicSubnet02"
          }
        ],
        "Listeners":[
          {
            "LoadBalancerPort":"80",
            "InstancePort":"80",
            "Protocol":"HTTP"
          }
        ],
        "SecurityGroups":[
          {
            "Ref":"SourceSG"
          },
          {
            "Ref":"TargetSG"
          }
        ],
        "HealthCheck":{
          "Target":"HTTP:80/",
          "HealthyThreshold":"2",
          "UnhealthyThreshold":"10",
          "Interval":"30",
          "Timeout":"5"
        }
      }
    },
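
The container instances that sit behind this ELB are typically launched by an Auto Scaling group spanning the same public subnets. A minimal sketch, assuming the ContainerInstances launch configuration sketched earlier (the size values are placeholders):

    "EcsInstanceAsg":{
      "Type":"AWS::AutoScaling::AutoScalingGroup",
      "Properties":{
        "VPCZoneIdentifier":[
          {
            "Ref":"publicSubnet01"
          },
          {
            "Ref":"publicSubnet02"
          }
        ],
        "LaunchConfigurationName":{
          "Ref":"ContainerInstances"
        },
        "MinSize":"1",
        "MaxSize":"2",
        "DesiredCapacity":"1"
      }
    },

Note that the Auto Scaling group doesn’t register instances with EcsElb directly; the ECS service handles that registration through EcsServiceRole.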

Summary

In this first part of the series, you learned how to use CloudFormation to fully automate the provisioning of an EC2 Container Service (ECS) environment for running Docker containers, including the supporting ELB, Auto Scaling, IAM, and VPC resources. You also learned how to set up a CodeCommit repository.

In the next and final part of this series, you’ll learn how to orchestrate all of these changes into a deployment pipeline to achieve Continuous Delivery using CodePipeline and Jenkins, so that any change made to the CodeCommit repo can be deployed to production in an automated fashion. I’ll provide access to all of the code resources in part 2 of this series. Let us know if you have any comments or questions @stelligent or @paulduvall.

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “everything-as-code” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

Resources

Here’s a list of some of the CloudFormation resources described in this post:

* AWS::CloudFormation::Init
* AWS::IAM::InstanceProfile
* AWS::IAM::Role
* AWS::EC2::SecurityGroupIngress
* AWS::EC2::SecurityGroupEgress
* AWS::EC2::SecurityGroup
* AWS::ElasticLoadBalancing::LoadBalancer

Acknowledgements

My colleague Jeff Bachtel shared his thoughts on why some teams might choose Docker and ECS over serverless. I also used several resources from AWS, including the PHP sample app, the Introduction to AWS CodeCommit video, the CodePipeline Starter Kit, and the ECS CloudFormation snippets.