DevOps in AWS Radio: Orchestrating Docker containers with AWS ECS, ECR and CodePipeline (Episode 4)

In this episode, Paul Duvall and Brian Jakovich from Stelligent cover recent DevOps in AWS news and speak about the AWS EC2 Container Service (ECS), AWS EC2 Container Registry (ECR), HashiCorp Consul, AWS CodePipeline, and other tools in providing Docker-based solutions for customers. Here are the show notes:

DevOps in AWS News

Episode Topics

  1. Benefits of using ECS, ECR, Docker, etc.
  2. Components of ECS, ECR and Service Discovery
  3. Orchestrating and automating the deployment pipeline using CloudFormation, CodePipeline, Jenkins, etc. 

Blog Posts

  1. Automating ECS: Provisioning in CloudFormation (Part 1)
  2. Automating ECS: Orchestrating in CodePipeline and CloudFormation (Part 2)

About DevOps in AWS Radio

On DevOps in AWS Radio, we cover topics around applying DevOps principles and practices such as Continuous Delivery in the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners into our roundtables and speak with engineers who’ve recently published on our blog and we’ll also be reaching out to the wider DevOps in AWS community to get their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, Containers, Serverless, etc.)
    • Storage (e.g. S3, EBS, etc.)
    • Database and Data (RDS, DynamoDB, etc.)
  • Organizational and Team Structures and Practices
  • Team and Organization Communication and Collaboration
  • Cultural Indicators
  • Version control systems and processes
  • Deployment Pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service Architectures – e.g. Microservices
  • Automation of Build and deployment processes
  • Automation of testing and other verification approaches, tools and systems
  • Automation of security practices and approaches
  • Continuous Feedback systems
  • Many other Topics…

Automating Habitat with AWS CodePipeline

This article outlines a proof-of-concept (POC) for automating Habitat operations from AWS CodePipeline. Habitat is Chef’s new application automation platform that provides a packaging system that results in apps that are “immutable and atomically deployed, with self-organizing peer relationships.”  Habitat is an innovative technology for packaging applications, but a Continuous Delivery pipeline is still required to automate deployments.  For this exercise I’ve opted to build a lightweight pipeline using CodePipeline and Lambda.

An in-depth analysis of how to use Habitat is beyond the scope of this post, but you can get a good introduction by following their tutorial. This POC essentially builds a CD pipeline to automate the steps described in the tutorial, and builds the same demo app (mytutorialapp). It covers the “pre-artifact” stages of the pipeline (Source, Commit, Acceptance), but keep an eye out for a future post which will flesh out the rest.
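
If you haven’t gone through the tutorial yet, it helps to know that a Habitat package is defined by a plan.sh file with a handful of well-known variables and callbacks. A minimal sketch for an nginx-backed “hello world” app might look like the following – the origin, version, and paths are placeholders rather than the tutorial’s exact values, and the run command is only an approximation:

# plan.sh – hedged sketch of a minimal Habitat plan (placeholder values)
pkg_origin=myorigin                 # your Depot origin (placeholder)
pkg_name=mytutorialapp
pkg_version=0.1.0
pkg_maintainer="Your Name <you@example.com>"
pkg_deps=(core/nginx)               # the app is served by nginx
pkg_svc_run="nginx -c ${pkg_svc_config_path}/nginx.conf"   # approximate run command

do_build() {
  return 0                          # nothing to compile for a static page
}

do_install() {
  # copy the app's source files into the package (path is a placeholder)
  cp -r "${PLAN_CONTEXT}/../src/." "${pkg_prefix}/www/"
}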

Also be sure to read the article “Continuous deployment with Habitat” which provides a good overview of how the developers of Habitat intend it to be used in a pipeline, including links to some repos to help implement that vision using Chef Automate.

Technology Overview

Application

The application we’re automating is called mytutorialapp. It is a simple “hello world” web app that runs on nginx. The application code can be found in the hab-demo repository.

Pipeline

The pipeline is provisioned by a CloudFormation stack and implemented with CodePipeline. The pipeline uses a Lambda function as an Action executor. This Lambda function delegates command execution to an EC2 instance via an SSM Run Command (aws:runShellScript). The pipeline code can be found in the hab-demo-pipeline repository. Here is a simplified diagram of the execution mechanics:

hab_pipeline_diagram
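
The Lambda function calls SSM through the AWS SDK, but its effect is roughly equivalent to the following CLI invocation – the instance ID and the script being run are placeholders, not values from the actual pipeline:

aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --instance-ids "i-0123456789abcdef0" \
  --comment "CodePipeline action: Build-HabitatPackage" \
  --parameters commands="bash /opt/hab-demo/build-habitat-package.sh"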

Stack

The CloudFormation stack that provisions the pipeline also creates several supporting resources.  Check out the pipeline.json template for details, but here is a screenshot to show what’s included:

hab_demo_cfn_results

Pipeline Stages

Here’s an overview of the pipeline structure. For the purpose of this article I’ve only implemented the Source, Commit, and Acceptance stages. This portion of the pipeline will get the source code from a git repo, build a Habitat package, build a Docker test environment, deploy the Habitat package to the test environment, run tests on it and then publish it to the Habitat Depot. All downstream pipeline stages can then source the package from the Depot.

  • Source
    • Clone the app repo
  • Commit
    • Stage-SourceCode
    • Initialize-Habitat
    • Test-StaticAnalysis
    • Build-HabitatPackage
  • Acceptance
    • Create-TestEnvironment
    • Test-HabitatPackage
    • Publish-HabitatPackage

Action Details

Here are the details for the various pipeline actions. These action implementations are defined in a “pipeline-runner” Lambda function and invoked by CodePipeline. Upon invocation, the scripts are executed on an EC2 box that gets provisioned at the same time as the code pipeline.

Commit Stage

Stage-SourceCode

Pulls down the source code artifact from S3 and unzips it.
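
A rough equivalent of what this action runs on the instance – the bucket, object key, and working directory are placeholders supplied by CodePipeline’s job data at runtime:

# Fetch and unpack the CodePipeline source artifact (placeholder locations)
aws s3 cp "s3://codepipeline-artifact-bucket/hab-demo/SourceArtifact.zip" /tmp/source.zip
rm -rf /tmp/hab-demo && mkdir -p /tmp/hab-demo
unzip -o /tmp/source.zip -d /tmp/hab-demo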

Initialize-Habitat

Sets Habitat environment variables and generates/uploads a key to access my Origin on the Habitat Depot.

Test-StaticAnalysis

Runs static analysis on plan.sh using bash -n.

Build-HabitatPackage

Builds the Habitat package.
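
Taken together, the three actions above boil down to a shell sequence like the one below. The origin name and token are placeholders, and exact hab subcommand flags can vary between Habitat releases:

# Initialize-Habitat: set environment and origin key (placeholder values)
export HAB_ORIGIN=myorigin
export HAB_AUTH_TOKEN="replace-with-your-depot-token"
hab origin key generate "$HAB_ORIGIN"
hab origin key upload "$HAB_ORIGIN"

# Test-StaticAnalysis: fail fast on plan.sh syntax errors
bash -n plan.sh

# Build-HabitatPackage: build the package in the Habitat Studio
hab pkg build .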

Acceptance Stage

Create-TestEnvironment

Creates a Docker test environment by running a Habitat package export command inside the Habitat Studio.

Test-HabitatPackage

Runs a Bats test suite which verifies that the webserver is running and the “hello world” page is displayed.

Publish-HabitatPackage

Uploads the Habitat package to the Depot. In a later pipeline stage, a package deployment can be sourced directly from the Depot.
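
A condensed sketch of the Acceptance-stage actions follows; package names, ports, and test file paths are placeholders, and the export/upload flags may differ across Habitat versions:

# Create-TestEnvironment: export the package as a Docker image
hab pkg export docker "myorigin/mytutorialapp"

# Test-HabitatPackage: run the app in a container and execute the Bats suite
docker run -d -p 8080:80 --name mytutorialapp-test "myorigin/mytutorialapp"
bats test/mytutorialapp.bats

# Publish-HabitatPackage: upload the built .hart artifact to the Depot
hab pkg upload results/*.hart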

Wrapping up

This post provided an early look at a mechanism for automating Habitat deployments from AWS CodePipeline. There is still a lot of work to be done on this POC project so keep an eye out for later posts that describe the mechanics of the rest of the pipeline.

Do you love Chef and Habitat? Do you love AWS? Do you love automating software development workflows to create CI/CD pipelines? If you answered “Yes!” to any of these questions then you should come work at Stelligent. Check out our Careers page to learn more.

 

Automate CodePipeline Manual Approvals in CloudFormation

Recently, AWS announced that it added manual approval actions to AWS CodePipeline. In doing so, you can now model your entire software delivery process – whether it’s entirely manual or a hybrid of automated and manual approval actions.

In this post, I describe how you can add manual approvals to an existing pipeline – manually or via CloudFormation – to minimize your CodePipeline costs.

Pricing

The AWS CodePipeline pricing model is structured to incentivize two things:

  • Frequent Code Commits
  • Long-lived Pipelines

This is because AWS charges $1 per active pipeline per month, so if you treat pipelines as ephemeral resources, you’d likely pay for more pipelines than you actually use. In experimentation mode you might regularly launch and terminate pipelines as you determine the appropriate stages and actions for an application or service, but once a pipeline is established, it tends to change much less frequently.

Since CodePipeline uses compute resources, AWS had to decide whether to incentivize frequent code commits or to treat pipelines as ephemeral – as they do with other resources like EC2. If they’d chosen to charge by frequency of activity, you could end up paying more for committing more code – which would be a very bad thing, since you want developers to commit code many times a day.

Immutability

While we tend to prefer an immutable approach to most things when it comes to infrastructure, the fact is that different parts of your system change at varying frequencies. This is the case with your pipelines. Once your pipelines have been established, you might typically add, edit, or remove some stages and actions – but probably not every day.

Our “workaround” is to use CloudFormation’s update capability to modify our pipeline’s stages and actions without incurring the additional $1 that we’d get charged if we were to launch a new active pipeline.

The best way to apply these changes is to make the minimum required changes in the template so that any errors, if they do occur, are easy to isolate.

Manual Approvals

There are many reasons your software delivery workflow might require manual approvals including exploratory testing, visual inspection, change advisory boards, code reviews, etc.

Some other reasons for manual approvals include canary and blue/green deployments – where you might make final deployment decisions once some user or deployment testing is complete.

With manual approvals in CodePipeline, you can now make the approval process a part of a fully automated software delivery process.

Create and Connect to a CodeCommit Repository

Follow these instructions for creating and connecting to an AWS CodeCommit repository: Create and Connect to an AWS CodeCommit Repository. Take note of the repository name as you’ll be using it as a CloudFormation user parameter later. The default that I use in the lab is called codecommit-demo, but you can modify this CloudFormation parameter.
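
If you prefer the CLI to the console walkthrough, the repository used in this lab can also be created with a single command – the name below simply matches the template’s default parameter value:

aws codecommit create-repository \
  --repository-name codecommit-demo \
  --repository-description "Demo repo for the CodePipeline manual approvals lab"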

Launch a Pipeline

Click the button below to launch a CloudFormation stack that provisions AWS CodePipeline with some default Lambda Invoke actions.

Once the CloudFormation stack has launched successfully, click on the link next to the PipelineUrl Output from your CloudFormation stack. This opens your pipeline in the CodePipeline console. You should see a pipeline similar to the one in the figure below.

pipeline_before_update

Update a Pipeline

To update your pipeline, click on the Edit button at the top of the pipeline in CodePipeline. Then, click the (+) Stage link between the Staging and Production stages. Enter the name ExploratoryTesting for the stage name, then click the (+) Action link. The add action window displays. Choose the new Approval action category from the drop-down and enter the other required and optional fields, as appropriate. Finally, click the Add action button.

codepipeline_manual_approvals_pipeline_edit

Once you’ve done this, click on the Release change button. After the run passes through the earlier pipeline stages and actions, it transitions to the Exploratory Testing stage, where your pipeline should look similar to the figure below.

pipeline_before_after

At this time, if your SNS Topic registered with the pipeline is linked to an email address, you’ll receive an email message that looks similar to the one below.

codepipeline_manual_approvals

As you can see, you can click on the link to be brought to the same pipeline where you can approve or reject the “stage”.
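
You aren’t limited to the console, either. The same approval can be recorded from the AWS CLI – the stage, action, summary, and token values below are placeholders, and the token itself comes from get-pipeline-state:

aws codepipeline put-approval-result \
  --pipeline-name {YOURPIPELINENAME} \
  --stage-name ExploratoryTesting \
  --action-name QA \
  --result summary="Exploratory tests passed",status=Approved \
  --token abcdef01-2345-6789-abcd-ef0123456789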

Applying Changes in CloudFormation

You can apply the same updates to CodePipeline that you previously performed manually – this time in code – using CloudFormation update-stack. We recommend keeping each incremental set of changes you apply through CloudFormation small and specific to the CodePipeline changes, because limiting your change sets often limits the amount of time you spend troubleshooting any problems.

Once you’ve manually added the new manual approval stage and action, you can use your AWS CLI to get the JSON configuration that you can use in your CloudFormation update template. To do this, run the following command substituting {YOURPIPELINENAME} with the name of your pipeline.

aws codepipeline get-pipeline --name {YOURPIPELINENAME} >pipeline.json

Notice that this command redirects the output to a file that you can use to copy and format the stage and action configuration for your CloudFormation template. For example, the difference between the initial pipeline and the updated pipeline is the JSON configuration shown below.

          {
            "Name":"ExploratoryTesting",
            "Actions":[
              {
                "InputArtifacts":[

                ],
                "Name":"QA",
                "ActionTypeId":{
                  "Category":"Approval",
                  "Owner":"AWS",
                  "Version":"1",
                  "Provider":"Manual"
                },
                "OutputArtifacts":[

                ],
                "Configuration":{
                  "NotificationArn":{
                    "Fn::Join":[
                      "",
                      [
                        "arn:aws:sns:",
                        {
                          "Ref":"AWS::Region"
                        },
                        ":",
                        {
                          "Ref":"AWS::AccountId"
                        },
                        ":",
                        {
                          "Ref":"SNSTopic"
                        }
                      ]
                    ]
                  },
                  "CustomData":"Approval or Reject this change after running Exploratory Tests"
                },
                "RunOrder":1
              }
            ]
          },

You can take this code and add it to a new CloudFormation template so that it’s between the Staging and Production stages. Once you’ve done this, go back to your command line and run the update-stack command from your AWS CLI. An example is shown below. You’ll replace the {CFNSTACKNAME} with your stack name. If you want to make additional changes to the new stack, you can download the CloudFormation template and upload it to an S3 location you control.

aws cloudformation update-stack --stack-name {CFNSTACKNAME} --template-url https://s3.amazonaws.com/stelligent-training-public/public/codepipeline/codepipeline-updates-after.json --region us-east-1 --capabilities="CAPABILITY_IAM" --parameters ParameterKey=RepositoryBranch,UsePreviousValue=true ParameterKey=RepositoryName,UsePreviousValue=true ParameterKey=S3BucketLambdaFunction,UsePreviousValue=true ParameterKey=SNSTopic,UsePreviousValue=true

By running this command against the initial stack, you’ll see the same updates that you’d manually defined previously. The difference is that it’s defined in code which means you can version, test and deploy changes.

An alternative approach is to apply the changes manually using Update Stack from the CloudFormation console. You’ll provide the new CloudFormation template as an input and CloudFormation will determine which changes it will apply to your infrastructure. You can see a screenshot of the changes that CloudFormation will apply below.

codepipeline_preview_changes.jpg

Summary

By incorporating manual approvals into your software delivery process, you can fully automate its workflow. You learned how you can apply changes to your pipeline using CloudFormation as a way of minimizing your costs while providing a repeatable, reliable update process through code.

Sample Code

The code for the examples demonstrated in this post is located at https://github.com/stelligent/cloudformation_templates/tree/master/labs/codepipeline. Let us know if you have any comments or questions @stelligent or @paulduvall.

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “everything-as-code” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

References

Acknowledgements

My colleagues at Stelligent including Eric Kascic and Casey Lee provided some use cases for manual approvals.

 

DevOps in AWS Radio: Serverless Delivery with Casey Lee (Episode 2)

In this episode, Paul Duvall and Brian Jakovich from Stelligent cover recent DevOps in AWS news and speak with Casey Lee about his three-part series on Serverless Delivery.

About DevOps in AWS Radio

On DevOps in AWS Radio, we’ll be covering topics around applying DevOps principles and practices such as Continuous Delivery in the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners into our roundtables and speak with engineers who’ve recently published on our blog and we’ll also be reaching out to the wider DevOps in AWS community to get their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, Containers, Serverless, etc.)
    • Storage (e.g. S3, EBS, etc.)
    • Database and Data (RDS, DynamoDB, etc.)
  • Organizational and Team Structures and Practices
  • Team and Organization Communication and Collaboration
  • Cultural Indicators
  • Version control systems and processes
  • Deployment Pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service Architectures – e.g. Microservices
  • Automation of Build and deployment processes
  • Automation of testing and other verification approaches, tools and systems
  • Automation of security practices and approaches
  • Continuous Feedback systems
  • Many other Topics…

Automating and Orchestrating OpsWorks in CloudFormation and CodePipeline

In this post, you’ll learn how to provision and configure a PHP application with the AWS OpsWorks application management service and orchestrate it in a deployment pipeline, using AWS CodePipeline, that’s capable of deploying new infrastructure and code changes when developers commit changes to the AWS CodeCommit version-control repository. This way, team members can release new changes to users whenever they choose to do so: aka, Continuous Delivery.

Recently, AWS announced the integration of OpsWorks into AWS CodePipeline so I’ll be describing various components and services that support this solution including CodePipeline along with codifying the entire infrastructure in AWS CloudFormation. As part of the announcement, AWS provided a step-by-step tutorial of integrating OpsWorks with CodePipeline that I used as a reference in automating the entire infrastructure and workflow.

This post describes how to automate all the steps using CloudFormation so that you can click on a Launch Stack button to instantiate all of your infrastructure resources.

OpsWorks

“AWS OpsWorks is a configuration management service that helps you configure and operate applications of all shapes and sizes using Chef. You can define the application’s architecture and the specification of each component including package installation, software configuration and resources such as storage. Start from templates for common technologies like application servers and databases or build your own to perform any task that can be scripted. AWS OpsWorks includes automation to scale your application based on time or load and dynamic configuration to orchestrate changes as your environment scales.” [1]

OpsWorks provides a structured way to automate the operations of your AWS infrastructure and deployments with lifecycle events and the Chef configuration management tool. OpsWorks provides more flexibility than Elastic Beanstalk and more structure and constraints than CloudFormation. There are several key constructs that compose OpsWorks. They are:

  • Stack – An OpsWorks stack is the logical container defining OpsWorks layers, instances, apps and deployments.
  • Layer – There are built-in layers provided by OpsWorks such as Static Web Servers, Rails, Node.js, etc. But, you can also define your own custom layers as well.
  • Instances – These are EC2 instances on which the OpsWorks agent has been installed. There are only certain Linux and Windows operating systems supported by OpsWorks instances.
  • App – “Each application is represented by an app, which specifies the application type and contains the information that is needed to deploy the application from the repository to your instances.” [2]
  • Deployment – Runs Chef recipes to deploy the application onto instances based on the defined layer in the stack.

There are also lifecycle events that get executed for each deployment. Lifecycle events are linked to one or more Chef recipes. The five lifecycle events are setup, configure, deploy, undeploy, shutdown. Events get triggered based upon certain conditions. Some events can be triggered multiple times. They are described in more detail below:

  • setup – Runs when an instance finishes booting, as part of its initial setup
  • configure – Runs on all instances in all layers whenever a new instance comes into service, an EIP changes, or an ELB is attached
  • deploy – Runs when a deployment is executed on an instance
  • undeploy – Runs when an app is deleted
  • shutdown – Runs before an instance is terminated
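
For reference, these events can also be triggered on demand. For example, a deploy can be kicked off from the AWS CLI like this – the stack and app IDs are placeholders:

aws opsworks create-deployment \
  --stack-id 12345678-1234-1234-1234-123456789012 \
  --app-id 23456789-2345-2345-2345-234567890123 \
  --command '{"Name":"deploy"}'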

Solution Architecture and Components

In Figure 2, you see the deployment pipeline and infrastructure architecture for the OpsWorks/CodePipeline integration.

opsworks_pipeline_arch.jpg
Figure 2 – Deployment Pipeline Architecture for OpsWorks

Both OpsWorks and CodePipeline are defined in a single CloudFormation stack, which is described in more detail later in this post. Here are the key services and tools that make up the solution:

  • OpsWorks – In this stack, code configures operations of your infrastructure using lifecycle events and Chef
  • CodePipeline – Orchestrate all actions in your software delivery process. In this solution, I provision a CodePipeline pipeline with two stages and one action per stage in CloudFormation
  • CloudFormation – Automates the provisioning of all AWS resources. In this solution, I’m using CloudFormation to automate the provisioning for OpsWorks, CodePipeline,  IAM, and S3
  • CodeCommit – A Git repo used to host the sample application code from this solution
  • PHP – In this solution, I leverage AWS’ OpsWorks sample application written in PHP.
  • IAM – The CloudFormation stack defines an IAM Instance Profile and Roles for controlled access to AWS resources
  • EC2 – A single compute instance is launched as part of the configuration of the OpsWorks stack
  • S3 – Hosts the deployment artifacts used by CodePipeline.

Create and Connect to a CodeCommit Repository

While you can store your software code in any version-control repository, in this solution, I’ll be using the AWS CodeCommit Git repository. I’ll be integrating CodeCommit with CodePipeline. I’m basing the code off of the Amazon OpsWorks PHP Simple Demo App located at https://github.com/awslabs/opsworks-demo-php-simple-app.

To create your own CodeCommit repo, follow these instructions: Create and Connect to an AWS CodeCommit Repository. I called my CodeCommit repository opsworks-php-demo. You can call it the same but if you do name it something different, be sure to replace the samples with your repo name.

After you create your CodeCommit repo, copy the contents from the AWS PHP OpsWorks Demo app and commit all of the files.
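
One way to seed the repository looks roughly like this – the clone URL assumes us-east-1 and the repository name used above, and the paths are placeholders:

# Clone the (empty) CodeCommit repo and the AWS sample app
git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/opsworks-php-demo
git clone https://github.com/awslabs/opsworks-demo-php-simple-app

# Copy the sample app into the CodeCommit working copy and push it
cp -r opsworks-demo-php-simple-app/* opsworks-php-demo/
cd opsworks-php-demo
git add .
git commit -m "Initial commit of the OpsWorks PHP demo app"
git push origin master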

Implementation

I created this sample solution by stitching together several available resources including the CloudFormation template provided by the Step-by-Step Tutorial from AWS on integrating OpsWorks with CodePipeline and existing templates we use at Stelligent for CodePipeline. Finally, I manually created the pipeline in CodePipeline using the same step-by-step tutorial and then obtained the configuration of the pipeline using the get-pipeline command as shown in the command snippet below.

aws codepipeline get-pipeline --name OpsWorksPipeline > pipeline.json

This section describes the various resources of the CloudFormation solution in greater detail including IAM Instance Profiles and Roles, the OpsWorks resources, and CodePipeline.

Security Group

Here, you see the CloudFormation definition for the security group that the OpsWorks instance uses. The definition restricts the ingress port to 80 so that only web traffic is accepted on the instance.

    "CPOpsDeploySecGroup":{
      "Type":"AWS::EC2::SecurityGroup",
      "Properties":{
        "GroupDescription":"Lets you manage OpsWorks instances deployed to by CodePipeline"
      }
    },
    "CPOpsDeploySecGroupIngressHTTP":{
      "Type":"AWS::EC2::SecurityGroupIngress",
      "Properties":{
        "IpProtocol":"tcp",
        "FromPort":"80",
        "ToPort":"80",
        "CidrIp":"0.0.0.0/0",
        "GroupId":{
          "Fn::GetAtt":[
            "CPOpsDeploySecGroup",
            "GroupId"
          ]
        }
      }
    },

IAM Role

Here, you see the CloudFormation definition for the OpsWorks instance role. In the same CloudFormation template, there’s a definition for an IAM service role and an instance profile. The instance profile refers to OpsWorksInstanceRole defined in the snippet below.

The roles, policies, and profiles restrict the service and resources to the essential permissions they need to perform their functions.

    "OpsWorksInstanceRole":{
      "Type":"AWS::IAM::Role",
      "Properties":{
        "AssumeRolePolicyDocument":{
          "Statement":[
            {
              "Effect":"Allow",
              "Principal":{
                "Service":[
                  {
                    "Fn::FindInMap":[
                      "Region2Principal",
                      {
                        "Ref":"AWS::Region"
                      },
                      "EC2Principal"
                    ]
                  }
                ]
              },
              "Action":[
                "sts:AssumeRole"
              ]
            }
          ]
        },
        "Path":"/",
        "Policies":[
          {
            "PolicyName":"s3-get",
            "PolicyDocument":{
              "Version":"2012-10-17",
              "Statement":[
                {
                  "Effect":"Allow",
                  "Action":[
                    "s3:GetObject"
                  ],
                  "Resource":"*"
                }
              ]
            }
          }
        ]
      }
    },

Stack

The snippet below shows the CloudFormation definition for the OpsWorks Stack. It references the IAM service role and instance profile, uses Chef 11.10 for its configuration, and uses Amazon Linux 2016.03 for its operating system. This stack is used as the basis for defining the layer, app, instance, and deployment that are described later in this section.

    "MyStack":{
      "Type":"AWS::OpsWorks::Stack",
      "Properties":{
        "Name":{
          "Ref":"AWS::StackName"
        },
        "ServiceRoleArn":{
          "Fn::GetAtt":[
            "OpsWorksServiceRole",
            "Arn"
          ]
        },
        "ConfigurationManager":{
          "Name":"Chef",
          "Version":"11.10"
        },
        "DefaultOs":"Amazon Linux 2016.03",
        "DefaultInstanceProfileArn":{
          "Fn::GetAtt":[
            "OpsWorksInstanceProfile",
            "Arn"
          ]
        }
      }
    },

Layer

The OpsWorks PHP layer is described in the CloudFormation definition below. It references the OpsWorks stack that was previously created in the same template. It also uses the php-app layer type. For a list of valid types, see CreateLayer in the AWS API documentation. This resource also enables auto healing, assigns public IPs and references the previously-created security group.

    "MyLayer":{
      "Type":"AWS::OpsWorks::Layer",
      "Properties":{
        "StackId":{
          "Ref":"MyStack"
        },
        "Name":"MyLayer",
        "Type":"php-app",
        "Shortname":"mylayer",
        "EnableAutoHealing":"true",
        "AutoAssignElasticIps":"false",
        "AutoAssignPublicIps":"true",
        "CustomSecurityGroupIds":[
          {
            "Fn::GetAtt":[
              "CPOpsDeploySecGroup",
              "GroupId"
            ]
          }
        ]
      },
      "DependsOn":[
        "MyStack",
        "CPOpsDeploySecGroup"
      ]
    },

OpsWorks Instance

In the snippet below, you see the CloudFormation definition for the OpsWorks instance. It references the OpsWorks layer and stack that are created in the same template. It defines the instance type as c3.large and refers to the EC2 Key Pair that you will provide as an input parameter when launching the stack.

    "MyInstance":{
      "Type":"AWS::OpsWorks::Instance",
      "Properties":{
        "LayerIds":[
          {
            "Ref":"MyLayer"
          }
        ],
        "StackId":{
          "Ref":"MyStack"
        },
        "InstanceType":"c3.large",
        "SshKeyName":{
          "Ref":"KeyName"
        }
      }
    },

OpsWorks App

In the snippet below, you see the CloudFormation definition for the OpsWorks app. It refers to the previously created OpsWorks stack and uses the current stack name for the app name – making it unique. In the OpsWorks type, I’m using php. For other supported types, see CreateApp.

I’m using other for the AppSource type because my source is CodeCommit, which isn’t currently an option in OpsWorks (the documentation doesn’t make the supported AppSource types obvious, so I resorted to using the OpsWorks console to determine the possibilities).

    "MyOpsWorksApp":{
      "Type":"AWS::OpsWorks::App",
      "Properties":{
        "StackId":{
          "Ref":"MyStack"
        },
        "Type":"php",
        "Shortname":"phptestapp",
        "Name":{
          "Ref":"AWS::StackName"
        },
        "AppSource":{
          "Type":"other"
        }
      }
    },

CodePipeline

In the snippet below, you see the CodePipeline definition for the Deploy stage and the DeployPHPApp action in CloudFormation. It takes MyApp as an Input Artifact – which is an Output Artifact of the Source stage and action that obtains code assets from CodeCommit.

The action uses a Deploy category and OpsWorks as the Provider. It takes four inputs for the configuration: StackId, AppId, DeploymentType, LayerId. With the exception of DeploymentType, these values are obtained as references from previously created AWS resources in this CloudFormation template.

For more information, see CodePipeline Concepts.

         {
            "Name":"Deploy",
            "Actions":[
              {
                "InputArtifacts":[
                  {
                    "Name":"MyApp"
                  }
                ],
                "Name":"DeployPHPApp",
                "ActionTypeId":{
                  "Category":"Deploy",
                  "Owner":"AWS",
                  "Version":"1",
                  "Provider":"OpsWorks"
                },
                "OutputArtifacts":[

                ],
                "Configuration":{
                  "StackId":{
                    "Ref":"MyStack"
                  },
                  "AppId":{
                    "Ref":"MyOpsWorksApp"
                  },
                  "DeploymentType":"deploy_app",
                  "LayerId":{
                    "Ref":"MyLayer"
                  }
                },
                "RunOrder":1
              }
            ]
          }

Launch the Stack

Click the button below to launch a CloudFormation stack that provisions the OpsWorks environment including all the resources previously described such as CodePipeline, OpsWorks, IAM Roles, etc.

When launching the stack, you’ll enter a value for the KeyName parameter from the drop-down. Optionally, you can enter values for your CodeCommit repository name and branch if they are different than the default values.

opsworks_pipeline_cfn
Figure 3- Parameters for Launching the CloudFormation Stack

You will be charged for your AWS usage – particularly EC2, CodePipeline, and S3.

To launch the same stack from your AWS CLI, type the following (while modifying the same parameter values described above):

aws cloudformation create-stack --stack-name OpsWorksPipelineStack --template-url https://s3.amazonaws.com/stelligent-training-public/public/codepipeline/codepipeline-opsworks.json --region us-east-1 --disable-rollback --capabilities="CAPABILITY_IAM" --parameters  ParameterKey=KeyName,ParameterValue=YOURKEYNAME

Outputs

Once the CloudFormation stack successfully launches, there’s an output for the CodePipelineURL. You can click on this value to open the pipeline, which is already running – getting the source assets from CodeCommit and launching an OpsWorks stack and associated resources. See the screenshot below.

cfn_opsworks_pipeline_outputs
Figure 4 – CloudFormation Outputs for CodePipeline/OpsWorks stack

Once the pipeline is complete, you can access the OpsWorks stack and click on the Public IP link for one of the instances to launch the PHP application that was deployed using OpsWorks as shown in Figures 5 and 6 below.

opsworks_public_ip.jpg
Figure 5 – Public IP for the OpsWorks instance

 

opsworks_app_before.jpg
Figure 6 – OpsWorks PHP app once initially deployed

Commit Changes to CodeCommit

Make some visual changes to the code (e.g. your local CodeCommit version of index.php) and commit these changes to your CodeCommit repository to see these software changes get deployed through your pipeline. You perform these actions from the directory where you cloned a local version of your CodeCommit repo (in the directory created by your git clone command). Some example command-line operations are shown below.

git commit -am "change color to rust orange"
git push

Once these changes have been committed, CodePipeline will discover the changes made to your CodeCommit repo and initiate a new pipeline. After the pipeline is successfully completed, follow the same instructions for launching the application from your browser – as shown in Figure 7.

opsworks_app_after.jpg
Figure 7 – Application after code changes committed to CodeCommit, orchestrated by CodePipeline and deployed by OpsWorks

Sample Code

The code for the examples demonstrated in this post is located at https://github.com/stelligent/cloudformation_templates/tree/master/labs/opsworks. Let us know if you have any comments or questions @stelligent or @paulduvall.

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “everything-as-code” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

Useful Resources and References

OpsWorks Reference

Below, I’ve documented some additional information that might be useful on the OpsWorks service itself including its available integrations, supported versions and features.

  • OpsWorks supports three application source types: GitHub, S3, and HTTP.
  • You can store up to five versions of an OpsWorks application: the current revision plus four more for rollbacks.
  • When using the create-deployment method, you can target the OpsWorks stack, app, or instance
  • OpsWorks instances require internet access to reach the OpsWorks endpoint
  • Chef supports Windows in version 12
  • You cannot mix Windows and Linux instances in an OpsWorks stack
  • To change the default OS in OpsWorks, you need to change the OS and reprovision the instances
  • You cannot change the VPC for an OpsWorks instance
  • You can add ELB, EIPs, Volumes and RDS to an OpsWorks stack
  • OpsWorks autoheals at the layer level
  • You can assign multiple Chef recipes to an OpsWorks layer event
  • The three instance types in OpsWorks are: 24/7, time-based, load-based
  • To initiate a rollback in OpsWorks, you use the create-deployment command
  • The following commands are available when using OpsWorks create-deployment along with possible use cases:
    • install_dependencies
    • update_dependencies – Patches to the Operating System. Not available after Chef 12.
    • update_custom_cookbooks – pulling down changes in your Chef cookbooks
    • execute_recipes – manually run specific Chef recipes that are defined in your layers
    • configure – service discovery or whenever endpoints change
    • setup
    • deploy
    • rollback
    • start
    • stop
    • restart
    • undeploy
  • To enable the use of multiple custom cookbook repositories in OpsWorks, you can enable custom cookbook at the stack and then create a cookbook that has a Berkshelf file with multiple sources. Before Chef 11.10, you couldn’t use multiple cookbook repositories.
  • You can define Chef databags in OpsWorks Users, Stacks, Layers, Apps and Instances
  • OpsWorks Auto Healing is triggered when the OpsWorks agent detects a loss of communication; OpsWorks then stops and restarts the instance. If that fails, manual intervention is required
  • OpsWorks will not auto heal an upgrade to the OS
  • OpsWorks does not auto heal by monitoring performance, only failures.

Acknowledgements

My colleague Casey Lee provided some of the background information on OpsWorks features. I also used several resources from AWS including the PHP sample app and the step-by-step tutorial on the OpsWorks/CodePipeline integration.

Automating ECS: Orchestrating in CodePipeline and CloudFormation (Part 2)

In my first post on automating the EC2 Container Service (ECS), I described how I automated the provisioning of ECS in AWS CloudFormation using its JSON-based DSL.

In this second and last part of the series, I will demonstrate how to create a deployment pipeline in AWS CodePipeline to deploy changes to ECS Docker images in the EC2 Container Registry (ECR).

In doing this, you’ll not only see how to automate the creation of the infrastructure but also automate the deployment of the application and its infrastructure via Docker containers. This way you can commit infrastructure, application and deployment changes as code to your version-control repository and have these changes automatically deployed to production or production-like environments.

The benefit is the customer responsiveness this embodies: you can deploy new features or fixes to users in minutes, not days or weeks.

Pipeline Architecture

In the figure below, you see the high-level architecture for the deployment pipeline.

 

Deployment Pipeline Architecture for ECS

With the exception of the CodeCommit repository creation, most of the architecture is implemented in a CloudFormation template. Some of this is the result of not requiring a traditional configuration management tool to perform configuration on compute instances.

CodePipeline is a Continuous Delivery service that enables you to orchestrate every step of your software delivery process in a workflow that consists of a series of stages and actions. These actions perform the steps of your software delivery process.

In CodePipeline, I’ve defined two stages: Source and Build. The Source stage retrieves code artifacts via a CodeCommit repository whenever someone commits a new change. This initiates the pipeline. CodePipeline is integrated with the Jenkins Continuous Integration server. The Build stage updates the ECS Docker image (which runs a small PHP web application) within ECR and makes the new application available through an ELB endpoint.

Jenkins is installed and configured on an Amazon EC2 instance within an Amazon Virtual Private Cloud (VPC). The CloudFormation template runs commands to install and configure the Jenkins server, install and configure Docker, install and configure the CodePipeline plugin and configure the job that’s run as part of the CodePipeline build action. The Jenkins job is configured to run a bash script that’s committed to the CodeCommit repository. This bash script updates the ECS service and task definition by running a Docker build, tag and push to the ECR repository. I describe the implementation of this architecture in more detail in this post.

Jenkins

In this example, CodePipeline manages the orchestration of the software delivery workflow. Since CodePipeline doesn’t actually execute the actions, you need to integrate it with an execution platform. To perform the execution of the actions, I’m using the Jenkins Continuous Integration server. I’ll configure a CodePipeline plugin for Jenkins so that Jenkins executes certain CodePipeline actions.

In particular, I have an action to update an ECS service. I do this by running a CloudFormation update on the stack. CloudFormation looks for any differences in the templates and applies those changes to the existing stack.

To orchestrate and execute this CloudFormation update, I configure a CodePipeline custom action that calls a Jenkins job. In this Jenkins job, I call a shell script passing several arguments.

Provision Jenkins in CloudFormation

In the CloudFormation template, I create an EC2 instance on which I will install and configure the Jenkins server. This CloudFormation script is based on the CodePipeline starter kit.

To launch a Jenkins server in CloudFormation, you will use the AWS::EC2::Instance resource. Before doing this, you’ll create an IAM role and an EC2 security group in the already-provisioned VPC (the VPC provisioning is part of the CloudFormation script).

Within the Metadata attribute of the resource (i.e. the EC2 instance on which Jenkins will run), you use AWS::CloudFormation::Init to define the instance configuration. To apply your changes, you call cfn-init from the instance’s user data to run commands on the EC2 instance like this:

"/opt/aws/bin/cfn-init -v -s ",

Then, you can install and configure Docker:

"# Install Docker\n",
"cd /tmp/\n",
"yum install -y docker\n",

On this same instance, you will install and configure the Jenkins server:

"# Install Jenkins\n",
...
"yum install -y jenkins-1.658-1.1\n",
"service jenkins start\n",

And, apply the dynamic Jenkins configuration for the job so that it updates the CloudFormation stack based on arguments passed to the shell script.

"/bin/sed -i \"s/MY_STACK/",
{
"Ref":"AWS::StackName"
},
"/g\" /tmp/config-template.xml\n",

In the config-template.xml, I added tokens that get replaced as part of the commands run from the CloudFormation template. You can see a snippet of this below in which the command for the Jenkins job makes a call to the configure-ecs.sh bash script with some tokenized parameters.

<command>bash ./configure-ecs.sh MY_STACK MY_ACCTID MY_ECR</command>

All of the commands for installing and configuring the Jenkins Server, Docker, the CodePipeline plugin and Jenkins jobs are described in the CloudFormation template that is hosted in the version-control repository.

Jenkins Job Configuration Template

In the previous code snippets from CloudFormation, you see that I’m using sed to update a file called  config-template.xml. This is a Jenkins job configuration file for which I’m updating some token variables with dynamic information that gets passed to it from CloudFormation. This information is used to run a bash script to update the CloudFormation stack – which is described in the next section.

ECS Service Script to Update CloudFormation Stack

The code snippet below shows how the bash script captures the arguments that are passed by the Jenkins job into bash variables. Later in the script, it uses these bash variables to call the update-stack command of the CloudFormation API to apply a new ECS Docker image to the endpoint.

MY_STACK=$1
MY_ACCTID=$2
MY_ECR=$3

uuid=$(date +%s)
awsacctid="$MY_ACCTID"
ecr_repo="$MY_ECR"
ecs_stack_name="$MY_STACK"
ecs_template_url="$MY_URL"   # in the full script this is hard-coded to an S3 template URL (see Making Modifications)

In the code snippet below from the configure-ecs.sh script, I’m building, tagging, and pushing the Docker image to my EC2 Container Registry (ECR) repository using the dynamic values passed to this script from Jenkins (which were initially passed from the parameters and resources of my CloudFormation script).

In doing this, it creates a new Docker image for each commit and tags it with a unique id based on date and time. Finally, it uses the AWS CLI to call the update-stack command of the CloudFormation API using the variable information.

eval $(aws --region us-east-1 ecr get-login)

# Build, Tag and Deploy Docker
docker build -t $ecr_repo:$uuid .
docker tag $ecr_repo:$uuid $awsacctid.dkr.ecr.us-east-1.amazonaws.com/$ecr_repo:$uuid
docker push $awsacctid.dkr.ecr.us-east-1.amazonaws.com/$ecr_repo:$uuid

aws cloudformation update-stack --stack-name $ecs_stack_name \
--template-url $ecs_template_url --region us-east-1 \
--capabilities="CAPABILITY_IAM" --parameters \
ParameterKey=AppName,UsePreviousValue=true \
ParameterKey=ECSRepoName,UsePreviousValue=true \
ParameterKey=DesiredCapacity,UsePreviousValue=true \
ParameterKey=KeyName,UsePreviousValue=true \
ParameterKey=RepositoryBranch,UsePreviousValue=true \
ParameterKey=RepositoryName,UsePreviousValue=true \
ParameterKey=InstanceType,UsePreviousValue=true \
ParameterKey=MaxSize,UsePreviousValue=true \
ParameterKey=S3ArtifactBucket,UsePreviousValue=true \
ParameterKey=S3ArtifactObject,UsePreviousValue=true \
ParameterKey=SSHLocation,UsePreviousValue=true \
ParameterKey=YourIP,UsePreviousValue=true \
ParameterKey=ImageTag,ParameterValue=$uuid

Now that you’ve seen the basics of installing and configuring Jenkins in CloudFormation and what happens when the Jenkins job is run through the CodePipeline orchestration, let’s look at the steps for configuring the CodePipeline part of the CodePipeline/Jenkins integration.

Create a Pipeline using AWS CodePipeline

Before I create a working pipeline, I prefer to model the stages and actions in CodePipeline using Lambda so that I can think through the workflow. To do this I refer to my blog post on Mocking AWS CodePipeline pipelines with Lambda. I’m going to create a two-stage pipeline consisting of a Source and a Build stage. These stages and the actions in these stages are described in more detail below.

Define a Custom Action

There are five types of action categories in CodePipeline: Source, Build, Deploy, Invoke and Test. Each action has four attributes: category, owner, provider and version. There are three types of action owners: AWS, ThirdParty and Custom. AWS refers to built-in actions provided by AWS. Currently, there are four built-in action providers from AWS: S3, CodeCommit, CodeDeploy and ElasticBeanstalk. Examples of ThirdParty action providers include RunScope and GitHub. If none of the action providers suit your needs, you can define custom actions in CodePipeline. In my case, I wanted to run a script from a Jenkins job so I used the CloudFormation sample configuration from the CodePipeline starter kit for the configuration of the custom build action that I use to integrate Jenkins with CodePipeline. See the snippet below.

    "CustomJenkinsActionType":{
      "Type":"AWS::CodePipeline::CustomActionType",
      "DependsOn":"JenkinsHostWaitCondition",
      "Properties":{
        "Category":"Build",
        "Provider":{
          "Fn::Join":[
            "",
            [
              {
                "Ref":"AppName"
              },
              "-Jenkins"
            ]
          ]
        },
        "Version":"1",
        "ConfigurationProperties":[
          {
            "Key":"true",
            "Name":"ProjectName",
            "Queryable":"true",
            "Required":"true",
            "Secret":"false",
            "Type":"String"
          }
        ],
        "InputArtifactDetails":{
          "MaximumCount":5,
          "MinimumCount":0
        },
        "OutputArtifactDetails":{
          "MaximumCount":5,
          "MinimumCount":0
        },
        "Settings":{
          "EntityUrlTemplate":{
            "Fn::Join":[
              "",
              [
                "http://",
                {
                  "Fn::GetAtt":[
                    "JenkinsServer",
                    "PublicIp"
                  ]
                },
                "/job/{Config:ProjectName}"
              ]
            ]
          },
          "ExecutionUrlTemplate":{
            "Fn::Join":[
              "",
              [
                "http://",
                {
                  "Fn::GetAtt":[
                    "JenkinsServer",
                    "PublicIp"
                  ]
                },
                "/job/{Config:ProjectName}/{ExternalExecutionId}"
              ]
            ]
          }
        }
      }
    },

The example pipeline that I’ve defined in CodePipeline (and described as code in CloudFormation) uses the above custom action in the Build stage of the pipeline, which is described in more detail in the Build Stage section later.

Source Stage

The Source stage has a single action to look for any changes to a CodeCommit repository. If it discovers any new commits, it retrieves the artifacts from the CodeCommit repository and stores them in an encrypted form in an S3 bucket. If it’s successful, it transitions to the next stage: Build. A snippet from the CodePipeline resource definition for the Source stage in CloudFormation is shown below.

        "Stages":[
          {
            "Name":"Source",
            "Actions":[
              {
                "InputArtifacts":[

                ],
                "Name":"Source",
                "ActionTypeId":{
                  "Category":"Source",
                  "Owner":"AWS",
                  "Version":"1",
                  "Provider":"CodeCommit"
                },
                "OutputArtifacts":[
                  {
                    "Name":{
                      "Fn::Join":[
                        "",
                        [
                          {
                            "Ref":"AWS::StackName"
                          },
                          "-SourceArtifact"
                        ]
                      ]
                    }
                  }
                ],
                "Configuration":{
                  "BranchName":{
                    "Ref":"RepositoryBranch"
                  },
                  "RepositoryName":{
                    "Ref":"RepositoryName"
                  }
                },
                "RunOrder":1
              }
            ]
          },

Build Stage

The Build stage invokes actions to create a new ECS repository if one doesn’t exist, builds and tags a Docker image and makes a call to a CloudFormation template to launch the rest of the ECS environment – including creating an ECS cluster, task definition, ECS services, ELB, Security Groups and IAM resources. It does this using the custom CodePipeline action for Jenkins that I described earlier. A snippet from the CodePipeline resource definition in CloudFormation for the Build stage is shown below.

          {
            "Name":"Build",
            "Actions":[
              {
                "Name":"DeployPHPApp",
                "InputArtifacts":[
                  {
                    "Name":{
                      "Fn::Join":[
                        "",
                        [
                          {
                            "Ref":"AWS::StackName"
                          },
                          "-SourceArtifact"
                        ]
                      ]
                    }
                  }
                ],
                "ActionTypeId":{
                  "Category":"Build",
                  "Owner":"Custom",
                  "Version":"1",
                  "Provider":{
                    "Fn::Join":[
                      "",
                      [
                        {
                          "Ref":"AWS::StackName"
                        },
                        "-Jenkins"
                      ]
                    ]
                  }
                },
                "OutputArtifacts":[
                  {
                    "Name":{
                      "Fn::Join":[
                        "",
                        [
                          {
                            "Ref":"AWS::StackName"
                          },
                          "-BuiltArtifact"
                        ]
                      ]
                    }
                  }
                ],
                "Configuration":{
                  "ProjectName":{
                    "Ref":"AWS::StackName"
                  }
                },
                "RunOrder":1
              }
            ]
          }

The custom action for Jenkins (via the CodePipeline plugin) is looking for work from CodePipeline. When it finds work, it performs the task associated with the CodePipeline action. In this case, it runs the Jenkins job that calls the configure-ecs.sh script. This bash script makes a update-stack call to the original CloudFormation template passing in the new image via the ImageTag parameter which is the new tag generated for the Docker image created as part of this script.

CloudFormation applies the minimum necessary changes to the infrastructure based on the stack update. In this case, I’m only providing a new image tag, but this results in creating a new ECS task definition for the service. In your CloudFormation events console, you’ll see a message similar to the one below:

AWS::ECS::TaskDefinition Requested update requires the creation of a new physical resource; hence creating one.

As I mentioned in part 1 of this series, I defined a DeploymentConfiguration type with a MinimumHealthyPercent property of 0 since I’m only using one EC2 instance while running through the earlier stages of the pipeline. This means the application experiences a few seconds of downtime during the update. Like most applications/services these days, if I needed continual uptime, I’d increase the number of instances in my Auto Scaling Group and increase the MinimumHealthyPercent property.

Other Stages

In the example I provided, I stop at the Build stage. If you were to take this to production, you might include other stages as well. For example, you might have a “Staging” stage in which you include actions to deploy the application to the ECS containers using a production-like configuration, which might include more instances in the Auto Scaling Group.

Once Staging is complete, the pipeline would automatically transition to the Production stage where it might make Lambda calls to test the application running in ECS containers. If everything looks ok, it switches the Route 53 hosted zone endpoint to the new container.
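
As a rough illustration of that final cutover, the Route 53 switch could be a single record update – the hosted zone ID, record name, TTL, and ELB DNS name below are all placeholders:

aws route53 change-resource-record-sets \
  --hosted-zone-id Z1EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "CNAME",
        "TTL": 60,
        "ResourceRecords": [{"Value": "new-ecs-elb-123456789.us-east-1.elb.amazonaws.com"}]
      }
    }]
  }'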

Launch the ECS Stack and Pipeline

In this section, you’ll launch the CloudFormation stack that creates the ECS and Pipeline resources.

Prerequisites

You need to have already created an ECR repository and a CodeCommit repository to successfully launch this stack. For instructions on creating an ECR repository, see part 1 of this series (or launch the CloudFormation stack from part 1 that creates the ECR repository directly). For creating a CodeCommit repository, you can either see part 1 or use the instructions described at: Create and Connect to an AWS CodeCommit Repository.
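
If you want to create the ECR repository from the CLI instead, a single command does it – the repository name is a placeholder and should match the ECSRepoName parameter you pass to the pipeline stack:

aws ecr create-repository --repository-name YOURECRREPO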

Launch the Stack

Click the button below to launch a CloudFormation stack that provisions the ECS environment including all the resources previously described such as CodePipeline, ECS Cluster, ECS Task Definition, ECS Service, ELB, VPC resources, IAM Roles, etc.

You’ll enter values for the following parameters: RepositoryName, YourIP, KeyName, and ECSRepoName.

To launch the same stack from your AWS CLI, type the following (while modifying the same parameter values described above):

aws cloudformation create-stack --stack-name ecs-stack-1648 --template-url https://s3.amazonaws.com/stelligent-training-public/public/codepipeline/ecs-pipeline.json --region us-east-1 --disable-rollback --capabilities="CAPABILITY_IAM" --parameters ParameterKey=RepositoryName,ParameterValue=YOURCCREPO ParameterKey=RepositoryBranch,ParameterValue=master ParameterKey=KeyName,ParameterValue=YOUREC2KEYPAIR ParameterKey=YourIP,ParameterValue=YOURIP/32 ParameterKey=ECSRepoName,ParameterValue=YOURECRREPO ParameterKey=ECSCFNURL,ParameterValue=NOURL ParameterKey=AppName,ParameterValue=app-name-1648

Outputs

Once the CloudFormation stack successfully launches, there are several outputs but the two most relevant are AppURL and CodePipelineURL. You can click on the AppURL value to launch the PHP application running on ECS from the ELB endpoint. The CodePipelineURL output value launches the generated pipeline from the CodePipeline console. See the screenshot below.

codepipeline_beanstalk_cfn_outputs  
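
You can also pull the same outputs from the CLI; a sketch using the stack name from the create-stack command above:

aws cloudformation describe-stacks --stack-name ecs-stack-1648 --region us-east-1 \
  --query 'Stacks[0].Outputs[?OutputKey==`AppURL` || OutputKey==`CodePipelineURL`].[OutputKey,OutputValue]' \
  --output table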

Access the Application

Once the stack successfully completes, go to the Outputs tab for the CloudFormation stack and click on the AppURL value to launch the application.

codepipeline_ecs_php_app_before

Commit Changes to CodeCommit

Make some visual changes to the code and commit them to your CodeCommit repository to see the changes deployed through your pipeline. Perform these actions from the directory where you cloned a local version of your CodeCommit repo (the directory created by your git clone command). Some example command-line operations are shown below.

git commit -am "change color to pink"
git push

Once these changes have been committed, CodePipeline will discover the changes made to your CodeCommit repo and initiate a new pipeline execution. After the pipeline successfully completes, follow the same instructions for launching the application from your browser.
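
If you'd rather watch the pipeline from the CLI, something like the following works; the pipeline name is a placeholder (it's visible at the end of the CodePipelineURL output):

aws codepipeline get-pipeline-state --name YOUR_PIPELINE_NAME --region us-east-1 \
  --query 'stageStates[].[stageName,latestExecution.status]' --output table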

codepipeline_ecs_php_app_after

Making Modifications

While the solution works “straight out of the box”, if you’d like to make changes, here are the sections of the code you’ll need to modify.

configure-ecs.sh

The configure-ecs.sh Bash script runs the Docker commands to build, tag, and push the image, and then updates the existing CloudFormation stack so the ECS service and task definition pick up the new image. The source for this bash script is here: https://github.com/stelligent/cloudformation_templates/blob/master/labs/ecs/configure-ecs.sh. I hard coded the ecs_template_url variable to a specific S3 location. You can download the file from one of these two locations (GitHub or S3), make your desired modifications, and then point the ecs_template_url variable at the new location (presumably in S3).
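
For example, if you host a modified copy of the template in your own bucket, the change might look like this (the bucket name is a placeholder):

# Upload your modified template (bucket/key are placeholders)
aws s3 cp ecs-pipeline.json s3://YOUR_BUCKET/codepipeline/ecs-pipeline.json

# Then, in your copy of configure-ecs.sh, point the variable at the new location:
# ecs_template_url="https://s3.amazonaws.com/YOUR_BUCKET/codepipeline/ecs-pipeline.json"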

config-template.xml

The config-template.xml file is the Jenkins job configuration for the update-ECS action. This XML file contains tokens that get replaced by the ecs-pipeline.json CloudFormation template with dynamic information such as the CloudFormation stack name, account id, and so on. The file is obtained via a wget command from within the template and is stored in S3 at https://s3.amazonaws.com/stelligent-training-public/public/jenkins/config-template.xml. You can copy it to an S3 location in your own account and update the CloudFormation template to point to the new location; in doing so, you can modify any of the behavior of the job when it's used by Jenkins.
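
A minimal sketch of that workflow, with the bucket name as a placeholder:

# Download the original Jenkins job configuration
wget https://s3.amazonaws.com/stelligent-training-public/public/jenkins/config-template.xml

# ...edit config-template.xml as desired...

# Host your copy, then update ecs-pipeline.json to wget from this location instead
aws s3 cp config-template.xml s3://YOUR_BUCKET/jenkins/config-template.xml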

Summary

In this series, you learned how to use CloudFormation to fully automate the provisioning of the EC2 Container Service along with a CodePipeline pipeline that uses CodeCommit as its version-control repository, so that whenever a change is made to the Git repo, the changes are automatically applied to a PHP application hosted on ECS.

By modeling your pipeline in CodePipeline, you can add even more stages and actions to your Continuous Delivery process so that it runs through all the tests and other checks, enabling you to deliver changes to production whenever there's a business need to do so.

Sample Code

The code for the examples demonstrated in this post is located at https://github.com/stelligent/cloudformation_templates/tree/master/labs/ecs. Let us know if you have any comments or questions @stelligent or @paulduvall.

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “everything-as-code” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

Notes

The sample solution currently only works in the us-east-1 AWS region. You will be charged for your AWS usage – including EC2, S3, CodePipeline and other services.

Resources

Here’s a list of some of the resources described in or that influenced this post:

 

Automating ECS: Provisioning in CloudFormation (Part 1)

In this two-part series, you’ll learn how to provision, configure, and orchestrate the EC2 Container Service (ECS) applications into a deployment pipeline that’s capable of deploying new infrastructure and code changes when developers commit changes to a version-control repository so that team members can release new changes to users whenever they choose to do so: Continuous Delivery.

While the primary AWS service described in this solution is ECS, I’ll also be covering the various components and services that support this solution including AWS CloudFormation, EC2 Container Registry (ECR), Docker, Identity and Access Management (IAM), VPC and Auto Scaling Services – to name a few. In part 2, I’ll be covering the integration of CodePipeline, Jenkins and CodeCommit in greater detail.

ECS allows you to run Docker containers on AWS. The benefits of ECS and Docker include the following:

  • Portability – You can build on one Linux operating system and have it work on others without modification. It’s also portable across environment types so you can build it in development and use the same image in production.
  • Scalability – You can run multiple containers on the same EC2 instance and scale to thousands of tasks across a cluster.
  • Speed – Increase your speed of development and speed of runtime execution.

“ECS is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. Amazon ECS eliminates the need for you to install, operate, and scale your own cluster management infrastructure.” [1]

The reason you might use Docker-based containers over traditional virtual machine-based application deployments is that it allows a faster, more flexible, and still very robust immutable deployment pattern in comparison with services such as traditional Elastic Beanstalk, OpsWorks, or native EC2 instances.

While you can very effectively integrate Docker into Elastic Beanstalk, ECS provides greater overall flexibility.

The reason you might use ECS or Elastic Beanstalk containers with EC2 Container Registry over similar offerings such as Docker Hub or Docker Trusted Registry is higher performance, better availability, and lower pricing. In addition, ECR utilizes other AWS services such as IAM and S3, allowing you to compose more secure or robust patterns to meet your needs.

Based on the current implementation of Lambda, the reasons you might choose to utilize ECS instead of serverless architectures include:

  • Lower latency in request response time
  • Flexibility in the underlying language stack to use
  • Elimination of AWS Lambda service limits (requests per second, code size, total code runtime)
  • Greater control of the application runtime environment
  • The ability to link modules in ways not possible with Lambda functions

I’ll be using a sample PHP application provided by AWS to demonstrate a Continuous Delivery pipeline using ECS, CloudFormation and, in part 2, AWS CodePipeline.

Create and Connect to a CodeCommit Repository

While you can store your application code in any version-control repository, in this example, I’ll be using the AWS CodeCommit Git repository. I’ll be integrating CodeCommit with CodePipeline. I’m basing the code from the Amazon ECS PHP Simple Demo App located at https://github.com/awslabs/ecs-demo-php-simple-app.

To create your own CodeCommit repo, follow these instructions: Create and Connect to an AWS CodeCommit Repository. Take note of the repository name as you’ll be using it as a CloudFormation user parameter in part 2. I called my CodeCommit repository ecs-demo. You can use the same name, but if you name it something different, be sure to replace the name in the samples with your repo name.

After you create your CodeCommit repo, copy the contents of the AWS PHP ECS Demo app into it and commit all of the files, as shown below.
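
One way to do that from the command line is sketched below; it assumes your CodeCommit repo is named ecs-demo, as mine is.

# Get the AWS sample app and copy it into your CodeCommit clone
git clone https://github.com/awslabs/ecs-demo-php-simple-app.git
git clone ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/ecs-demo
cp -R ecs-demo-php-simple-app/* ecs-demo/

cd ecs-demo
git add -A
git commit -m "Add PHP sample app and Dockerfile"
git push origin master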

CodeCommit provides the following features and benefits[2]:

  • Highly available, Secure and Private Git repositories
  • Use your existing Git tools
  • Automatically encrypts all files in transit and at rest
  • Provides Webhooks – to trigger Lambda functions or push notifications in response to events
  • Integrated with other AWS services like IAM so you can define user-specific permissions

Create a Private Image Repository in ECS using ECR

You can create private Docker repositories using EC2 Container Registry (ECR) to store your Docker images. Follow these instructions to manually create an ECR repository: Create a Repository.

A snippet of the CloudFormation template for provisioning an ECR repo is listed below.

    "MyRepository":{
      "Type":"AWS::ECR::Repository",
      "Properties":{
        "RepositoryName":{
          "Ref":"AWS::StackName"
        },
        "RepositoryPolicyText":{
          "Version":"2008-10-17",
          "Statement":[
            {
              "Sid":"AllowPushPull",
              "Effect":"Allow",
              "Principal":{
                "AWS":[
                  {
                    "Fn::Join":[
                      "",
                      [
                        "arn:aws:iam::",
                        {
                          "Ref":"AWS::AccountId"
                        },
                        ":user/",
                        {
                          "Ref":"IAMUsername"
                        }
                      ]
                    ]
                  }
                ]
              },
              "Action":[
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage",
                "ecr:BatchCheckLayerAvailability",
                "ecr:PutImage",
                "ecr:InitiateLayerUpload",
                "ecr:UploadLayerPart",
                "ecr:CompleteLayerUpload"
              ]
            }
          ]
        }
      }
    }

With an ECR repository defined, you can securely store your Docker images and refer to them when building, tagging, and pushing images.

To create the ECR repository, launch a CloudFormation stack from the template above (a CLI sketch is shown below). Your IAM username is a parameter to this CloudFormation template; you only need to enter the IAM username (and not the entire ARN) as the input value. Make note of the ECSRepository Output from the stack as you'll be using this as an input to the ECS Environment Stack in part 2.
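
If you prefer the CLI, a sketch is below; the stack name and local template path are placeholders (I'll share the template source along with the rest of the code in part 2). Keep in mind that the stack name becomes the repository name, because RepositoryName references AWS::StackName.

# Placeholders: supply your own copy of the ECR template and a stack name
aws cloudformation create-stack --stack-name my-ecr-repo \
  --region us-east-1 \
  --template-body file://ecr.json \
  --parameters ParameterKey=IAMUsername,ParameterValue=YOUR_IAM_USERNAME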

Docker

“Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.” [3] In this demonstration, you’ll build, tag and push a PHP application as a Docker image into an ECR repository.

Build Docker Image and Upload to ECR Locally

Prerequisites

  • You’re running these commands from an Amazon Linux EC2 instance. If you’re not, you’ll need to adapt the instructions according to your OS flavor.
  • You’ve created an ECR repo (see the “Create a Private Image Repository in ECS using ECR” section above)
  • You’ve created a CodeCommit repository and committed the PHP code from the AWS PHP app in GitHub (see the “Create and Connect to a CodeCommit Repository” section above)

Steps

  1. Install Docker on an Amazon Linux EC2 instance for which your AWS CLI has been configured (you can find detailed instructions at Install Docker)
    sudo yum update -y
    sudo yum install -y docker
    sudo service docker start
    sudo usermod -a -G docker ec2-user
  2. Log out, log back in, and type:
    docker info
  3. Install Git:
    sudo yum -y install git*
  4. Clone the ECS PHP example application (if you used a different repo name, be sure to update the sample command here):
    git clone ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/ecs-demo
  5. Change your directory:
    cd ecs-demo
  6. Configure your AWS account by running the command below and following the prompts to enter your credentials, region and output format.
    aws configure
  7. Run the command below to login to ECR.
    eval $(aws --region us-east-1 ecr get-login)
  8. Build the image using Docker. Replace REPOSITORY_NAME with the ECSRepository Output from the ECR stack you launched and TAG with a unique value. Make note of the image tag you use when creating the Docker image as you'll be using it as an input parameter to a CloudFormation stack later. If you want to use the default value, just name it latest.
    docker build -t REPOSITORY_NAME:TAG .
  9. Tag the image (replace REPOSITORY_NAME, TAG and AWS_ACCOUNT_ID):
    docker tag REPOSITORY_NAME:TAG AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/REPOSITORY_NAME:TAG
  10. Push the tagged image to ECR (replace REPOSITORY_NAME, AWS_ACCOUNT_ID and TAG):
    docker push AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/REPOSITORY_NAME:TAG
  11. Verify the image was uploaded to your ECS Repository by going to your AWS ECS Console, clicking on Repositories and selecting the repository you created when you launched the ECS Stack.
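
As an alternative to the console check in the last step, you can list the pushed image tags from the CLI:

aws ecr list-images --repository-name REPOSITORY_NAME --region us-east-1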

Dockerfile

“A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build users can create an automated build that executes several command-line instructions in succession.” [4] The snippet below is the Dockerfile used to run the PHP sample application. You can see that it runs OS updates, installs the required packages including Apache and PHP, and then configures the HTTP server and port. While these are the types of steps you might run in any automated build and deployment script, the difference is that they run within a container, which means they run very quickly, the same steps work across operating systems, and the same procedures can run across multiple tasks in a cluster.

FROM ubuntu:12.04

# Install dependencies
RUN apt-get update -y
RUN apt-get install -y git curl apache2 php5 libapache2-mod-php5 php5-mcrypt php5-mysql

# Install app
RUN rm -rf /var/www/*
ADD src /var/www

# Configure apache
RUN a2enmod rewrite
RUN chown -R www-data:www-data /var/www
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2

EXPOSE 80

CMD ["/usr/sbin/apache2", "-D",  "FOREGROUND"]

This Dockerfile gets run when you run the docker build command. This file has been committed to my CodeCommit repo as you can see in the figure below.

ecs_codecommit
AWS CodeCommit repository for a PHP application illustrating Dockerfile location

Create an ECS Environment in CloudFormation

In this section, I’m describing how to configure the entire ECS stack in CloudFormation. This includes the architecture, its dependencies, and the key CloudFormation resources that make up the stack.

Architecture

The overall solution architecture is illustrated in the CloudFormation diagram below.

codepipeline_ecs_arch.jpg
Provisioning, Configuring and Orchestrating an EC2 Container Service Architecture
  • Auto Scaling Group – I’m using an auto scaling group to scale the underlying EC2 infrastructure in the ECS Cluster. It’s used in conjunction with the Launch Configuration.
  • Auto Scaling Launch Configuration – I’m using a launch configuration to scale the underlying EC2 infrastructure in the ECS Cluster. It’s used in conjunction with the  Auto Scaling Group.
  • CodeCommit – I’m using CodeCommit as my Git repo to store the application and infrastructure code.
  • CodePipeline – CodePipeline describes my Continuous Delivery workflow. In particular, it integrates with CodeCommit and Jenkins to run actions every time someone commits new code to the CodeCommit repo. This will be covered in more detail in part 2.
  • ECS Cluster – “An ECS cluster is a logical grouping of container instances that you can place tasks on.”[6]
  • ECS Service – With an ECS service, you can run a specific number of instances of a task definition simultaneously in an ECS cluster [5]
  • ECS Task Definition – A task definition is the core resource within ECS. This is where you define which Docker images to run, CPU/Memory, ports, commands and so on. Everything else in ECS is based upon the task definition
  • Elastic Load Balancer – The ELB provides the endpoint for the application. The ELB dynamically determines which EC2 instance in the cluster is serving the running ECS tasks at any given time.
  • IAM Instance Profile – “An instance profile is a container for an IAM role that you can use to pass role information to an EC2 instance when the instance starts.” [7] In the sample, I’m using the instance profile to pass the IAM role, via the launch configuration, to the underlying EC2 instances that the ECS cluster runs on.
  • IAM Roles – I’m describing roles that have access to certain AWS resources for the EC2 instances (for ECS), Jenkins and CodePipeline
  • Jenkins – I’m using Jenkins to execute the actions that I’ve defined in CodePipeline. For example, I have a bash script that updates the CloudFormation stack when an ECS service is updated. This action is orchestrated via CodePipeline and then executed on the Jenkins server in one of its configured jobs. This will be covered in more detail in part 2.
  • Virtual Private Cloud (VPC) – In the CloudFormation template, I’m using a VPC template that we developed to define VPC resources such as: VPCGatewayAttachment, SecurityGroup, SecurityGroupIngress, SecurityGroupEgress, SubnetNetworkAclAssociation, NetworkAclEntry, NetworkAcl, SubnetRouteTableAssociation, Route, RouteTable, InternetGateway, and Subnet

Dependencies

There are four core dependencies in this solution: an EC2 Key Pair, a CodeCommit repo, a VPC, and an ECR repo with a Docker image.

  • EC2 Key Pair – A key pair for which you have access. See Create a Key Pair.
  • CodeCommit – In this demo, I’m using an AWS CodeCommit Git repo to store the PHP application code along with my Docker configuration. See the instructions for configuring a Git repo in CodeCommit above
  • VPC – This template requires an existing AWS Virtual Private Cloud has been created
  • ECR repo and image – You should have created an EC2 Container Registry (ECR) repository using the CloudFormation template from the previous section. You should also have built, tagged and pushed a Docker image to ECR using the instructions described in Create a Private Image Repository in ECS using ECR above

ECS Cluster

With an ECS Cluster, you can manage multiple services. An ECS Container Instance runs an ECS agent that is registered to the ECS Cluster. To define an ECS Cluster in CloudFormation, use the Cluster resource: AWS::ECS::Cluster as shown below.

    "EcsCluster":{
      "Type":"AWS::ECS::Cluster",
      "DependsOn":[
        "MyVPC"
      ]
    },

ECS Service

An ECS Service defines a task definition and a desired number of task instances. A service manages tasks of a specified task definition.

In the context of ECS, an ELB distributes load between the different EC2 instances hosting your tasks, so you can optionally create a new ELB when creating a service.

To define an ECS Service in CloudFormation, use the Service resource: AWS::ECS::Service.

    "EcsService":{
      "Type":"AWS::ECS::Service",
      "DependsOn":[
        "MyVPC",
        "ECSAutoScalingGroup"
      ],
      "Properties":{
        "Cluster":{
          "Ref":"EcsCluster"
        },
        "DesiredCount":"1",
        "DeploymentConfiguration":{
          "MaximumPercent":100,
          "MinimumHealthyPercent":0
        },
        "LoadBalancers":[
          {
            "ContainerName":"php-simple-app",
            "ContainerPort":"80",
            "LoadBalancerName":{
              "Ref":"EcsElb"
            }
          }
        ],
        "Role":{
          "Ref":"EcsServiceRole"
        },
        "TaskDefinition":{
          "Ref":"PhpTaskDefinition"
        }
      }
    },

Notice that I defined a DeploymentConfiguration with a MinimumHealthyPercent of 0. Since I'm only using one EC2 instance in development, the ECS service would otherwise fail during a CloudFormation update; by setting the MinimumHealthyPercent to zero, the application instead experiences a few seconds of downtime during the update. As with most applications/services these days, if I needed continual uptime, I'd increase the number of instances in my Auto Scaling Group and increase the MinimumHealthyPercent property.

Task Definition

With an ECS Task Definition, you can define multiple Container Definitions and volumes. With a Container Definition, you define port mappings, environment variables, CPU Units and Memory. An ECS Volume is a persistent volume to mount and map to container volumes.

To define an ECS Task Definition, use the ECS Task Definition resource: AWS::ECS::TaskDefinition.

    "PhpTaskDefinition":{
      "Type":"AWS::ECS::TaskDefinition",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "ContainerDefinitions":[
          {
            "Name":"php-simple-app",
            "Cpu":"10",
            "Essential":"true",
            "Image":{
              "Fn::Join":[
                "",
                [
                  {
                    "Ref":"AWS::AccountId"
                  },
                  ".dkr.ecr.us-east-1.amazonaws.com/",
                  {
                    "Ref":"ECSRepoName"
                  },
                  ":",
                  {
                    "Ref":"ImageTag"
                  }
                ]
              ]
            },
            "Memory":"300",
            "PortMappings":[
              {
                "HostPort":80,
                "ContainerPort":80
              }
            ]
          }
        ],
        "Volumes":[
          {
            "Name":"my-vol"
          }
        ]
      }
    },

Auto Scaling

To define an Auto Scaling Group, use the Auto Scaling Group resource in CloudFormation: AWS::AutoScaling::AutoScalingGroup.

    "ECSAutoScalingGroup":{
      "Type":"AWS::AutoScaling::AutoScalingGroup",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "VPCZoneIdentifier":[
          {
            "Ref":"publicSubnet01"
          },
          {
            "Ref":"publicSubnet02"
          }
        ],
        "LaunchConfigurationName":{
          "Ref":"ContainerInstances"
        },
        "MinSize":"1",
        "MaxSize":{
          "Ref":"MaxSize"
        },
        "DesiredCapacity":{
          "Ref":"DesiredCapacity"
        }
      },
      "CreationPolicy":{
        "ResourceSignal":{
          "Timeout":"PT15M"
        }
      },
      "UpdatePolicy":{
        "AutoScalingRollingUpdate":{
          "MinInstancesInService":"1",
          "MaxBatchSize":"1",
          "PauseTime":"PT15M",
          "WaitOnResourceSignals":"true"
        }
      }
    },

To define a Launch Configuration, use the Launch Configuration resource in CloudFormation: AWS::AutoScaling::LaunchConfiguration.

    "ContainerInstances":{
      "Type":"AWS::AutoScaling::LaunchConfiguration",
      "DependsOn":[
        "MyVPC"
      ],
      "Metadata":{
        "AWS::CloudFormation::Init":{
          "config":{
            "commands":{
              "01_add_instance_to_cluster":{
                "command":{
                  "Fn::Join":[
                    "",
                    [
                      "#!/bin/bash\n",
                      "echo ECS_CLUSTER=",
                      {
                        "Ref":"EcsCluster"
                      },
                      " >> /etc/ecs/ecs.config"
                    ]
                  ]
                }
              }
            },
            "files":{
              "/etc/cfn/cfn-hup.conf":{
                "content":{
                  "Fn::Join":[
                    "",
                    [
                      "[main]\n",
                      "stack=",
                      {
                        "Ref":"AWS::StackId"
                      },
                      "\n",
                      "region=",
                      {
                        "Ref":"AWS::Region"
                      },
                      "\n"
                    ]
                  ]
                },
                "mode":"000400",
                "owner":"root",
                "group":"root"
              },
              "/etc/cfn/hooks.d/cfn-auto-reloader.conf":{
                "content":{
                  "Fn::Join":[
                    "",
                    [
                      "[cfn-auto-reloader-hook]\n",
                      "triggers=post.update\n",
                      "path=Resources.ContainerInstances.Metadata.AWS::CloudFormation::Init\n",
                      "action=/opt/aws/bin/cfn-init -v ",
                      "         --stack ",
                      {
                        "Ref":"AWS::StackName"
                      },
                      "         --resource ContainerInstances ",
                      "         --region ",
                      {
                        "Ref":"AWS::Region"
                      },
                      "\n",
                      "runas=root\n"
                    ]
                  ]
                }
              }
            },


IAM

To define an IAM Instance Profile, use the InstanceProfile resource in CloudFormation: AWS::IAM::InstanceProfile.

    "EC2InstanceProfile":{
      "Type":"AWS::IAM::InstanceProfile",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "Path":"/",
        "Roles":[
          {
            "Ref":"EC2Role"
          }
        ]
      }
    },
    "JenkinsRole":{
      "Type":"AWS::IAM::Role",
      "Properties":{
        "AssumeRolePolicyDocument":{
          "Statement":[
            {
              "Sid":"",
              "Effect":"Allow",
              "Principal":{
                "Service":"ec2.amazonaws.com"
              },
              "Action":"sts:AssumeRole"
            }
          ]
        },
        "Path":"/"
      }
    },

To define an IAM Role, use the IAM Role resource in CloudFormation: AWS::IAM::Role. The snippet below is for the EC2 role.

    "EC2Role":{
      "Type":"AWS::IAM::Role",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "AssumeRolePolicyDocument":{
          "Statement":[
            {
              "Effect":"Allow",
              "Principal":{
                "Service":[
                  "ec2.amazonaws.com"
                ]
              },
              "Action":[
                "sts:AssumeRole"
              ]
            }
          ]
        },
        "Path":"/",
        "Policies":[
          {
            "PolicyName":"ecs-service",
            "PolicyDocument":{
              "Statement":[
                {
                  "Effect":"Allow",
                  "Action":[
                    "ecs:CreateCluster",
                    "ecs:RegisterContainerInstance",
                    "ecs:DeregisterContainerInstance",
                    "ecs:DiscoverPollEndpoint",
                    "ecs:Submit*",
                    "ecr:*",
                    "ecs:Poll"
                  ],
                  "Resource":"*"
                }
              ]
            }
          }
        ]
      }
    },

The snippet below is for defining the ECS IAM role.

    "EcsServiceRole":{
      "Type":"AWS::IAM::Role",
      "Properties":{
        "AssumeRolePolicyDocument":{
          "Statement":[
            {
              "Effect":"Allow",
              "Principal":{
                "Service":[
                  "ecs.amazonaws.com"
                ]
              },
              "Action":[
                "sts:AssumeRole"
              ]
            }
          ]
        },
        "Path":"/",
        "Policies":[
          {
            "PolicyName":"ecs-service",
            "PolicyDocument":{
              "Statement":[
                {
                  "Effect":"Allow",
                  "Action":[
                    "elasticloadbalancing:Describe*",
                    "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
                    "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
                    "ec2:Describe*",
                    "ec2:AuthorizeSecurityGroupIngress"
                  ],
                  "Resource":"*"
                }
              ]
            }
          }
        ]
      }
    },

EC2

To define security group ingress within a VPC, use the SecurityGroupIngress resource in CloudFormation: AWS::EC2::SecurityGroupIngress.

    "InboundRule":{
      "Type":"AWS::EC2::SecurityGroupIngress",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "IpProtocol":"tcp",
        "FromPort":"0",
        "ToPort":"65535",
        "SourceSecurityGroupId":{
          "Fn::GetAtt":[
            "TargetSG",
            "GroupId"
          ]
        },
        "GroupId":{
          "Fn::GetAtt":[
            "TargetSG",
            "GroupId"
          ]
        }
      }
    },

To define the security group egress within a VPC, use the SecurityGroupEgress resource in CloudFormation: AWS::EC2::SecurityGroupEgress.

    "OutboundRule":{
      "Type":"AWS::EC2::SecurityGroupEgress",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "IpProtocol":"tcp",
        "FromPort":"0",
        "ToPort":"65535",
        "DestinationSecurityGroupId":{
          "Fn::GetAtt":[
            "TargetSG",
            "GroupId"
          ]
        },
        "GroupId":{
          "Fn::GetAtt":[
            "SourceSG",
            "GroupId"
          ]
        }
      }
    },

To define the security group within a VPC, use the SecurityGroup resource in CloudFormation: AWS::EC2::SecurityGroup.

    "SourceSG":{
      "Type":"AWS::EC2::SecurityGroup",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "VpcId":{
          "Ref":"MyVPC"
        },
        "GroupDescription":"Sample source security group",
        "SecurityGroupIngress":[
          {
            "IpProtocol":"tcp",
            "FromPort":"80",
            "ToPort":"80",
            "CidrIp":"0.0.0.0/0"
          }
        ],
        "Tags":[
          {
            "Key":"Name",
            "Value":{
              "Fn::Join":[
                "",
                [
                  {
                    "Ref":"AWS::StackName"
                  },
                  "-SourceSG"
                ]
              ]
            }
          }
        ]
      }
    },

ELB

To define the ELB, use the LoadBalancer resource in CloudFormation: AWS::ElasticLoadBalancing::LoadBalancer.

    "EcsElb":{
      "Type":"AWS::ElasticLoadBalancing::LoadBalancer",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "Subnets":[
          {
            "Ref":"publicSubnet01"
          },
          {
            "Ref":"publicSubnet02"
          }
        ],
        "Listeners":[
          {
            "LoadBalancerPort":"80",
            "InstancePort":"80",
            "Protocol":"HTTP"
          }
        ],
        "SecurityGroups":[
          {
            "Ref":"SourceSG"
          },
          {
            "Ref":"TargetSG"
          }
        ],
        "HealthCheck":{
          "Target":"HTTP:80/",
          "HealthyThreshold":"2",
          "UnhealthyThreshold":"10",
          "Interval":"30",
          "Timeout":"5"
        }
      }
    },

Summary

In this first part of the series, you learned how to use CloudFormation to fully automate the provisioning of the EC2 Container Service and Docker, including the ELB, Auto Scaling, and VPC resources. You also learned how to set up a CodeCommit repository.

In the next and last part of this series, you’ll learn how to orchestrate all of the changes into a deployment pipeline to achieve Continuous Delivery using CodePipeline and Jenkins so that any change made to the CodeCommit repo can be deployed to production in an automated fashion. I’ll provide access to all the code resources in part 2 of this series. Let us know if you have any comments or questions @stelligent or @paulduvall.

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “everything-as-code” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

Resources

Here’s a list of some of the resources described in this post:

Acknowledgements

My colleague Jeff Bachtel provided the thoughts on reasons why some teams might choose to use Docker and ECS over serverless. I also used several resources from AWS including the PHP sample app, the Introduction to AWS CodeCommit video, the CodePipeline Starter Kit and the ECS CloudFormation snippets.

Security Integration Testing (Part 3): Integrating with a Continuous Delivery pipeline

Continuous Security: Security in the Continuous Delivery Pipeline is a series of articles addressing security concerns and testing in the Continuous Delivery pipeline. This is the seventh article in the series.

Introduction

The purpose of this blog series is to show how AWS Config and Lambda can be used to add Security Integration tests to a Continuous Delivery pipeline. Part 1 covered setting up the Config service and creating AWS-managed Config Rules. Part 2 stepped through the process of running Stelligent’s Config-Rule-Status tool to create and deploy Lambda-backed Config Rules. Here in Part 3 I will expand on that topic and show how to run local functional tests for the Lambda-backed rules and how to manage versioning and deployment to AWS. Finally, I will wrap up the series by describing how to use the “Tester” Lambda function to add a security integration test to a Continuous Delivery pipeline.

Before diving into the technical details, here’s a refresher on what this is all about.  Config-Rule-Status sets up the Config service and Config Rule monitoring.  Then, as represented in the image below, a CD pipeline step will call out to the Tester lambda function to get the security compliance status of the infrastructure.  If the status is non-compliant then the pipeline stops.

crs-arch-diagram2
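
To make that concrete, the pipeline step sketched below is roughly what this post builds toward: invoke the Tester Lambda function and fail the step unless the overall result is PASS. The function name and prod alias match the examples later in this post, and jq is assumed to be available on the agent running the step.

# Invoke the Tester Lambda function and capture its response
aws lambda invoke --function-name ConfigRuleStatus-tester \
  --qualifier prod --region us-east-1 tester-output.json

# Fail the pipeline step if the overall result is not PASS
result=$(jq -r '.result' tester-output.json)
echo "Compliance status: $result"
[ "$result" = "PASS" ] || exit 1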

Config-Rule-Status…continued

In Part 2 of this series I showed how to install and configure the Config-Rule-Status tool. I have also included a quick summary of those steps here. If you were following along in the previous post and already have it installed, you can skip this section.

Install (quickly revisited)

# Install Serverless
npm install --global serverless@0.5.5

# Install Gulp
npm install --global gulp-cli

# clone the repo
git clone https://github.com/stelligent/config-rule-status.git

# enter the project directory
cd config-rule-status

# install NPM packages
npm install

# initialize the project
gulp init \
--region us-east-1 \
--stage prod \
--name config-rule-status \
--awsProfile yourProfileName \
--email user@company.com

Local Lambda tests

With a simple gulp task the Lambda functions will be tested locally and test coverage will be analyzed. This task will also be run by the build task, but here I’m showing what happens when you run it by itself.

bash-3.2$ gulp test
[10:11:41] Using gulpfile ~/GoogleDrive/Sync/projects/config-rule-status/gulpfile.js
[10:11:41] Starting 'lint'...
[10:11:41] Finished 'lint' after 230 ms
[10:11:41] Starting 'pre-test'...
[10:11:41] Finished 'pre-test' after 136 ms
[10:11:41] Starting 'test:local'...
[10:11:41] Finished 'test:local' after 632 μs
[10:11:41] Starting 'test'...
[10:11:41] Finished 'test' after 4.23 μs


  ec2CidrIngress
    ✓ should be rejected with undefined invokingEvent.configurationItem
    ✓ should be InvalidGroup
    ✓ should be COMPLIANT (1017ms)
    ✓ should be NON_COMPLIANT (1006ms)

  ec2CidrEgress
    ✓ should be InvalidGroup
    ✓ should be COMPLIANT (1004ms)
    ✓ should be NON_COMPLIANT (1006ms)

  IAM/userInlinePolicy
    ✓ should be NoSuchEntity
    ✓ should be COMPLIANT (1004ms)
    ✓ should be NON_COMPLIANT

  IAM/userManagedPolicy
    ✓ should be NoSuchEntity
    ✓ should be COMPLIANT
    ✓ should be NON_COMPLIANT

  IAM/userMFA
    ✓ should be NoSuchEntity on call to getUser
    ✓ should be COMPLIANT
    ✓ should be NON_COMPLIANT

  tester
    ✓ should error on describeConfigRules
    ✓ should PASS
    ✓ should FAIL


  19 passing (5s)

-----------------------------------|----------|----------|----------|----------|----------------|
File                               |  % Stmts | % Branch |  % Funcs |  % Lines |Uncovered Lines |
-----------------------------------|----------|----------|----------|----------|----------------|
 complianceTest/tester/            |    88.46 |       90 |      100 |    88.46 |                |
  handler.js                       |    88.46 |       90 |      100 |    88.46 |       25,28,29 |
 configRules/ec2CidrEgress/        |      100 |      100 |      100 |      100 |                |
  handler.js                       |      100 |      100 |      100 |      100 |                |
 configRules/ec2CidrIngress/       |      100 |      100 |      100 |      100 |                |
  handler.js                       |      100 |      100 |      100 |      100 |                |
 configRules/iamUserInlinePolicy/  |      100 |      100 |      100 |      100 |                |
  handler.js                       |      100 |      100 |      100 |      100 |                |
 configRules/iamUserMFA/           |      100 |      100 |      100 |      100 |                |
  handler.js                       |      100 |      100 |      100 |      100 |                |
 configRules/iamUserManagedPolicy/ |      100 |      100 |      100 |      100 |                |
  handler.js                       |      100 |      100 |      100 |      100 |                |
 lib/                              |    92.41 |    70.73 |      100 |    92.41 |                |
  aws.js                           |    90.91 |    71.43 |      100 |    90.91 |          29,30 |
  config.js                        |     87.5 |       50 |      100 |     87.5 |          24,25 |
  ec2.js                           |      100 |      100 |      100 |      100 |                |
  global.js                        |      100 |      100 |      100 |      100 |                |
  iam.js                           |      100 |      100 |      100 |      100 |                |
  rules.js                         |    89.23 |    67.86 |      100 |    89.23 |... 45,46,69,72 |
  template.js                      |      100 |      100 |      100 |      100 |                |
-----------------------------------|----------|----------|----------|----------|----------------|
All files                          |    92.47 |    74.51 |      100 |    92.47 |                |
-----------------------------------|----------|----------|----------|----------|----------------|


=============================== Coverage summary ===============================
Statements   : 92.47% ( 172/186 )
Branches     : 74.51% ( 38/51 )
Functions    : 100% ( 38/38 )
Lines        : 92.47% ( 172/186 )
================================================================================

Managing versions and deployments

When preparing to deploy the Lambda functions, the gulp build task needs to be run to copy and stage the files into a dist folder. It copies the function folders to dist and then injects each one with a copy of the lib folder and a copy of the node_modules. Running the gulp deploy:lambda task will then package and deploy all the functions that reside in that dist folder. Here is what the generated dist folder looks like. Notice that each function now contains its own copy of the shared dependencies that originally reside in the components folder.

crs-dist-folder

With the build step done, the Lambda functions are ready for deployment to AWS. When the deployment step is run, it deploys the function package and publishes a new version of the Lambda function. The --stage parameter included in the gulp deploy call is used to attach an alias to the version. This mechanism makes it possible to deploy a new version of the function code to a "dev" or "beta" stage, which allows you to smoke test it on AWS before deploying it to production. If the smoke test passes, the prod deployment is done by running the gulp deploy task again with --stage set to "prod". Once that is done, the Config Rules will use the newly deployed Lambda functions as their evaluation logic. This works because the Config Rules were defined to reference only specific aliased versions of the Lambda functions. So if you ran "gulp deploy:config" with --stage set to "prod", the Config Rules only use Lambda functions with a "prod" alias.
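
The gulp tasks manage this for you, but the underlying mechanism is plain Lambda versioning and aliasing; in AWS CLI terms (with an illustrative function name and version number) it amounts to something like:

# Publish the just-deployed code as an immutable version
aws lambda publish-version --function-name ConfigRuleStatus-tester

# Point the stage alias (e.g. prod) at that version
aws lambda update-alias --function-name ConfigRuleStatus-tester \
  --name prod --function-version 4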

In this example we can see that version 4 of the function has the “prod” alias (stage), and AWS allows us to reference it with an ARN that includes the alias.

crs-function-alias

Here we can see that the Config Rule generated by executing the gulp deploy task for the prod stage uses the alias-qualified ARN to define its association to a Lambda function.

crs-rule-alias2

So that’s a lot of explanation, but as you saw in Part 2, the execution required to configure all this is very simple. A couple of CLI tasks set it all up:

# deploy the Lambda functions
gulp deploy:lambda --stage prod --region us-east-1

# deploy the CFN stacks that will setup the Config service 
#   and create Config Rules for each of the Lambda functions
gulp deploy:config --stage prod --region us-east-1

Post deployment smoke test

This process was covered in Part 2, but it bears repeating here because it is the interface to the security testing functionality and integral to CD pipeline integration as shown in the following section.

# Run the tester Lambda to get the overall Config Rule compliance status
#  and verify that the deployment was successful.
gulp test:deployed --stage prod --region us-east-1

In this example the overall test result is FAIL because at least one of the Config Rules has an evaluation status of NON_COMPLIANT.

[10:05:17] Starting 'test:deployed'...
Serverless: Running tester...  
Serverless: -----------------  
Serverless: Success! - This Response Was Returned:  
Serverless: {
    "result": "FAIL",
    "results": [
        {
            "rule": "ConfigRuleStatus-EC2-SecGrp-Cidr-Ingress-Rule",
            "status": "NON_COMPLIANT",
            "result": "FAIL"
        },
        {
            "rule": "ConfigRuleStatus-EC2-VPC-Rule",
            "status": "COMPLIANT",
            "result": "PASS"
        },
        {
            "rule": "ConfigRuleStatus-IAM-MFA-Rule",
            "status": "NON_COMPLIANT",
            "result": "FAIL"
        },
        {
            "rule": "ConfigRuleStatus-IAM-User-InlinePolicy-Rule",
            "status": "NON_COMPLIANT",
            "result": "FAIL"
        },
        {
            "rule": "ConfigRuleStatus-IAM-User-ManagedPolicy-Rule",
            "status": "NON_COMPLIANT",
            "result": "FAIL"
        }
    ],
    "timestamp": "2016-04-06T14:05:19.047Z"
}  
[10:05:19] Finished 'test:deployed' after 1.72 s

CD Pipeline integration

The CD pipeline integration is where the value of this framework is realized. This is done by adding a simple call to the Tester Lambda function to your pipeline’s Acceptance stage. The particular implementation will vary depending on the tools that implement your pipeline, but the fundamental logic will be the same. The logic is as follows:

IF the object.result returned from the Tester Lambda function equals “PASS”
THEN the pipeline action succeeds
ELSE the pipeline action fails

Here is an example of how this could be implemented in JavaScript as a Mocha test.

'use strict';

var chai = require('chai');
var chaiAsPromised = require('chai-as-promised');
var expect = chai.expect;
var lambdaRunner = require('./lib/remoteRunner.js').lambdaRunner;
chai.use(chaiAsPromised);

describe('ConfigRuleStatus-tester', function() {
    it('should PASS',
        function() {
            var event = {};
            var lambdaResult = lambdaRunner('ConfigRuleStatus-tester', 'us-east-1', 'prod', event);
            return expect(lambdaResult).to.eventually.have.property('result', 'PASS');
        }
    );
});

If any Config Rules are non-compliant then the Tester Lambda will return an object containing “result”: “FAIL” and exit with an error. This will stop the pipeline and prevent vulnerabilities from being added to the production infrastructure.

  1 failing

  1) ConfigRuleStatus-tester should PASS:
     AssertionError: expected { Object (result, results, ...) } to have a property 'result' of 'PASS', but got 'FAIL'
  
events.js:154
      throw er; // Unhandled 'error' event
      ^
Error: 1 test failed.

Wrapping up

This blog series on Security Integration Testing has touched on many topics and tools, but the concept that ties it all together is what Stelligent calls “Continuous Security”. We at Stelligent believe that enforcing infrastructure security policies within the software delivery process is a fundamental requirement and should be an integral part of all Continuous Delivery pipelines. Static analysis of infrastructure code, security integration testing, and penetration testing are the three main building blocks of the Continuous Security capability. This series showed how to leverage AWS Config to enable continuous infrastructure monitoring, laying the groundwork to build the tooling to implement security integration tests that can be run during the Acceptance stage of a CD pipeline. Running these tests ensures that all existing infrastructure resources, and any newly provisioned resources, comply with the security rules. Thanks for reading and stay tuned for more posts on Continuous Security.

Stelligent is hiring! Do you enjoy working on complex problems like security in the CD pipeline? Do you believe in the “everything-as-code” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

Automating Penetration Testing in a CI/CD Pipeline (Part 2)

Continuous Security: Security in the Continuous Delivery Pipeline is a series of articles addressing security concerns and testing in the Continuous Delivery pipeline. This is the sixth article in the series.

In the first post, we discussed what OWASP ZAP is, how it’s installed and automating that installation process with Ansible. This second article of three will drill down into how to use the ZAP server, created in Part 1 for penetration testing your web-based application.

Penetration Test Script

If you recall the flow diagram (below) from the first post, we will need a way to talk to ZAP so that it can trigger a test against our application. To do this we’ll use the available ZAP API and wrap up the API in a Python script. The script will allow us to specify our ZAP server, target application server, trigger each phase of the penetration test and report our results.

ZAP-Basic-CI_CD-Flow - New Page (1)

The core of the ZAP API is to open our proxy, access the target application, spider the application, run an automated scan against it and fetch the results. This can be accomplished with just a handful of commands; however, our goal is to eventually get this bound into a CI/CD environment, so the script will have to be more versatile than a handful of commands.

The Python ZAP API can be easily installed via pip:

pip install python-owasp-zap-v2.4

We’ll start by breaking down what was outlined in the above paragraph. For learning purposes, these commands can be easily run from the Python command line.

from zapv2 import ZAPv2

# zap_hostname_or_ip and target_application_url are placeholders for your
# ZAP server and the application under test
target = "http://%s" % target_application_url
zap = ZAPv2(proxies={'http': "http://%s" % zap_hostname_or_ip,
                     'https': "https://%s" % zap_hostname_or_ip})
zap.urlopen(target)
zap.spider.scan(target)
zap.spider.status()
# when status is >= 100, the spider has completed and we can run our scan
zap.ascan.scan(target)
zap.ascan.status()
# when status is >= 100, the scan has completed and we can fetch results
print zap.core.alerts()

This snippet will print our results straight to STDOUT in a mostly human-readable format. To wrap all this up so that we can integrate it into an automated environment, we can change our output to JSON and accept incoming parameters for our ZAP host name and target URL. The following script takes the above commands and adds the features just mentioned.

The script can be called as follows:

./pen-test-app.py --zap-host zap_host.example.com:8080 --target http://app.example.com

Take note, the server that is launching our penetration test does not need to run ZAP itself, nor does it need to run the application we wish to run our pen test against.

Let's set up a very simple web-based application that we can use to test against. This isn't a real-world example, but it works well for the scope of this article. We'll utilize Flask, a lightweight Python web framework, to run a basic application that simply displays whatever was typed into the form field once submitted. The script can be downloaded here.

First Flask needs to be installed and the server started with the following:

pip install flask
python simple_server.py

The server will run on port 5000 over http. Using the example command above, we’ll run our ZAP penetration test against it as so:

./pen-test-app.py --zap-host 192.168.1.5:8080 --target http://192.168.1.73:5000
Accessing http://192.168.1.73:5000
Spidering http://192.168.1.73:5000
Spider completed
Scanning http://192.168.1.73:5000
Info: Scan completed; writing results.

Please note that the ZAP host is simply a url and a port, while the target must specify the protocol, either ‘http’ or ‘https’.

The ‘pen-test-app.py’ script is just an example of one of the many ways OWASP ZAP can be used in an automated manner. Tests can also be written to integrate Firefox (with ZAP as its proxy) and Selenium to mimic user interaction with your application. This could also be run from the same script in addition to the existing tests.

Scan and Report the Results

The ZAP API will return results to the ‘pen-test-app.py’ script, which in turn writes them to a JSON file, ‘results.json’. These results could be scanned for risk severities with something like "grep -ie 'high' -e 'medium' results.json". This does not give us much granularity in determining which tests are reporting errors, nor whether they are critical enough to fail an entire build pipeline.
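
(If you just want a quick look at the raw output, a jq filter can at least narrow down the interesting alerts; this assumes jq is installed and that results.json is the list of alert objects written by the script.)

jq '[.[] | select(.risk == "High" or .risk == "Medium") | {alert, risk}] | unique' results.json

For gating a pipeline, though, we want something more expressive.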

This is where a tool called Behave comes into play. Behave is a behavior-driven development tool based on the Gherkin language that allows the user to write test scenarios in a very human-readable format.

Behave can be easily installed with pip:

pip install behave

Once installed our test scenarios are placed into a feature file. For this example we can create a file called ‘pen_test.feature’ and create a scenario.

Feature: Pen test the Application
  Scenario: The application should not contain Cross Domain Scripting vulnerabilities
    Given we have valid json alert output
    When there is a cross domain source inclusion vulnerability
    Then none of these risk levels should be present
      | risk |
      | Medium |
      | High |

The above scenario gets broken down into steps. The ‘Given’, ‘When’ and ‘Then’ each correlate to a portion of Python code that tests that statement. The ‘risk’ portion is a table that will be passed to our ‘Then’ statement. This can be read as: “If the scanner produced valid JSON, succeed if there are no cross-domain source inclusion vulnerabilities, or only ones with ‘Low’ severity.”

With the feature file in place, each step must now be written. A directory must be created called ‘steps’. Inside the ‘steps’ directory we create a file with the same name as the feature file but with a ‘.py’ extension instead of a ‘.feature’ extension. The following example contains the code for each step above to produce a valid test scenario.

import json
import re
import sys

from behave import *

results_file = 'results.json'

@given('we have valid json alert output')
def step_impl(context):
    with open(results_file, 'r') as f:
        try:
            context.alerts = json.load(f)
        except Exception as e:
            sys.stdout.write('Error: Invalid JSON in %s: %s\n' %
                             (results_file, e))
            assert False

@when('there is a cross domain source inclusion vulnerability')
def step_impl(context):
    # Collect any alerts whose name mentions cross-domain/cross-site inclusion
    pattern = re.compile(r'cross(?:-|\s+)(?:domain|site)', re.IGNORECASE)
    matches = list()

    for alert in context.alerts:
        if pattern.match(alert['alert']) is not None:
            matches.append(alert)
    context.matches = matches
    assert True

@then('none of these risk levels should be present')
def step_impl(context):
    high_risks = list()

    risk_list = list()
    for row in context.table:
        risk_list.append(row['risk'])

    for alert in context.matches:
        if alert['risk'] in risk_list:
            if not any(n['alert'] == alert['alert'] for n in high_risks):
                high_risks.append(dict({'alert': alert['alert'],
                                        'risk': alert['risk']}))

    if len(high_risks) > 0:
        # Report every failing alert before failing the step
        sys.stderr.write("The following alerts failed:\n")
        for risk in high_risks:
            sys.stderr.write("\t%-5s: %s\n" % (risk['alert'], risk['risk']))
        assert False

    assert True

To run the above test simply type ‘behave’ from the command line.

behave
 
Feature: Pen test the Application # pen_test.feature:1

  Scenario: The application should not contain Cross Domain Scripting vulnerabilities # pen_test.feature:7
    Given we have valid json alert output # steps/pen_test.py:14 0.001s
    When there is a cross domain source inclusion vulnerability # steps/pen_test.py:25 0.000s
    Then none of these risk levels should be present # steps/pen_test.py:67 0.000s
      | risk |
      | Medium |
      | High |

1 feature passed, 0 failed, 0 skipped
1 scenario passed, 0 failed, 0 skipped
3 steps passed, 0 failed, 0 skipped, 0 undefined
Took 0m0.001s

We can clearly see what was run and each result. If this was run from a Jenkins server, the return code will be read and the job will succeed. If a step fails, behave will return non-zero, triggering Jenkins to fail the job. If the job fails, it's up to the developer to investigate the pipeline, find the point where it failed, log in to the Jenkins server, and view the console output to see which test failed. This may not be the most ideal method. We can tell behave that we want our output in JSON so that another script can consume it, reformat it into something an existing reporting mechanism could use, and upload it to a central location.

To change behave’s behavior to dump JSON:

behave --no-summary --format json.pretty > behave_results.json

A reporting script can either read the behave_results.json file or read the output piped directly from behave. We'll discuss more regarding this in the followup post.

Summary

If you’ve been following along since the first post, we have learned how to set up our own ZAP service, have the ZAP service penetration test a target web application, and examine the results. This may be a suitable scenario for many systems. However, integrating this into a full CI/CD pipeline is the optimal and most efficient way to use it.

In part three we will delve into how to fully integrate ZAP so that not only will your application involve user, acceptance and capacity testing, it will now pass through security testing before reaching your end users.

Stelligent is hiring! Do you enjoy working on complex problems like security in the CD pipeline? Do you believe in the “everything-as-code” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

Create a Pipeline for Elastic Beanstalk in CodePipeline using CloudFormation and CodeCommit

In Building Continuous Deployment on AWS with AWS CodePipeline, Jenkins and AWS Elastic Beanstalk, AWS describes how to manually configure CodePipeline to deploy an Elastic Beanstalk application. In this post, after describing how to create and connect to a new CodeCommit repository, I’ll explain how to fully automate the provisioning of all of the AWS resources in CloudFormation to achieve Continuous Delivery for a Node.js application in Elastic Beanstalk. This includes CloudFormation, CodeCommit, CodePipeline, Elastic Beanstalk, and IAM, using a sample Node.js application provided by AWS.

AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS.[1]

 

codepipeline_beanstalk

Create and Connect to a CodeCommit Repository

Follow these instructions for creating and connecting to an AWS CodeCommit repository: Create and Connect to an AWS CodeCommit Repository. Take note of the repository name as you’ll be using it as a CloudFormation user parameter later.

Create an Elastic Beanstalk Stack

  • Go to the AWS Elastic Beanstalk Console and click Create New Application.
  • Enter your Application name and Description
  • Click Create web server and choose Next
  • For this walkthrough, select Node.js as your platform from Predefined configuration and use the default Environment type (Load balancing, auto scaling) and choose Next
  • For the Source option of the Application Version section, choose S3 URL and assuming you’re in the us-east-1 region, enter https://elasticbeanstalk-samples-us-east-1.s3.amazonaws.com/nodejs-sample.zip. Leave the defaults for the Deployment Limits option and choose Next. If you’re in a different region, modify the URL as appropriate.
  • Choose Next from the Environment Information page.
  • Choose Next from the Additional Resources page
  • Leave the default options on the Configuration Details page and choose Next
  • Leave the default options on the Environment Tags page and choose Next
  • Leave the default options on the Permissions page and choose Next
  • From the Review page, choose Launch

You’ll wait about 5-10 minutes for the application to launch into an environment. Once it completes successfully, your Elastic Beanstalk console will look similar to the one below.

 

elastic_beanstalk_manual

Get the Beanstalk Configuration

From a client on which you’ve installed and configured the AWS CLI, type the following command (replacing ENVIRONMENT and APPLICATION with your Elastic Beanstalk environment and application names):

aws elasticbeanstalk describe-configuration-settings --environment-name ENVIRONMENT \
--application-name APPLICATION > beanstalk.json

When you open the JSON file (or view the output if you didn’t redirect it to a file), you’ll see output similar to the following JSON fragment.

{
    "ConfigurationSettings": [
        {
            "ApplicationName": "EBPipeline-nodeApplication-1QOSJOIITHEZ6",
            "EnvironmentName": "EBPi-node-1UUQR8I2565GT",
            "Description": "AWS ElasticBeanstalk Sample Node Environment",
            "DeploymentStatus": "deployed",
            "DateCreated": "2016-05-05T15:41:07Z",
            "OptionSettings": [
                {
                    "OptionName": "Availability Zones",
                    "ResourceName": "AWSEBAutoScalingGroup",
                    "Namespace": "aws:autoscaling:asg",
                    "Value": "Any"
                },
                {
                    "OptionName": "Cooldown",
                    "ResourceName": "AWSEBAutoScalingGroup",
                    "Namespace": "aws:autoscaling:asg",
                    "Value": "360"
                },...

There are several default values that get automatically configured in every Elastic Beanstalk stack. Later, in the Configuration Template section, you’ll see that I chose to specify only three settings: MinSize and MaxSize on the Auto Scaling group, and an EnvironmentType of LoadBalanced so that the stack uses a load-balanced solution with an ELB.

Create a CloudFormation Template based on the Beanstalk Configuration

In this solution, the CloudFormation template is composed of several components: an Elastic Beanstalk application, an application version, and a configuration template. (“Under the hood”, Elastic Beanstalk itself uses CloudFormation to launch its resources.) The solution also creates an IAM role and policy that grant the CodePipeline stages and actions the access they need, including access to CodeCommit. Finally, the template configures the pipeline to use CodeCommit for its Source stage and action. An illustration of this architecture is shown in the figure below.

codepipeline_eb_arch_2

Each of the key components of this CloudFormation template is described in greater detail in the following sections.
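The IAM role doesn’t get its own section below, so here is a trimmed sketch of what a CodePipeline service role can look like in the template. The logical name, policy name, and exact set of actions are illustrative assumptions rather than a copy of the sample template, which grants whatever additional permissions its stages require.

    "CodePipelineRole":{
      "Type":"AWS::IAM::Role",
      "Properties":{
        "AssumeRolePolicyDocument":{
          "Statement":[
            {
              "Effect":"Allow",
              "Principal":{
                "Service":[
                  "codepipeline.amazonaws.com"
                ]
              },
              "Action":[
                "sts:AssumeRole"
              ]
            }
          ]
        },
        "Path":"/",
        "Policies":[
          {
            "PolicyName":"codepipeline-service",
            "PolicyDocument":{
              "Statement":[
                {
                  "Effect":"Allow",
                  "Action":[
                    "codecommit:GetBranch",
                    "codecommit:GetCommit",
                    "codecommit:UploadArchive",
                    "codecommit:GetUploadArchiveStatus",
                    "s3:GetObject",
                    "s3:PutObject",
                    "elasticbeanstalk:*"
                  ],
                  "Resource":"*"
                }
              ]
            }
          }
        ]
      }
    },

The pipeline resource references this role via its RoleArn property, as shown later in the CodePipeline section.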

Application

The application is the starting point for the rest of the solution. All you do here is define the application (with a description) so that the other components have a namespace to attach their configuration to.

    "nodeApplication":{
      "Type":"AWS::ElasticBeanstalk::Application",
      "Properties":{
        "Description":"AWS Elastic Beanstalk Sample Application"
      }
    },

Application Version

In the snippet below, you see that I’m defining an application version. A required property is the SourceBundle. Currently, SourceBundle requires that we use an S3 location. This might be a little confusing as we’re attempting to use CodeCommit as our “source of truth” with the pipeline. Rest assured, the solution still treats CodeCommit as the source of truth in the pipeline, but using S3 is a current workaround for the initial definition. Keep in mind that the integration between CodeCommit and CodePipeline was just released a few weeks ago – at the time of this writing. Basically, the Beanstalk application version uses S3 once and then CodePipeline uses CodeCommit for any changes thereafter.

In the example below, if you’re running in the Northern Virginia (us-east-1) region, the S3 bucket and key translate to https://elasticbeanstalk-samples-us-east-1.s3.amazonaws.com/nodejs-sample.zip.

    "nodeApplicationVersion":{
      "Type":"AWS::ElasticBeanstalk::ApplicationVersion",
      "Properties":{
        "ApplicationName":{
          "Ref":"nodeApplication"
        },
        "Description":"AWS ElasticBeanstalk Sample Application Version",
        "SourceBundle":{
          "S3Bucket":{
            "Fn::Join":[
              "-",
              [
                "elasticbeanstalk-samples",
                {
                  "Ref":"AWS::Region"
                }
              ]
            ]
          },
          "S3Key":"nodejs-sample.zip"
        }
      }
    },

Configuration Template

The Configuration Template snippet below defines the configuration of the Elastic Beanstalk stack. There are only three option settings in this example, but Beanstalk provides many configuration options that you can add to your template. The most relevant part of this example is the SolutionStackName, as it tells Beanstalk what kind of application it’s going to define and deploy. In this case, we’re using Node.js on Linux. You can find a list of Elastic Beanstalk’s supported platforms at Supported Platforms. The solution stack names are exacting (they include spaces, mixed case, dots, and version numbers), so be sure to copy and paste the exact string for your preferred, supported platform version; the aws elasticbeanstalk list-available-solution-stacks CLI command also returns the current list.

    "nodeConfigurationTemplate":{
      "Type":"AWS::ElasticBeanstalk::ConfigurationTemplate",
      "Properties":{
        "ApplicationName":{
          "Ref":"nodeApplication"
        },
        "Description":"AWS ElasticBeanstalk Sample Configuration Template",
        "OptionSettings":[
          {
            "Namespace":"aws:autoscaling:asg",
            "OptionName":"MinSize",
            "Value":"2"
          },
          {
            "Namespace":"aws:autoscaling:asg",
            "OptionName":"MaxSize",
            "Value":"6"
          },
          {
            "Namespace":"aws:elasticbeanstalk:environment",
            "OptionName":"EnvironmentType",
            "Value":"LoadBalanced"
          }
        ],
        "SolutionStackName":"64bit Amazon Linux 2015.09 v2.0.5 running Node.js"
      }
    },

Environment

The Elastic Beanstalk environment is essentially a “container” that ties together the application, application version, and configuration resources. The application is deployed onto the environment, which consists of one or more EC2 instances provisioned according to that configuration.

    "nodeEnvironment":{
      "Type":"AWS::ElasticBeanstalk::Environment",
      "DependsOn":[
        "nodeApplication",
        "nodeConfigurationTemplate",
        "nodeApplicationVersion"
      ],
      "Properties":{
        "ApplicationName":{
          "Ref":"nodeApplication"
        },
        "Description":"AWS ElasticBeanstalk Sample Node Environment",
        "TemplateName":{
          "Ref":"nodeConfigurationTemplate"
        },
        "VersionLabel":{
          "Ref":"nodeApplicationVersion"
        }
      }
    },

CodePipeline

To get started with CodePipeline, you can manually create a two-stage pipeline and use CodeCommit as your Source and Elastic Beanstalk as your deployment provider. For more information, see Create a Pipeline using the AWS CodePipeline Console. The basic steps are:

  1. Go to the CodePipeline Console
  2. Select Create pipeline.
  3. Enter a Pipeline name.
  4. Choose AWS CodeCommit as the Source provider
    1. Choose a Repository name and Branch name from the repository you created in the Create and Connect to a CodeCommit Repository section earlier in this post
  5. Select Next step.
  6. Choose No Build as the Build provider and select Next step.
  7. Choose AWS Elastic Beanstalk as the Deployment provider and enter the Application name and Environment name from the one you manually created before and select Next step.
  8. Choose a Role name.
  9. Click Create pipeline.

Once your pipeline successfully runs, go to your AWS CLI and type the following (replacing MYPIPELINE with the name of your pipeline).

aws codepipeline get-pipeline --name MYPIPELINE > MYPIPELINE.json

You’ll then use the contents of the JSON file to create your CodePipeline stack in CloudFormation, as shown in the snippet below. After copying the contents, be sure to update your template to use title case rather than camel case for the attribute names in order to conform to the CloudFormation DSL.
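For instance, the get-pipeline output for the Source stage looks roughly like the fragment below, with keys in camel case (the account ID, repository, and pipeline names are placeholders); compare it with the title-case Stages snippet that follows.

{
    "pipeline": {
        "name": "MYPIPELINE",
        "roleArn": "arn:aws:iam::123456789012:role/AWS-CodePipeline-Service",
        "stages": [
            {
                "name": "Source",
                "actions": [
                    {
                        "inputArtifacts": [],
                        "name": "Source",
                        "actionTypeId": {
                            "category": "Source",
                            "owner": "AWS",
                            "version": "1",
                            "provider": "CodeCommit"
                        },
                        "outputArtifacts": [
                            {
                                "name": "MyApp"
                            }
                        ],
                        "configuration": {
                            "BranchName": "master",
                            "RepositoryName": "YOURCODECOMMITREPONAME"
                        },
                        "runOrder": 1
                    }
                ]
            },...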

"Stages":[
          {
            "Name":"Source",
            "Actions":[
              {
                "InputArtifacts":[

                ],
                "Name":"Source",
                "ActionTypeId":{
                  "Category":"Source",
                  "Owner":"AWS",
                  "Version":"1",
                  "Provider":"CodeCommit"
                },
                "OutputArtifacts":[
                  {
                    "Name":"MyApp"
                  }
                ],
                "Configuration":{
                  "BranchName":{
                    "Ref":"RepositoryBranch"
                  },
                  "RepositoryName":{
                    "Ref":"RepositoryName"
                  }
                },
                "RunOrder":1
              }
            ]
          },
          {
            "Name":"Beta",
            "Actions":[
              {
                "InputArtifacts":[
                  {
                    "Name":"MyApp"
                  }
                ],
                "Name":"EbApp",
                "ActionTypeId":{
                  "Category":"Deploy",
                  "Owner":"AWS",
                  "Version":"1",
                  "Provider":"ElasticBeanstalk"
                },
                "OutputArtifacts":[

                ],
                "Configuration":{
                  "ApplicationName":{
                    "Ref":"nodeApplication"
                  },
                  "EnvironmentName":{
                    "Ref":"nodeEnvironment"
                  }
                },
                "RunOrder":1
              }
            ]
          }
        ],
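For context, the Stages array above lives inside an AWS::CodePipeline::Pipeline resource alongside the service role and an S3 artifact store. A trimmed sketch is shown below; the logical names and bucket name are placeholders rather than the ones used in the sample template.

    "CodePipelineStack":{
      "Type":"AWS::CodePipeline::Pipeline",
      "Properties":{
        "RoleArn":{
          "Fn::GetAtt":[
            "CodePipelineRole",
            "Arn"
          ]
        },
        "ArtifactStore":{
          "Type":"S3",
          "Location":"my-codepipeline-artifact-bucket"
        },
        "Stages":[
          ...
        ]
      }
    },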

Launch the Stack

To launch the CloudFormation stack, simply click the button below to open the template from the CloudFormation console in your AWS account. You’ll need to enter a value for the CodeCommit Repository Name, and, optionally, CodeCommit Repository Branch. You’ll be charged for the use of CodePipeline, EC2, S3, and other AWS resources.

codepipeline_beanstalk_cfn.jpg

To launch the same stack from your AWS CLI, type the following (while modifying the same parameter value described above):

aws cloudformation create-stack --stack-name EBPipelineStack \
--template-url https://s3.amazonaws.com/stelligent-training-public/public/codepipeline/codepipeline-eb.json \
--region us-east-1 --disable-rollback --capabilities="CAPABILITY_IAM" \
--parameters ParameterKey=RepositoryName,ParameterValue=YOURCODECOMMITREPONAME \
ParameterKey=RepositoryBranch,ParameterValue=master

Outputs

Once the CloudFormation stack successfully launches, there are several outputs, but the two most relevant are AppURL and CodePipelineURL. You can click on the AppURL value to launch the Elastic Beanstalk application from the ELB endpoint. The CodePipelineURL output value launches the generated pipeline from the CodePipeline console. See the screenshot below.

codepipeline_beanstalk_cfn_outputs   
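These values come from the template’s Outputs section. A sketch of how such outputs can be constructed is shown below; the logical resource names match the snippets above, and the console URL format is an assumption, so the sample template’s exact expressions may differ.

  "Outputs":{
    "AppURL":{
      "Description":"URL of the Elastic Beanstalk application",
      "Value":{
        "Fn::Join":[
          "",
          [
            "http://",
            {
              "Fn::GetAtt":[
                "nodeEnvironment",
                "EndpointURL"
              ]
            }
          ]
        ]
      }
    },
    "CodePipelineURL":{
      "Description":"URL of the generated pipeline in the CodePipeline console",
      "Value":{
        "Fn::Join":[
          "",
          [
            "https://console.aws.amazon.com/codepipeline/home?region=",
            {
              "Ref":"AWS::Region"
            },
            "#/view/",
            {
              "Ref":"CodePipelineStack"
            }
          ]
        ]
      }
    }
  }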

Access the Application

By clicking on the AppURL Output value in CloudFormation, you’ll launch the Node.js application deployed via Elastic Beanstalk. You should see a page like the one below.

eb_app_before.jpg

Commit Changes to CodeCommit

Make some visual changes to the code and commit them to your CodeCommit repository to see them deployed through your pipeline. Perform these actions from the directory where you cloned a local copy of your CodeCommit repo (i.e. the directory created by your git clone command). Some example command-line operations are shown below.

git commit -am "change color to red"
git push

Once you push these changes to the CodeCommit repo, CodePipeline will detect them and kick off the pipeline to deploy them. Once the pipeline completes, you should see your changes applied to the application, similar to what you see in the figure below.
eb_app_after

Summary

In this post, you learned how to use CloudFormation to fully automate the provisioning of an Elastic Beanstalk stack along with a CodePipeline pipeline that uses CodeCommit as its version-control repository, so that whenever a change is made to the Git repo, it is automatically applied to your Elastic Beanstalk application. By using a pipeline, you can apply Continuous Delivery, running your changes through all the tests and other checks so that you can deliver them to production whenever there’s a business need to do so.

Sample Code

The code for the examples demonstrated in this post is located at https://github.com/stelligent/stelligent_commons/blob/master/cloudformation/eb/codepipeline-eb.json. Let us know if you have any comments or questions @stelligent or @paulduvall.

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “everything-as-code” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

Resources