Application Auto Scaling with Amazon ECS

In this blog post, you’ll see an example of Application Auto Scaling for Amazon ECS (EC2 Container Service). Automatic scaling of the container instances in your ECS cluster has been available for quite some time, but until recently you could not scale the tasks in your ECS service with built-in AWS functionality. In May of 2016, Automatic Scaling with Amazon ECS was announced, which lets us configure elasticity into the container services we deploy in Amazon’s cloud.

Developer Note: Jump to the “CloudFormation Examples” section to go right to the code!

Why should you auto scale your container services?

Efficient and effective scaling of your microservices is the reason to automatically scale your containers: capacity follows demand instead of being provisioned for peak load. If your primary goals include fault tolerance or elastic workloads, then combining cloud auto scaling technology with infrastructure as code is the key to success. With AWS Application Auto Scaling, you can quickly configure elasticity into your architecture in a repeatable and testable way.

Introducing CloudFormation Support

For the first few months after this feature launched, it was not available in AWS CloudFormation. Configuration was either a manual process in the AWS Console or a series of API calls made from the CLI or one of Amazon’s SDKs. Finally, as of August 2016, we can manage this configuration easily using CloudFormation.

The resource types you’re going to need to work with are:

  • AWS::ApplicationAutoScaling::ScalableTarget
  • AWS::ApplicationAutoScaling::ScalingPolicy
  • AWS::CloudWatch::Alarm
  • AWS::IAM::Role

The ScalableTarget and ScalingPolicy are the new resources that configure how your ECS service behaves when an alarm is triggered. In addition, you will need to create a new Role that gives the Application Auto Scaling service access to describe your CloudWatch Alarms and to modify your ECS service, such as increasing its Desired Count.

CloudFormation Examples

The below examples were written for AWS CloudFormation in the YAML format. You can plug these snippets directly into your existing templates with minimal adjustments necessary. Enjoy!

Step 1: Implement a Role

These permissions were gathered from various sources in the AWS documentation.

ApplicationAutoScalingRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Statement:
      - Effect: Allow
        Principal:
          Service:
          - application-autoscaling.amazonaws.com
        Action:
        - sts:AssumeRole
    Path: "/"
    Policies:
    - PolicyName: ECSBlogScalingRole
      PolicyDocument:
        Statement:
        - Effect: Allow
          Action:
          - ecs:UpdateService
          - ecs:DescribeServices
          - application-autoscaling:*
          - cloudwatch:DescribeAlarms
          - cloudwatch:GetMetricStatistics
          Resource: "*"

Step 2: Implement some alarms

The below alarm will initiate scaling based on container CPU Utilization.

AutoScalingCPUAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: Containers CPU Utilization High
    MetricName: CPUUtilization
    Namespace: AWS/ECS
    Statistic: Average
    Period: '300'
    EvaluationPeriods: '1'
    Threshold: '80'
    AlarmActions:
    - Ref: AutoScalingPolicy
    Dimensions:
    - Name: ServiceName
      Value:
        Fn::GetAtt:
        - YourECSServiceResource
        - Name
    - Name: ClusterName
      Value:
        Ref: YourECSClusterName
    ComparisonOperator: GreaterThanOrEqualToThreshold

Step 3: Implement the ScalableTarget

This resource attaches Application Auto Scaling to your ECS service and sets the limits within which it operates. Other than your MinCapacity and MaxCapacity, these settings are essentially fixed when used with ECS.

AutoScalingTarget:
  Type: AWS::ApplicationAutoScaling::ScalableTarget
  Properties:
    MaxCapacity: 20
    MinCapacity: 1
    ResourceId:
      Fn::Join:
      - "/"
      - - service
        - Ref: YourECSClusterName
        - Fn::GetAtt:
          - YourECSServiceResource
          - Name
    RoleARN:
      Fn::GetAtt:
      - ApplicationAutoScalingRole
      - Arn
    ScalableDimension: ecs:service:DesiredCount
    ServiceNamespace: ecs
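
If you’d rather poke at the API before committing to CloudFormation, an equivalent scalable target can be registered from the AWS CLI. This is only a sketch; the cluster name, service name, account ID, and role name below are placeholders for your own values.

# Registers the ECS service's DesiredCount as a scalable target (placeholder values)
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/your-cluster/your-service \
  --min-capacity 1 \
  --max-capacity 20 \
  --role-arn arn:aws:iam::123456789012:role/ApplicationAutoScalingRole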

Step 4: Implement the ScalingPolicy

This resource defines your exact scaling behavior: when to scale up or down, and by how much. Pay close attention to the StepAdjustments in the StepScalingPolicyConfiguration, as the documentation on this is very vague.

In the example below, we scale up by 2 containers when the metric is at or above the alarm threshold, and scale down by 1 container when it falls below the threshold. Take special note of how MetricIntervalLowerBound and MetricIntervalUpperBound work together: when unspecified, they are effectively positive infinity for the upper bound and negative infinity for the lower bound. Finally, note that these bounds are evaluated against aggregated metrics, meaning the Average, Minimum, or Maximum across your combined fleet of containers.

AutoScalingPolicy:
  Type: AWS::ApplicationAutoScaling::ScalingPolicy
  Properties:
    PolicyName: ECSScalingBlogPolicy
    PolicyType: StepScaling
    ScalingTargetId:
      Ref: AutoScalingTarget
    StepScalingPolicyConfiguration:
      AdjustmentType: ChangeInCapacity
      Cooldown: 60
      MetricAggregationType: Average
      StepAdjustments:
      - MetricIntervalLowerBound: 0
        ScalingAdjustment: 2
      - MetricIntervalUpperBound: 0
        ScalingAdjustment: -1
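
After the stack is up, it’s worth confirming that the policy is attached and watching the scaling activity as alarms fire. A rough sketch using the AWS CLI; again, the cluster and service names are placeholders.

# List the scaling policies registered for ECS services
aws application-autoscaling describe-scaling-policies --service-namespace ecs

# Show recent scale-out/scale-in activity for one service
aws application-autoscaling describe-scaling-activities \
  --service-namespace ecs \
  --resource-id service/your-cluster/your-service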

Wrapping It Up

Amazon Web Services continues to provide excellent resources for automation, elasticity and virtually unlimited scalability. As you can see, with a couple of solid examples in hand you can very quickly build in that on-demand elasticity and inherent fault tolerance. After you have your tasks auto scaling, I recommend checking out the documentation on how to scale your container instances as well, to provide the same benefits to your ECS cluster itself.

Deploying Microservices? Let mu help!

Support for ECS Application Auto Scaling is coming soon to Stelligent mu, which offers the fastest and most comprehensive platform for deploying microservices as containers.

Want to learn more about mu from its creators? Check out the DevOps in AWS Radio podcast or find more posts in our blog.

Additional Resources

Here are some of the supporting resources discussed in this post.

We’re Hiring!

Like what you’ve read? Would you like to join a team on the cutting edge of DevOps and Amazon Web Services? We’re hiring talented engineers like you. Click here to visit our careers page.


AWS CodeStar – Quickly develop, build, and deploy applications on AWS

AWS CodeStar is a new service that changes the way development teams deliver software in AWS. CodeStar makes the process of setting up software applications for continuous delivery easier to manage through integrated authorization and access management, centralized member collaboration, and automated environment provisioning.

[Image: Working with AWS CodeStar Teams, from the AWS CodeStar User Guide (http://docs.aws.amazon.com/codestar/latest/userguide/working-with-teams.html)]

With CodeStar you can now automatically create entire environments for your application and all of its associated AWS resources. CodeStar is especially useful for teams starting brand-new applications and projects. Because of its simplicity, development teams can create efficient software workflows that build, test, and release software on AWS much faster than before. Some of the benefits of CodeStar include:

  • Automatic Provisioning of Resources: When you create a project through CodeStar, AWS will automatically provision a handful of the underlying resources that will be part of your software’s environment through the use of AWS CloudFormation. Some of these resources could include AWS Elastic Beanstalk, AWS EC2 instances, AWS S3 Buckets, and an AWS CodeCommit repository. One of the most significant resources that CodeStar creates is a continuous delivery pipeline. This pipeline is built using AWS CodePipeline and initially contains two stages: a Source (Commit) stage and an Application (Deploy) stage. If you need additional stages, you can modify your CodePipeline pipeline accordingly.
  • Pre-built Code Templates: When you begin the process of creating a project with CodeStar, you are given the option to choose from many pre-built code templates used to build applications that will run on AWS Elastic Beanstalk, AWS EC2, or AWS Lambda. These templates come with sample applications that are already set up and ready to be modified, and you can choose from five programming languages: Ruby, Python, PHP, Java, and JavaScript. After you choose your language, you can choose from three ways of editing your project code: Visual Studio, Eclipse, or command line tools.

For the remainder of this blog I will demonstrate how to set up and build a CodeStar project using a Ruby on Rails template and deploy the sample application on an AWS EC2 instance.

CodeStar Project with Ruby on Rails

Creating your CodeStar Project

  1. The first thing you will need to do to create your CodeStar project is to log into your AWS console, go to the CodeStar console, and select “Create New Project”.
  2. You will be directed to a page that displays the variety of project templates you can choose from. The types of applications this service supports include templates ready to deploy on:
    1. AWS Elastic Beanstalk (Automated management of capacity and load balancing), Amazon EC2 with AWS CodeDeploy (Flexible deployment onto any type of instance), and AWS Lambda (Lambda is serverless technology and uses AWS CodeBuild to build your artifacts automatically)
      1. Side note: As of now it is not possible to create a CodeStar project via a CloudFormation template. It is also not possible to start a CodeStar project with your already-built application or to use GitHub as your code repository. The only way to achieve this would be to modify the Source stage of the CodePipeline that gets created for you once it is complete.
    2. For my example I am going to choose the “Ruby on Rails Web Application” that will be running on an Amazon EC2 instance.

[Screenshot: CodeStar project template selection]

3. You will then be prompted to enter the name for your project (Project name) and will be able to edit the Project ID as well. You can also choose whether or not to allow AWS CodeStar to administer AWS resources on your behalf by checking or unchecking the box at the bottom of the page. If you chose a template that has a project running on EC2 (such as my example), you will be able to edit the EC2 configuration as well. This includes choosing:

  1. Your own VPC (you have the choice of being assigned a default VPC and Subnet or choosing an existing one. You cannot create a VPC here.)
    1. Side note: To create an AWS VPC and a subnet you must go into the Networking & Content Delivery Console: VPC section and create them.
  2. Your Subnet to deploy your instance into
  3. The instance type (I chose t2.micro) 

[Screenshot: CodeStar project details and Amazon EC2 configuration]

4. Select your AWS EC2 key pair and choose “Create Project”.

5. You will then be able to choose how you want to edit your project code from the following three choices (Visual Studio, Eclipse, or Command line tools). For my example I chose Command line tools. At the bottom of the page will also be the code repository URL for your project and you can choose an access method between SSH and HTTPS.

6. The next page will be the Connect to your tools page which is where you’ll select your local machine’s operating system (macOS, Windows, Linux) and your connection method (HTTPS, SSH).

  1. For HTTPS connection: If you haven’t done so already, you will need to install a Git client on your local machine (there is a link to install it in Step 1). You will also need to generate Git credentials for your AWS IAM user by clicking the “here” link in Step 2. Once you have completed the first two steps, you can clone your repository onto your local machine by copying the Git command in Step 3 and pasting it into your terminal in whatever directory you would like. When you clone the repository you will be prompted for a user name and password, which are the Git credentials that you generated for your IAM user. Hit the “Skip” button below to continue on to your management dashboard.
  2. For SSH connection: If you haven’t done so already, you will need to install a Git client on your local machine (there is a link to install it in Step 1). You will then need to register your SSH public key (for help with this, follow the link in the Step 2 instructions). Once you have registered your SSH key, go to your ~/.ssh directory in your terminal and create a file named “config”. Add the following lines to this file:
Host git-codecommit.*.amazonaws.com
User Your-IAM-SSH-Key-ID-Here
IdentityFile ~/.ssh/Your-Private-Key-File-Name-Here

Once you have saved the file, you will need to ensure it has the right permissions by running the following command in your ~/.ssh directory:

chmod 600 config

After you have followed these steps you can clone the project repository onto your local machine by copying and pasting the command located in Step 4.
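
For reference, the clone command in Step 4 generally follows the CodeCommit SSH URL pattern shown below; the region and repository name here are placeholders for whatever CodeStar generated for your project.

git clone ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/your-codestar-repo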

As mentioned earlier in this article, if you selected the box that allows AWS to administer resources on your behalf when creating your CodeStar project, CodeStar creates a CloudFormation stack that automatically deploys the environment and resources for your application. Here is what the CloudFormation stack and its resources look like for the Ruby on Rails application running on an EC2 instance:

[Screenshot: CloudFormation stack and resources created by CodeStar for the Ruby on Rails project]

Pre-configured Management Dashboard

After you have created your CodeStar project, you will be given a pre-configured, centralized management dashboard from which you can view a variety of events related to your application project. Things that are viewable in the default dashboard include your:

  • Application’s resource activity metrics via AWS CloudWatch
  • Code commits history
  • Your application’s endpoint (in my example, a public EC2 DNS endpoint)
  • A visual of your AWS CodePipeline in which you can see real time progress of your software’s continuous delivery cycle.
  • You also have the option to add the Atlassian Jira Software extension to your dashboard so that you can directly track your application project’s issues and its collaborator’s tasks

[Screenshot: CodeStar project dashboard]

From the dashboard you can configure issue tracking, which enables you to integrate the Jira extension into your project for easy tracking. You are also able to set up the team members who will be given access to work on your project and determine which role they will have on it. You just have to pick their IAM user name, choose whether remote access is allowed, and select one of the following roles:

  • Viewer
  • Contributor
  • Owner

Start Modifying Your Rails Application

For this example I will open my sample Rails application by going to the application endpoint link on the CodeStar dashboard. The first modification I will make is to the opening “hello page” of the application. Here is what that page looks like when I go to the application endpoint:

[Screenshot: the sample Rails application’s hello page]

Assuming that you have cloned the Git repository for your project onto your local machine, you can now start to modify your Rails application and make changes using your own text editor. For this example I am just going to remove the links on the home page (/app/views/hello_page/hello.html.erb) and change some of the wording. After making my slight changes to the “hello page” and saving it, I can go into my Git repository in my local machine’s terminal and run the following commands to push my most recent changes:

git status
  • This will show you what changes have been made to your project

[Screenshot: git status output showing the modified hello page]

git add app/views/hello_page/hello.html.erb
  • This stages your changes to the hello page so they are ready to be committed
git commit -m “[your message about the changes that have been made]”
git push
  • This will push your newly modified project into your code pipeline and will automatically trigger the continuous deployment cycle.

Here is what will happen to your CodePipeline on your dashboard when you “git push” your changes:

[Screenshot: CodePipeline progress on the CodeStar dashboard]

Once the pipeline has succeeded through the Application stage, refresh the browser page with your application’s endpoint to see the new changes that have been made to your Rails application:

[Screenshot: the updated Rails application hello page]

From here on out you have a full Ruby on Rails application framework running on an Amazon EC2 instance, where you can start to build and modify your own custom application. For more information about what you can do with your new Rails application, refer to the README, which can be accessed by clicking on the “Code” box on the left side of your CodeStar dashboard.

Additional Resources

Here are some additional resources you might find useful:

Summary

In this post we talked about how to use the newly added AWS CodeStar service and discovered the benefits that it can offer to a variety of users. You learned about the different types of projects that CodeStar can create and how to easily interact with those projects upon their creation.

Let us know if you have any comments or questions @stelligent or @TreyMcElhattan

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “one-button everything” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

Using Parameter Store with AWS CodePipeline

Systems Manager Parameter Store is a managed service (part of AWS EC2 Systems Manager (SSM)) that provides a convenient way to efficiently and securely get and set commonly used configuration data across multiple resources in your software delivery lifecycle.

[Diagram: CodePipeline integration with EC2 Systems Manager Parameter Store]

In this post, we will be focusing on the basic usage of Parameter Store and how to effectively use it as part of a continuous delivery pipeline using AWS CodePipeline. The following describes some of the capabilities of Parameter Store and the resources with which they can be used:

  • Managed Service: Parameter Store is managed by AWS. This means that you won’t have to put in the engineering work to setup something like Vault, Zookeeper, etc. just to store the configuration that your application/service needs.
  • Access Controls: Through the use of AWS Identity and Access Management (IAM), access to Parameter Store can be limited by enabling or restricting access to the service itself, or by enabling or restricting access to particular parameters.
  • Encryption: The Parameter Store gives a user the ability to also encrypt parameters using the AWS Key Management Service (KMS). When creating a parameter, you can specify that the parameter is encrypted with a KMS key. 
  • Audit: All calls to Parameter Store are tracked and recorded in AWS CloudTrail so they can be audited.

At the end of this post, you will be able to launch an example solution via AWS CloudFormation.

Working with Parameter Store

Prerequisites

In order to follow the examples below, you’ll need to have the AWS CLI setup on your local workstation. You can find a guide to install the AWS CLI here.

Creating a Parameter in the Parameter Store

To manually create a parameter in the Parameter Store, there are a few easy steps to follow:

  1. The user must sign into their AWS account and go to the EC2 console.
  2. Under the Systems Manager Shared Resources section click on Parameter Store.
  3. Click Get Started Now or Create Parameter and input the following information:
    1. Name: The name that you want the parameter to be called
    2. Description(optional): A description of what the parameter does or contains
    3. Type: You can choose either a String, String List, or Secure String
  4. Click Create Parameter and it will bring you to the Parameter Store console where you can see your newly created parameter

To create a parameter using the AWS CLI, here are examples of creating a String, SecureString, and String List:

String:

 aws ssm put-parameter --name "HostedZoneName" --type "String" --value "stelligent.com."

StringList:

aws ssm put-parameter --name "HostedZoneNames" --type "StringList" --value "stelligent.com.,google.com.,amazon.com."

SecureString:

 aws ssm put-parameter --name "Password" --type "SecureString" --value "Password123$"
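
By default, SecureString parameters are encrypted with the account’s default SSM key. If you want a specific customer-managed KMS key instead, put-parameter accepts a --key-id option; the key alias below is a placeholder.

aws ssm put-parameter --name "Password" --type "SecureString" --key-id "alias/your-kms-key" --value "Password123$"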

After running these commands, your parameter store console would look something like this:

[Screenshot: Parameter Store console showing the newly created parameters]

Getting Parameter Values using the AWS CLI

To get a String, StringList, or SecureString parameter from the Parameter Store using the AWS CLI, you must use the following syntax in your terminal:

String:

aws ssm get-parameters --names "HostedZoneName"

The output in your terminal would look like this:

{
    "InvalidParameters": [], 
    "Parameters": [
        {
            "Type": "String", 
            "Name": "HostedZoneName", 
            "Value": "stelligent.com."
        }
    ]
}

And in the console:

[Screenshot: HostedZoneName parameter in the console]

StringList:

 aws ssm get-parameters --names "HostedZoneNames"

The output in your terminal would look like this:

{
    "InvalidParameters": [], 
    "Parameters": [
        {
            "Type": "StringList", 
            "Name": "HostedZoneNames", 
            "Value": "stelligent.com.,google.com.,amazon.com."
        }
    ]
}

And in the console:

[Screenshot: HostedZoneNames parameter in the console]

SecureString:

aws ssm get-parameters --names "Password"

The output in your terminal would look like this (the value of the parameter is encrypted):

{
    "InvalidParameters": [], 
    "Parameters": [
        {
            "Type": "SecureString", 
            "Name": "Password", 
            "Value": "AQECAHicQXIA+CERB7LyH8+YXXUK1vqiI87oM0Wq7kgMCmGqUQAAAGowaAYJKoZIhvcNAQcGoFswWQIBADBUBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDE0kvmQLY6Ertt5BGwIBEIAnlfTl1XxzRwUzkFCBYn8P0lJ6dOdjPNQNbYgjD1+KTk/SlNJznvrF"
        }
    ]
}

And in the console:

[Screenshot: Password parameter in the console]
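
If you do need the decrypted value back, for example from a script or build step whose role is allowed to use the KMS key, add the --with-decryption flag:

aws ssm get-parameters --names "Password" --with-decryption

The Value field in the response will then contain the plain-text password instead of the ciphertext shown above.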

Deleting a Parameter from the Parameter Store

To delete a parameter from the Parameter Store manually, you must use the following steps:

  1. Sign into your AWS account and go to the EC2 console.
  2. Under the Systems Manager Shared Resources section click on the Parameter Store tab.
  3. Select the parameter that you wish to delete
  4. Click the Actions button and select Delete Parameter from the menu

To delete a parameter using the AWS CLI, you must use the following syntax in your terminal (this works for String, StringList, and SecureString):

aws ssm delete-parameter --name "HostedZoneName"
aws ssm delete-parameter --name "HostedZoneNames"
aws ssm delete-parameter --name "Password"

Using Parameter Store in AWS CodePipeline

Parameter Store can be very useful when constructing and running a deployment pipeline. It can be used alongside a simple token-replacement script to dynamically generate configuration files without having to modify those files manually. This is useful because you can pass frequently used configuration data through a continuous delivery process easily and efficiently. An illustration of the AWS infrastructure architecture is shown below.

[Diagram: AWS infrastructure architecture for the Parameter Store pipeline]

In this example, we have a deployment pipeline modeled via AWS CodePipeline that consists of two stages: a Source stage and a Build stage.

First, let’s take a look at the Source stage.

MyCodePipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      ArtifactStore:
        Location: !Ref S3Bucket
        Type: S3
      RoleArn: !GetAtt [CodePipelineRole, Arn]
      Stages:
        - Name: Source
          Actions:
          - Name: GitHubSource
            ActionTypeId:
              Category: Source
              Owner: ThirdParty
              Provider: GitHub
              Version: 1
            OutputArtifacts:
              - Name: OutputArtifact
            Configuration:
              Owner: stelligent
              Repo: parameter-store-example
              Branch: master
              OAuthToken: !Ref GitHubToken

As part of the Source stage, the pipeline will get source assets from the GitHub repository, which contains the configuration file that will be modified along with a Ruby script that gets the parameters from the Parameter Store and replaces the variable tokens inside of the configuration file.

After the Source stage completes, there’s a Build stage, where we’ll be doing all of the actual work to modify our configuration file.

The Build stage uses the CodeBuild project (defined as the ConfigFileBuild action) to run the Ruby script that will modify the configuration file and replace the variable tokens inside of it with the requested parameters from the Parameter Store.

  ConfigFileBuild:
    Type: AWS::CodeBuild::Project
    Properties:
      Name: !Ref AWS::StackName
      Description: Changes sample configuration file
      ServiceRole: !GetAtt CodeBuildRole.Arn
      Artifacts:
        Type: CODEPIPELINE
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_LARGE
        Image: aws/codebuild/eb-ruby-2.3-amazonlinux-64:2.1.6
      Source:
        Type: CODEPIPELINE
        BuildSpec: |
          version: 0.1
          phases:
            pre_build:
              commands:
                - gem install aws-sdk
            build:
              commands:
                - ruby sample_ruby_ssm.rb
          artifacts:
            files:
              - '**/*'

The CodeBuild project contains the buildspec that actually runs the Ruby script that makes the configuration changes (sample_ruby_ssm.rb).


Here is what the Ruby script looks like:

require 'aws-sdk'

client = Aws::SSM::Client.new(region: 'us-east-1')
resp = client.get_parameters({
  names: ["HostedZoneName", "Password"], # required
  with_decryption: true,
})

hostedzonename = resp.parameters[0].value
password = resp.parameters[1].value

file_names = ['sample_ssm_config.json']

file_names.each do |file_name|
  text = File.read(file_name)

  # Display text for usability
  puts text

  # Substitute Variables
  new_contents = text.gsub(/HOSTEDZONE/, hostedzonename)
  new_contents = new_contents.gsub(/PASSWORD/, password)

  # To write changes to the file, use:
  File.open(file_name, "w") {|file| file.puts new_contents.to_s }
end

Here is what the configuration file with the variable tokens (HOSTEDZONE, PASSWORD) looks like before it gets modified:

{
  "Parameters" : {
    "HostedZoneName" : "HOSTEDZONE",
    "Password" : "PASSWORD"
  }
}

Here is what the configuration file would consist of after the Ruby script pulls the requested parameters from the Parameter Store and replaces the variable tokens (HOSTEDZONE, PASSWORD). The Password parameter is decrypted by the Ruby script in this process.

{
  "Parameters" : {
    "HostedZoneName" : "stelligent.com.",
    "Password" : "Password123$"
  }
}
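
If you want to reproduce this transformation locally before wiring it into CodeBuild, a rough run might look like the following, assuming your AWS credentials are configured (the script itself pins the region to us-east-1) and both parameters already exist in your Parameter Store:

gem install aws-sdk
ruby sample_ruby_ssm.rb
# sample_ssm_config.json should now contain the parameter values instead of the tokens
cat sample_ssm_config.json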

IMPORTANT NOTE: In the example above, the “Password” parameter is returned in plain text (Password123$). That happens because the Ruby script requests the secure string parameter with its decrypted value (with_decryption: true). The purpose of showing the example this way is purely to illustrate what returning multiple parameters into a configuration file looks like. In a real-world situation you would never want a password written out in plain text, because that presents security issues and is bad practice in general. To return the “Password” parameter value in its encrypted form, simply change “with_decryption: true” to “with_decryption: false” in the Ruby script’s get_parameters call. Here is what the modified configuration file would look like with the “Password” parameter returned in its encrypted form:

[Screenshot: configuration file with the Password parameter in its encrypted form]

Launch the Solution via CloudFormation

To run this deployment pipeline and see Parameter Store in action, you can click the “Launch Stack” button below which will take you directly to the CloudFormation console within your AWS account and load the CloudFormation template. Walk through the CloudFormation wizard to launch the stack. 

In order to be able to execute this pipeline you must have the following:

  • The AWS CLI already installed on your local workstation. You can find a guide to install the AWS CLI here
  • A generated GitHub OAuth token for your GitHub user. Instructions on how to generate an OAuth token can be found here
  • In order for the Ruby script that is part of this pipeline process to run you must create these two parameters in your Parameter Store:
    1. HostedZoneName
    2. Password
aws ssm put-parameter --name "HostedZoneName" --type "String" --value "stelligent.com."
aws ssm put-parameter --name "Password" --type "SecureString" --value "Password123$"

As you begin to launch the pipeline in CloudFormation (Launch Stack button is located below), you will be prompted to enter this one parameter:

  • GitHubToken (your generated GitHub OAuth token)

Once you have passed in this initial parameter, you can begin to launch the pipeline that will make use of the Parameter Store.

NOTE: You will be charged for your CodePipeline and CodeBuild usage.

Once the stack is CREATE_COMPLETE, click on the Outputs tab and then the value for the CodePipelineUrl output to view the pipeline in CodePipeline.
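
If you prefer the CLI for that last step, something like the following should return the pipeline URL once the stack has finished creating; the stack name is whatever you entered in the CloudFormation wizard.

aws cloudformation describe-stacks --stack-name your-stack-name \
  --query "Stacks[0].Outputs[?OutputKey=='CodePipelineUrl'].OutputValue" \
  --output text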

Additional Resources

Here are some additional resources you might find useful:

Summary

In this post, you learned how to use the EC2 Systems Manager Parameter Store and some of its features. You learned how to create, delete, get, and set parameters manually as well as through the use of the AWS CLI. You also learned how to use the Parameter Store in a practical situation by incorporating it in the process of setting configuration data that is used as part of a CodePipeline continuous delivery pipeline.

Sample Code

The code for the examples demonstrated in this post is located at https://github.com/stelligent/parameter-store-example. Let us know if you have any comments or questions @stelligent or @TreyMcElhattan

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “one-button everything” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

AWS CodeBuild is Here

At re:Invent 2016, AWS introduced AWS CodeBuild, a new service which compiles source code, runs tests, and produces ready-to-deploy software packages. AWS CodeBuild handles provisioning, management, and scaling of your build servers. You can either use pre-packaged build environments to get started quickly, or create custom build environments using your own build tools. CodeBuild charges by the minute for compute resources, so you aren’t paying for a build environment while it is not in use.

AWS CodeBuild Introduction

Stelligent engineer Harlen Bains has posted An Introduction to AWS CodeBuild to the AWS Partner Network (APN) Blog. In the post he explores the basics of AWS CodeBuild and then demonstrates how to use the service to build a Java application.

Integrating AWS CodeBuild with AWS Developer Tools

In the follow-up post, Deploy to Production using AWS CodeBuild and the AWS Developer Tools Suite, Stelligent CTO and AWS Community Hero Paul Duvall expands on how to integrate and automate the orchestration of CodeBuild with the rest of the AWS Developer Tools suite, including AWS CodeDeploy, AWS CodeCommit, and AWS CodePipeline, using AWS’ provisioning tool, AWS CloudFormation. He goes over the benefits of automating all the actions and stages into a deployment pipeline, and provides an example with a detailed screencast.

In the Future

Look to the Stelligent Blog for announcements, evaluations, and guides on new AWS products.  We are always looking for engineers who love to make things work better, faster, and just get a kick out of automating everything.  If you live and breathe DevOps, continuous delivery, and AWS, we want to hear from you.

Refactoring CD Pipelines – Part 1: Chef-Solo in AWS AutoScaling Groups

We often overlook similarities between new CD Pipeline code we’re writing today and code we’ve already written. In addition, we might sometimes rely on the ‘copy, paste, then modify’ approach to make quick work of CD Pipelines supporting similar application architectures. Despite the short-term gains, what often results is code sprawl and maintenance headaches. This blog post series is aimed at helping to reduce that sprawl and improve code re-use across CD Pipelines.

This mini-series references two real but simple applications, hosted in GitHub, representing the kind of organic growth phenomenon we often encounter and sometimes even cause, despite our best intentions. We’ll refactor them each a couple of times over the course of this series, with this post covering CloudFormation template reuse.

Chef’s a great tool… let’s misuse it!

During the deployment of an AWS AutoScaling Group, we typically rely on the user data section of the Launch Configuration to configure new instances in an automated, repeatable way. A common automation practice is to use chef-solo to accomplish this. We carefully chose chef-solo as a great tool for immutable infrastructure approaches. Both applications have CD Pipelines that leverage it as a scale-time configuration tool by reading a JSON document describing the actions and attributes to be applied.

It’s all roses

It’s a great approach: we sprinkle in a handful or two of CloudFormation parameters to support our Launch Configuration, embed the chef-solo JSON in the user data, and decorate it with references to the CloudFormation parameters. Voila, we’re done! The implementation hardly took any time (probably less than an hour per application if you can find good examples on the internet), and each time we need a new CD Pipeline, we can just stamp out a new CloudFormation template.

Figure 1: Launch Configuration user data (as plain text)


Figure 2: CloudFormation parameters (corresponding to Figure 1)

Well, it’s mostly roses…

Why is it, then, that a few months and a dozen or so CD Pipelines later, we’re spending all our time debugging and doing maintenance on what should be minor tweaks to our application configurations? New configuration parameters take hours of trial and error, and new application pipelines can be copied and pasted into place, but even then it takes hours to scrape out the previous application’s specific needs from its CloudFormation template and replace them.

Fine, it’s got thorns, and they’re slowing us down

Maybe our great solution could have been better? Let’s start with the major pitfall to our original approach: each application we support has its own highly-customized CloudFormation template.

  • lots of application-specific CFN parameters exist solely to shuttle values to the chef-solo JSON
  • fairly convoluted user data, containing an embedded JSON structure and parameter references, is a bear to maintain
  • tracing parameter values from the CD Pipeline, traversing the CFN parameters into the user data… that’ll take some time to debug when it goes awry

One path to code reuse

Since we’re referencing two real GitHub application repositories that demonstrate our current predicament, we’ll continue using those repositories to present our solution via a code branch named Phase1 in each repository. At this point, we know our applications share enough of a common infrastructure approach that they should be sharing that part of the CloudFormation template.

The first part of our solution will be to extract the ‘differences’ from the CloudFormation templates between these two application pipelines. That should leave us with a common skeleton to work with, minus all the Chef specific items and user data, which will allow us to push the CFN template into an S3 bucket to be shared by both application CD pipelines.

The second part will be to add back the required application specificity, but in a way that migrates those differences from the CloudFormation templates to external artifacts stored in S3.

Taking it apart

Our first tangible goal is to make the user data generic enough to support both applications. We start by moving the inline chef-solo JSON to its own plain JSON document in each application’s pipeline folder (/pipelines/config/app-config.json). Later, we’ll modify our CD pipelines so they can make application and deployment-specific versions of that file and upload it to an S3 bucket.
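
The “make a deploy-specific version and upload it” step can be as simple as a few lines of shell in the pipeline. A rough sketch, in which the bucket name, the @APP_VERSION@ token, and the key layout are placeholders rather than exactly what the actual repositories do:

# Render a deploy-specific copy of the Chef attributes and push it to S3
DEPLOY_ID=$(date +%Y%m%d%H%M%S)
sed "s/@APP_VERSION@/${APP_VERSION}/" pipelines/config/app-config.json > "app-config-${DEPLOY_ID}.json"
aws s3 cp "app-config-${DEPLOY_ID}.json" "s3://your-pipeline-bucket/chef-json/${DEPLOY_ID}/app-config.json"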

Figure 3: Before/after comparison (diff) of our Launch Configuration User Data

Left: original user data; Right: updated user data

The second goal is to make a single, vanilla CloudFormation template. Since we orphaned the Chef-only CloudFormation parameters by removing the parts of the user data that referenced them, we can remove them. The resulting template’s focus can now be on meeting the infrastructure concerns of our applications.

Figure 4: Before/after comparison (diff) of the CloudFormation parameters required

Left: original CFN parameters; Right: pared-down parameters


At this point, we have eliminated all the differences between the CloudFormation templates, but now they can’t configure our application! Let’s fix that.

Reassembling it for reuse

Our objective now is to make our Launch Configuration user data truly generic so that we can actually reuse our CloudFormation template across both applications. We do that by scripting it to download the JSON that Chef needs from a specified S3 bucket. At the same time, we enhance the CD Pipelines by scripting them to create application and deploy-specific JSON, and to push that JSON to our S3 bucket.
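
In practice, the generic user data boils down to just a couple of lines. A minimal sketch, assuming the AMI already has the AWS CLI and chef-solo installed, the instance role can read the bucket, and the S3 key (a placeholder here) is passed in through the stack:

#!/bin/bash
# Fetch the deploy-specific Chef attributes and converge with chef-solo
aws s3 cp "s3://your-pipeline-bucket/chef-json/your-deploy-id/app-config.json" /tmp/app-config.json
chef-solo -c /etc/chef/solo.rb -j /tmp/app-config.json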

Figure 5: Chef JSON stored as a deploy-specific object in S3

The S3 key is unique to the deployment.

To stitch these things together we add back one CloudFormation parameter, ChefJsonKey, required by both CD Pipelines; its value at execution time will be the S3 key from which the Chef JSON will be downloaded. (Since our CD Pipeline created that file, it’s primed to provide that parameter value when it executes the CloudFormation stack.)

Two small details are left. First, we give our AutoScaling Group instances the ability to download from that S3 bucket. Second, now that we’re convinced our CloudFormation template is as generic as it needs to be, we upload it to S3 and have our CD Pipelines reference it as an S3 URL.

Figure 6: Our S3 bucket structure ‘replaces’ the /pipeline/config folder 

The templates can be maintained in GitHub.

That’s a wrap

We now have a vanilla CloudFormation template that supports both applications. When an AutoScaling group scales up, the new servers will now download a Chef JSON document from S3 in order to execute chef-solo. We were able to eliminate that template from both application pipelines and still get all the benefits of Chef based server configuration.

See these GitHub repositories referenced throughout the article:

In Part 2 of this series, we’ll continue our refactoring effort with a focus on the CD Pipeline code itself.

Authors: Jeff Dugas and Matt Adams

Interested in working with (and sometimes misusing) configuration management tools like Chef, Puppet, and Ansible? Stelligent is hiring!


DevOps in AWS Radio: Orchestrating Docker containers with AWS ECS, ECR and CodePipeline (Episode 4)

In this episode, Paul Duvall and Brian Jakovich from Stelligent cover recent DevOps in AWS news and speak about the AWS EC2 Container Service (ECS), AWS EC2 Container Registry (ECR), HashiCorp Consul, AWS CodePipeline, and other tools in providing Docker-based solutions for customers. Here are the show notes:

DevOps in AWS News

Episode Topics

  1. Benefits of using ECS, ECR, Docker, etc.
  2. Components of ECS, ECR and Service Discovery
  3. Orchestrating and automating the deployment pipeline using CloudFormation, CodePipeline, Jenkins, etc. 

Blog Posts

  1. Automating ECS: Provisioning in CloudFormation (Part 1)
  2. Automating ECS: Orchestrating in CodePipeline and CloudFormation (Part 2)

About DevOps in AWS Radio

On DevOps in AWS Radio, we cover topics around applying DevOps principles and practices such as Continuous Delivery in the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners into our roundtables and speak with engineers who’ve recently published on our blog and we’ll also be reaching out to the wider DevOps in AWS community to get their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, Containers, Serverless, etc.)
    • Storage (e.g. S3, EBS, etc.)
    • Database and Data (RDS, DynamoDB, etc.)
  • Organizational and Team Structures and Practices
  • Team and Organization Communication and Collaboration
  • Cultural Indicators
  • Version control systems and processes
  • Deployment Pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service Architectures – e.g. Microservices
  • Automation of Build and deployment processes
  • Automation of testing and other verification approaches, tools and systems
  • Automation of security practices and approaches
  • Continuous Feedback systems
  • Many other Topics…

Stelligent Bookclub: “Building Microservices” by Sam Newman

At Stelligent, we put a strong focus on education and so I wanted to share some books that have been popular within our team. Today we explore the world of microservices with “Building Microservices” by Sam Newman.

Microservices are an approach to distributed systems that promotes the use of small, independent services within a software solution. By adopting microservices, teams can achieve better scaling and gain autonomy, which allows them to choose their technologies and iterate independently from other teams.

Microservices are an alternative to the monolithic codebase found in many organizations: a codebase that contains your entire application and where new code piles on at alarming rates. Monoliths become difficult to work with as interdependencies within the code develop.

As a result, a change to one part of the system could unintentionally break a different part, which in turn might lead to hard-to-predict outages. This is where Newman’s argument about the benefits of microservices really comes into play.

  • Reasons to split the monolith
    • Increase pace of change
    • Security
    • Smaller team structure
    • Adopt the proper technology for a problem
    • Remove tangled dependencies
    • Remove dependency on databases for integration
    • Less technical debt

By splitting monoliths at their seams, we can slowly transform a monolithic codebase into a group of microservices. Each service is loosely coupled and highly cohesive; as a result, changes within a microservice do not change its function as seen by other parts of the system. Each element works as a black box where only the inputs and outputs matter. When splitting a monolith, databases pose some of the greatest challenges; as a result, Newman devotes a significant chunk of the book to explaining various useful techniques for reducing these dependencies.

Ways to reduce dependencies

  • A clear, well-documented API
  • Loose coupling and high cohesion within a microservice
  • Enforce standards on how services can interact with each other

Though Newman’s argument for the adoption of microservices is spot-on, his explanation of continuous delivery and scaling microservices is shallow. For anyone who has a background in CD or has read “Continuous Delivery”, these sections do not deliver. For example, he discusses machine images at great length but only lightly brushes over build pipelines. The issue I ran into with scaling microservices is that Newman suggests each microservice should ideally be put on its own instance, where it exists independently of all other services. While that would be nice to have, it is unlikely to happen in a production environment where cost is a consideration. Though he does talk about using traditional virtualization, Vagrant, Linux containers, and Docker to host multiple services on a single host, he remains platform-agnostic and general. As a result he misses the opportunity to talk about services like Amazon ECS, Kubernetes, or Docker Swarm. Combining these technologies with reserved cloud capacity would be a real-world example that I feel would have added a lot to this section.

Overall, Newman’s presentation of microservices is a comprehensive introduction for IT professionals. Some of the concepts covered are basic, but there are many nuggets of insight that make it worth reading. If you are looking to get a good idea of how microservices work, pick it up. If you’re looking to advance your microservice patterns, or have some to suggest, feel free to comment below!

Interested in working someplace that gives all employees an impressive book expense budget? We’re hiring.

Automating and Orchestrating OpsWorks in CloudFormation and CodePipeline

In this post, you’ll learn how to provision, configure, and orchestrate a PHP application with the AWS OpsWorks application management service as part of a deployment pipeline in AWS CodePipeline that’s capable of deploying new infrastructure and code changes when developers commit changes to the AWS CodeCommit version-control repository. This way, team members can release new changes to users whenever they choose to do so: aka, Continuous Delivery.

Recently, AWS announced the integration of OpsWorks into AWS CodePipeline, so I’ll be describing the various components and services that support this solution, including CodePipeline, along with codifying the entire infrastructure in AWS CloudFormation. As part of the announcement, AWS provided a step-by-step tutorial on integrating OpsWorks with CodePipeline, which I used as a reference in automating the entire infrastructure and workflow.

This post describes how to automate all the steps using CloudFormation so that you can click on a Launch Stack button to instantiate all of your infrastructure resources.

OpsWorks

“AWS OpsWorks is a configuration management service that helps you configure and operate applications of all shapes and sizes using Chef. You can define the application’s architecture and the specification of each component including package installation, software configuration and resources such as storage. Start from templates for common technologies like application servers and databases or build your own to perform any task that can be scripted. AWS OpsWorks includes automation to scale your application based on time or load and dynamic configuration to orchestrate changes as your environment scales.” [1]

OpsWorks provides a structured way to automate the operations of your AWS infrastructure and deployments with lifecycle events and the Chef configuration management tool. OpsWorks provides more flexibility than Elastic Beanstalk and more structure and constraints than CloudFormation. There are several key constructs that compose OpsWorks. They are:

  • Stack – An OpsWorks stack is the logical container defining OpsWorks layers, instances, apps and deployments.
  • Layer – There are built-in layers provided by OpsWorks such as Static Web Servers, Rails, Node.js, etc. But, you can also define your own custom layers as well.
  • Instances – These are EC2 instances on which the OpsWorks agent has been installed. There are only certain Linux and Windows operating systems supported by OpsWorks instances.
  • App – “Each application is represented by an app, which specifies the application type and contains the information that is needed to deploy the application from the repository to your instances.” [2]
  • Deployment – Runs Chef recipes to deploy the application onto instances based on the defined layer in the stack.

There are also lifecycle events that get executed for each deployment. Lifecycle events are linked to one or more Chef recipes. The five lifecycle events are setup, configure, deploy, undeploy, and shutdown. Events get triggered based upon certain conditions, and some events can be triggered multiple times. They are described in more detail below, followed by a brief CLI sketch for triggering a deployment:

  • setup – When an instance finishes booting as part of the initial setup
  • configure – When this event is run, it executes on all instances in all layers whenever a new instance comes into service, an EIP changes, or an ELB is attached
  • deploy – When running a deployment on an instance, this event is run
  • undeploy – When an app gets deleted, this event is run
  • shutdown – Before an instance is terminated, this event is run
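
For example, the deploy event is what runs when you trigger a deployment, whether from the console, from CodePipeline, or by hand. A hedged CLI sketch with placeholder stack and app IDs:

aws opsworks create-deployment \
  --stack-id 11111111-2222-3333-4444-555555555555 \
  --app-id aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee \
  --command '{"Name":"deploy"}'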

Solution Architecture and Components

In Figure 2, you see the deployment pipeline and infrastructure architecture for the OpsWorks/CodePipeline integration.

Figure 2 – Deployment Pipeline Architecture for OpsWorks

Both OpsWorks and CodePipeline are defined in a single CloudFormation stack, which is described in more detail later in this post. Here are the key services and tools that make up the solution:

  • OpsWorks – In this stack, code configures operations of your infrastructure using lifecycle events and Chef
  • CodePipeline – Orchestrate all actions in your software delivery process. In this solution, I provision a CodePipeline pipeline with two stages and one action per stage in CloudFormation
  • CloudFormation – Automates the provisioning of all AWS resources. In this solution, I’m using CloudFormation to automate the provisioning for OpsWorks, CodePipeline,  IAM, and S3
  • CodeCommit – A Git repo used to host the sample application code from this solution
  • PHP – In this solution, I leverage AWS’ OpsWorks sample application written in PHP.
  • IAM – The CloudFormation stack defines an IAM Instance Profile and Roles for controlled access to AWS resources
  • EC2 – A single compute instance is launched as part of the configuration of the OpsWorks stack
  • S3 – Hosts the deployment artifacts used by CodePipeline.

Create and Connect to a CodeCommit Repository

While you can store your software code in any version-control repository, in this solution, I’ll be using the AWS CodeCommit Git repository. I’ll be integrating CodeCommit with CodePipeline. I’m basing the code off of the Amazon OpsWorks PHP Simple Demo App located at https://github.com/awslabs/opsworks-demo-php-simple-app.

To create your own CodeCommit repo, follow these instructions: Create and Connect to an AWS CodeCommit Repository. I called my CodeCommit repository opsworks-php-demo. You can call it the same but if you do name it something different, be sure to replace the samples with your repo name.

After you create your CodeCommit repo, copy the contents from the AWS PHP OpsWorks Demo app and commit all of the files.
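
The copy-and-commit step looks roughly like the following; the CodeCommit URL, region, and repository name are placeholders, and it assumes your Git credentials for CodeCommit are already set up.

git clone https://github.com/awslabs/opsworks-demo-php-simple-app
git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/opsworks-php-demo
cp -R opsworks-demo-php-simple-app/* opsworks-php-demo/
cd opsworks-php-demo
git add .
git commit -m "Initial import of the OpsWorks PHP demo app"
git push origin master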

Implementation

I created this sample solution by stitching together several available resources, including the CloudFormation template provided by the step-by-step tutorial from AWS on integrating OpsWorks with CodePipeline and existing templates we use at Stelligent for CodePipeline. Finally, I manually created the pipeline in CodePipeline using the same step-by-step tutorial and then obtained the configuration of the pipeline using the get-pipeline command, as shown in the snippet below.

aws codepipeline get-pipeline --name OpsWorksPipeline > pipeline.json

This section describes the various resources of the CloudFormation solution in greater detail including IAM Instance Profiles and Roles, the OpsWorks resources, and CodePipeline.

Security Group

Here, you see the CloudFormation definition for the security group that the OpsWorks instance uses. The definition restricts the ingress port to 80 so that only web traffic is accepted on the instance.

    "CPOpsDeploySecGroup":{
      "Type":"AWS::EC2::SecurityGroup",
      "Properties":{
        "GroupDescription":"Lets you manage OpsWorks instances deployed to by CodePipeline"
      }
    },
    "CPOpsDeploySecGroupIngressHTTP":{
      "Type":"AWS::EC2::SecurityGroupIngress",
      "Properties":{
        "IpProtocol":"tcp",
        "FromPort":"80",
        "ToPort":"80",
        "CidrIp":"0.0.0.0/0",
        "GroupId":{
          "Fn::GetAtt":[
            "CPOpsDeploySecGroup",
            "GroupId"
          ]
        }
      }
    },

IAM Role

Here, you see the CloudFormation definition for the OpsWorks instance role. In the same CloudFormation template, there’s a definition for an IAM service role and an instance profile. The instance profile refers to OpsWorksInstanceRole defined in the snippet below.

The roles, policies and profiles restrict the service and resources to the essential permissions it needs to perform its functions.

    "OpsWorksInstanceRole":{
      "Type":"AWS::IAM::Role",
      "Properties":{
        "AssumeRolePolicyDocument":{
          "Statement":[
            {
              "Effect":"Allow",
              "Principal":{
                "Service":[
                  {
                    "Fn::FindInMap":[
                      "Region2Principal",
                      {
                        "Ref":"AWS::Region"
                      },
                      "EC2Principal"
                    ]
                  }
                ]
              },
              "Action":[
                "sts:AssumeRole"
              ]
            }
          ]
        },
        "Path":"/",
        "Policies":[
          {
            "PolicyName":"s3-get",
            "PolicyDocument":{
              "Version":"2012-10-17",
              "Statement":[
                {
                  "Effect":"Allow",
                  "Action":[
                    "s3:GetObject"
                  ],
                  "Resource":"*"
                }
              ]
            }
          }
        ]
      }
    },

Stack

The snippet below shows the CloudFormation definition for the OpsWorks Stack. It makes references to the IAM service role and instance profile, using Chef 11.10 for its configuration, and using Amazon Linux 2016.03 for its operating system. This stack is used as the basis for defining the layer, app, instance, and deployment that are described later in this section.

    "MyStack":{
      "Type":"AWS::OpsWorks::Stack",
      "Properties":{
        "Name":{
          "Ref":"AWS::StackName"
        },
        "ServiceRoleArn":{
          "Fn::GetAtt":[
            "OpsWorksServiceRole",
            "Arn"
          ]
        },
        "ConfigurationManager":{
          "Name":"Chef",
          "Version":"11.10"
        },
        "DefaultOs":"Amazon Linux 2016.03",
        "DefaultInstanceProfileArn":{
          "Fn::GetAtt":[
            "OpsWorksInstanceProfile",
            "Arn"
          ]
        }
      }
    },

Layer

The OpsWorks PHP layer is described in the CloudFormation definition below. It references the OpsWorks stack that was previously created in the same template. It also uses the php-app layer type. For a list of valid types, see CreateLayer in the AWS API documentation. This resource also enables auto healing, assigns public IPs and references the previously-created security group.

    "MyLayer":{
      "Type":"AWS::OpsWorks::Layer",
      "Properties":{
        "StackId":{
          "Ref":"MyStack"
        },
        "Name":"MyLayer",
        "Type":"php-app",
        "Shortname":"mylayer",
        "EnableAutoHealing":"true",
        "AutoAssignElasticIps":"false",
        "AutoAssignPublicIps":"true",
        "CustomSecurityGroupIds":[
          {
            "Fn::GetAtt":[
              "CPOpsDeploySecGroup",
              "GroupId"
            ]
          }
        ]
      },
      "DependsOn":[
        "MyStack",
        "CPOpsDeploySecGroup"
      ]
    },

OpsWorks Instance

In the snippet below, you see the CloudFormation definition for the OpsWorks instance. It references the OpsWorks layer and stack that are created in the same template. It defines the instance type as c3.large and refers to the EC2 Key Pair that you will provide as an input parameter when launching the stack.

    "MyInstance":{
      "Type":"AWS::OpsWorks::Instance",
      "Properties":{
        "LayerIds":[
          {
            "Ref":"MyLayer"
          }
        ],
        "StackId":{
          "Ref":"MyStack"
        },
        "InstanceType":"c3.large",
        "SshKeyName":{
          "Ref":"KeyName"
        }
      }
    },

OpsWorks App

In the snippet below, you see the CloudFormation definition for the OpsWorks app. It refers to the previously created OpsWorks stack and uses the current stack name for the app name – making it unique. In the OpsWorks type, I’m using php. For other supported types, see CreateApp.

I’m using other for the AppSource type because my source type is CodeCommit, which isn’t currently an option in OpsWorks. (OpsWorks doesn’t make it obvious in its documentation which types AppSource supports, so I resorted to using the OpsWorks console to determine the possibilities.)

    "MyOpsWorksApp":{
      "Type":"AWS::OpsWorks::App",
      "Properties":{
        "StackId":{
          "Ref":"MyStack"
        },
        "Type":"php",
        "Shortname":"phptestapp",
        "Name":{
          "Ref":"AWS::StackName"
        },
        "AppSource":{
          "Type":"other"
        }
      }
    },
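
Once the stack is up, you can confirm how the app’s source ended up configured by describing it from the CLI. A quick sketch – the stack ID is a placeholder you can copy from the OpsWorks console or from describe-stacks:

aws opsworks describe-apps --stack-id YOUR_OPSWORKS_STACK_ID --query "Apps[].{Name:Name,Type:Type,SourceType:AppSource.Type}"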

CodePipeline

In the snippet below, you see the CodePipeline definition for the Deploy stage and the DeployPHPApp action in CloudFormation. It takes MyApp as an Input Artifact – which is an Output Artifact of the Source stage and action that obtains code assets from CodeCommit.

The action uses a Deploy category and OpsWorks as the Provider. It takes four inputs for the configuration: StackId, AppId, DeploymentType, LayerId. With the exception of DeploymentType, these values are obtained as references from previously created AWS resources in this CloudFormation template.

For more information, see CodePipeline Concepts.

         {
            "Name":"Deploy",
            "Actions":[
              {
                "InputArtifacts":[
                  {
                    "Name":"MyApp"
                  }
                ],
                "Name":"DeployPHPApp",
                "ActionTypeId":{
                  "Category":"Deploy",
                  "Owner":"AWS",
                  "Version":"1",
                  "Provider":"OpsWorks"
                },
                "OutputArtifacts":[

                ],
                "Configuration":{
                  "StackId":{
                    "Ref":"MyStack"
                  },
                  "AppId":{
                    "Ref":"MyOpsWorksApp"
                  },
                  "DeploymentType":"deploy_app",
                  "LayerId":{
                    "Ref":"MyLayer"
                  }
                },
                "RunOrder":1
              }
            ]
          }

Launch the Stack

Click the button below to launch a CloudFormation stack that provisions the OpsWorks environment, including all the resources previously described, such as CodePipeline, OpsWorks, IAM Roles, etc.

When launching the stack, you’ll select a value for the KeyName parameter from the drop down. Optionally, you can enter values for your CodeCommit repository name and branch if they are different from the default values.

opsworks_pipeline_cfn
Figure 3- Parameters for Launching the CloudFormation Stack

You will be charged for your AWS usage – particularly EC2, CodePipeline, and S3.

To launch the same stack from your AWS CLI, type the following (while modifying the same parameter values described above):

aws cloudformation create-stack --stack-name OpsWorksPipelineStack --template-url https://s3.amazonaws.com/stelligent-training-public/public/codepipeline/codepipeline-opsworks.json --region us-east-1 --disable-rollback --capabilities="CAPABILITY_IAM" --parameters  ParameterKey=KeyName,ParameterValue=YOURKEYNAME

Outputs

Once the CloudFormation stack successfully launches, there’s an output for the CodePipelineURL. You can click on this value to open the pipeline, which gets the source assets from CodeCommit and deploys the application to the OpsWorks stack and its associated resources. See the screenshot below.

cfn_opsworks_pipeline_outputs
Figure 4 – CloudFormation Outputs for CodePipeline/OpsWorks stack
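
If you launched the stack from the CLI, you can wait for it to finish and pull the same output without opening the console. A sketch using the stack name from the create-stack command above:

aws cloudformation wait stack-create-complete --stack-name OpsWorksPipelineStack
aws cloudformation describe-stacks --stack-name OpsWorksPipelineStack --query "Stacks[0].Outputs" --output table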

Once the pipeline is complete, you can access the OpsWorks stack and click on the Public IP link for one of the instances to launch the PHP application that was deployed using OpsWorks as shown in Figures 5 and 6 below.

opsworks_public_ip.jpg
Figure 5 – Public IP for the OpsWorks instance


opsworks_app_before.jpg
Figure 6 – OpsWorks PHP app once initially deployed

Commit Changes to CodeCommit

Make some visual changes to the code (e.g. your local CodeCommit version of index.php) and commit these changes to your CodeCommit repository to see the changes get deployed through your pipeline. You perform these actions from the directory where you cloned a local version of your CodeCommit repo (in the directory created by your git clone command). Some example command-line operations are shown below.

git commit -am "change color to rust orange"
git push

Once these changes have been committed, CodePipeline will discover the changes made to your CodeCommit repo and initiate a new pipeline. After the pipeline is successfully completed, follow the same instructions for launching the application from your browser – as shown in Figure 7.

opsworks_app_after.jpg
Figure 7 – Application after code changes committed to CodeCommit, orchestrated by CodePipeline and deployed by OpsWorks
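
You can also check the pipeline’s progress from the CLI rather than the console. A sketch – the pipeline name is generated by the CloudFormation stack, so list the pipelines first and substitute the name you find:

aws codepipeline list-pipelines --query "pipelines[].name"
aws codepipeline get-pipeline-state --name YOUR_GENERATED_PIPELINE_NAME --query "stageStates[].{stage:stageName,status:latestExecution.status}"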

Sample Code

The code for the examples demonstrated in this post is located at https://github.com/stelligent/cloudformation_templates/tree/master/labs/opsworks. Let us know if you have any comments or questions @stelligent or @paulduvall.

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “everything-as-code” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

Useful Resources and References

OpsWorks Reference

Below, I’ve documented some additional information about the OpsWorks service itself that might be useful, including its available integrations, supported versions, and features.

  • OpsWorks supports three application source types: GitHub, S3, and HTTP.
  • You can store up to five versions of an OpsWorks application: the current revision plus four more for rollbacks.
  • When using the create-deployment method, you can target the OpsWorks stack, app, or instance.
  • OpsWorks instances require internet access to reach the OpsWorks endpoint.
  • Chef supports Windows in version 12
  • You cannot mix Windows and Linux instances in an OpsWorks stack
  • To change the default OS in OpsWorks, you need to change the OS and reprovision the instances
  • You cannot change the VPC for an OpsWorks instance
  • You can add ELB, EIPs, Volumes and RDS to an OpsWorks stack
  • OpsWorks autoheals at the layer level
  • You can assign multiple Chef recipes to an OpsWorks layer event
  • The three instance types in OpsWorks are: 24/7, time-based, load-based
  • To initiate a rollback in OpsWorks, you use the create-deployment command (see the CLI sketch after this list)
  • The following commands are available when using OpsWorks create-deployment along with possible use cases:
    • install_dependencies
    • update_dependencies – Patches to the Operating System. Not available after Chef 12.
    • update_custom_cookbooks – pulling down changes in your Chef cookbooks
    • execute_recipes – manually run specific Chef recipes that are defined in your layers
    • configure – service discovery or whenever endpoints change
    • setup
    • deploy
    • rollback
    • start
    • stop
    • restart
    • undeploy
  • To enable the use of multiple custom cookbook repositories in OpsWorks, you can enable custom cookbooks at the stack level and then create a cookbook that has a Berkshelf file with multiple sources. Before Chef 11.10, you couldn’t use multiple cookbook repositories.
  • You can define Chef databags in OpsWorks Users, Stacks, Layers, Apps and Instances
  • OpsWorks Auto Healing is triggered when the OpsWorks agent detects a loss of communication; OpsWorks then stops and restarts the instance. If that fails, manual intervention is required
  • OpsWorks will not auto heal an upgrade to the OS
  • OpsWorks does not auto heal by monitoring performance, only failures.
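
To make the create-deployment bullets above more concrete, here’s a sketch of what a deploy, a rollback, and an execute_recipes run look like from the CLI. The stack, app, instance, and recipe names are placeholders you’d look up with describe-stacks, describe-apps, and describe-instances:

aws opsworks create-deployment --stack-id YOUR_STACK_ID --app-id YOUR_APP_ID --command '{"Name":"deploy"}'
aws opsworks create-deployment --stack-id YOUR_STACK_ID --app-id YOUR_APP_ID --command '{"Name":"rollback"}'
aws opsworks create-deployment --stack-id YOUR_STACK_ID --instance-ids YOUR_INSTANCE_ID --command '{"Name":"execute_recipes","Args":{"recipes":["mycookbook::myrecipe"]}}'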

Acknowledgements

My colleague Casey Lee provided some of the background information on OpsWorks features. I also used several resources from AWS including the PHP sample app and the step-by-step tutorial on the OpsWorks/CodePipeline integration.


Automate CodeCommit and CodePipeline in AWS CloudFormation

Amazon Web Services (AWS) recently announced the integration of AWS CodeCommit with AWS CodePipeline. This means you can now use CodeCommit as a version-control repository as part of your pipelines! AWS describes how to manually configure this integration at Simple Pipeline Walkthrough (AWS CodeCommit Repository).

One of the biggest benefits of using CodeCommit is its seamless integration with other AWS services including IAM.

In this blog post, after describing how to create and connect to a new CodeCommit repository, I’ll explain how to fully automate the provisioning of all of the AWS resources in CloudFormation to achieve Continuous Delivery. This includes CodeCommit, CodePipeline, CodeDeploy and IAM, using a sample application provided by AWS.

Create a CodeCommit Repository

To create a new CodeCommit version-control repository, go to your AWS console and select CodeCommit under Developer Tools. Click the Create new repository button, enter a unique repository name and a description and click Create repository. Next, you will connect to the repository.

create_codecommit_repo
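
If you’d rather script this step than click through the console, the same repository can be created with a single CLI call. A sketch using the repository name this post uses later:

aws codecommit create-repository --repository-name codecommit-demo --repository-description "Demo repository for the CodePipeline integration" --region us-east-1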

Connect to the CodeCommit Repository

There are a couple of ways to connect to an existing CodeCommit repository: one is via HTTPS and the other via SSH. We’re going to focus on connecting via SSH in this post. For additional information, see Connect to an AWS CodeCommit Repository. The instructions in this section are based on CodeCommit Setup from Andrew Puch.

The first thing you’ll do is create an IAM user. You do not need to create a password or access keys at this time. To do this, go to the AWS IAM console, then the Users section and create a new user. Once you’ve created the new user, open a terminal and type the following. NOTE: These instructions are designed for Mac/Linux environments.

cd $HOME/.ssh
ssh-keygen

When prompted after typing ssh-keygen, use a name like codecommit_rsa, leave all other fields blank, and just hit enter.

cat codecommit_rsa.pub

Go to IAM, select the user you previously created, click on the Security Credentials tab and click the Upload SSH public key button. Copy the contents of the codecommit_rsa.pub to the text box in the Upload SSH public key section and save.
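
The upload can also be done from the CLI instead of the console. A sketch, assuming the IAM user you created is named codecommit-user; the SSHPublicKeyId in the response is the value you’ll need in the next step:

aws iam upload-ssh-public-key --user-name codecommit-user --ssh-public-key-body file://$HOME/.ssh/codecommit_rsa.pub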

Go back to your terminal and type:

touch config
chmod 600 config
sudo vim config

The YOUR_SSH_KEY_ID_FROM_IAM placeholder below is the SSH Key Id that was generated when you uploaded the public key. Replace the placeholder (including the brackets) with the SSH Key Id value from the Security Credentials tab of the IAM user you previously created.

Host git-codecommit.*.amazonaws.com
  User [YOUR_SSH_KEY_ID_FROM_IAM]
  IdentityFile ~/.ssh/codecommit_rsa

To verify your SSH connection works, type:

ssh git-codecommit.us-east-1.amazonaws.com

To clone a local copy of the CodeCommit repo you just created, type something like (your region and/or repo name may be different):

git clone ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/codecommit-demo

You’ll be using this local copy of your CodeCommit repo later.

Manually Integrate CodeCommit with CodePipeline

Follow the instructions for manually integrating CodeCommit with CodePipeline as described in Simple Pipeline Walkthrough (AWS CodeCommit Repository). Name the pipeline CodeCommitPipeline.

Create a CloudFormation Template

In this solution, the CloudFormation template is composed of several components. It includes launching EC2 instances and installing the CodeDeploy agent on them, provisioning a CodeDeploy application and DeploymentGroup, creating an IAM role that defines a policy for all resources used in the solution (including CodeCommit), and provisioning the CodePipeline stages and actions – including CodeCommit as the source.

codepipeline_cc_arch

Launch EC2 instances and install CodeDeploy agent

To launch the EC2 instances that are used by CodeDeploy to deploy the sample application, I’m using a CloudFormation template provided by AWS as part of a nested stack. I’m passing in the name of my EC2 Key Pair along with the tag that CodeDeploy will use to run a deployment against EC2 instances labeled with this tag.

    "CodeDeployEC2InstancesStack":{
      "Type":"AWS::CloudFormation::Stack",
      "Properties":{
        "TemplateURL":"http://s3.amazonaws.com/aws-codedeploy-us-east-1/templates/latest/CodeDeploy_SampleCF_Template.json",
        "TimeoutInMinutes":"60",
        "Parameters":{
          "TagValue":{
            "Ref":"TagValue"
          },
          "KeyPairName":{
            "Ref":"EC2KeyPairName"
          }
        }
      }
    },

Create CodeDeploy Application and Deployment Group

In the snippet below, I’m creating a simple CodeDeploy application along with a DeploymentGroup. I’m passing in the location of the S3 bucket and key that hosts the sample application provided by AWS. You can find the sample application at AWS CodeDeploy Resource Kit. In my template, I’m entering aws-codedeploy-us-east-1 for the S3Bucket parameter and samples/latest/SampleApp_Linux.zip for the S3Key parameter. It translates to this location in S3: s3://aws-codedeploy-us-east-1/samples/latest/SampleApp_Linux.zip. Finally, the stack from the CloudFormation template that provisions the EC2 instances for CodeDeploy provides CodeDeployTrustRoleARN as an output that I use in defining the permissions for my CodeDeploy DeploymentGroup.

    "MyApplication":{
      "Type":"AWS::CodeDeploy::Application",
      "DependsOn":"CodeDeployEC2InstancesStack"
    },
    "MyDeploymentGroup":{
      "Type":"AWS::CodeDeploy::DeploymentGroup",
      "DependsOn":"MyApplication",
      "Properties":{
        "ApplicationName":{
          "Ref":"MyApplication"
        },
        "Deployment":{
          "Description":"First time",
          "IgnoreApplicationStopFailures":"true",
          "Revision":{
            "RevisionType":"S3",
            "S3Location":{
              "Bucket":{
                "Ref":"S3Bucket"
              },
              "BundleType":"Zip",
              "Key":{
                "Ref":"S3Key"
              }
            }
          }
        },
        "Ec2TagFilters":[
          {
            "Key":{
              "Ref":"TagKey"
            },
            "Value":{
              "Ref":"TagValue"
            },
            "Type":"KEY_AND_VALUE"
          }
        ],
        "ServiceRoleArn":{
          "Fn::GetAtt":[
            "CodeDeployEC2InstancesStack",
            "Outputs.CodeDeployTrustRoleARN"
          ]
        }
      }
    },
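
Before launching the stack, you can sanity-check that the sample application bundle actually exists at the S3 location you’re passing in. A quick sketch:

aws s3 ls s3://aws-codedeploy-us-east-1/samples/latest/SampleApp_Linux.zip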

Create an IAM role and policy to include CodeCommit

In previous posts on CodePipeline, I’d relied on the fact that, by default, AWS has created an AWS-CodePipeline-Service role in IAM. This is, frankly, a lazy and error-prone way of getting the proper permissions assigned to my AWS resources. It’s error prone because anyone else using the template might have modified the permissions of this built-in IAM role. Because the CodeCommit integration is new, I needed to add the CodeCommit permissions to my IAM policy, so I decided to create a new IAM role on the fly as part of my CloudFormation template. This provides the added benefit of assuming nothing else has been previously created in the user’s AWS account.

Below, I’ve included the relevant IAM policy snippet that provides CodePipeline access to certain CodeCommit actions.

"PolicyName":"codepipeline-service",
  "PolicyDocument":{
    "Statement":[
      {
        "Action":[
          "codecommit:GetBranch",
          "codecommit:GetCommit",
          "codecommit:UploadArchive",
          "codecommit:GetUploadArchiveStatus",
          "codecommit:CancelUploadArchive"
        ],
          "Resource":"*",
          "Effect":"Allow"
      },


Create a pipeline in CodePipeline using CloudFormation

To get the configuration from the pipeline you manually created in the “Manually Integrate CodeCommit with CodePipeline” step from above, go to your AWS CLI and type:

aws codepipeline get-pipeline --name CodeCommitPipeline > pipeline.json

You will use the contents of pipeline.json in a later step.

Below, you can see that I’m creating the initial part of the pipeline in CloudFormation. I’m referring to the IAM role that I created previously in the template. This uses the AWS::CodePipeline::Pipeline CloudFormation resource.

  "CodePipelineStack":{
      "Type":"AWS::CodePipeline::Pipeline",
      "Properties":{
        "RoleArn":{
          "Fn::Join":[
            "",
            [
              "arn:aws:iam::",
              {
                "Ref":"AWS::AccountId"
              },
              ":role/",
              {
                "Ref":"CodePipelineRole"
              }
            ]
          ]
        },

Source Stage for CodeCommit

I got the contents for the stages and actions by copying them from the pipeline.json that I’d created above and pasting them into the CodePipeline resource section of my CloudFormation template. After copying the contents, I updated the template to use title case vs. camel case for some of the attribute names in order to conform to the CloudFormation DSL.

For the CodePipeline Source stage and action of this CloudFormation template, I’m referring to the CodeCommit Provider as my Source category. There are no input artifacts and the OutputArtifacts is defined as MyApp. This is used in all downstream stages as part of each action’s InputArtifacts.

        "Stages":[
          {
            "Name":"Source",
            "Actions":[
              {
                "InputArtifacts":[

                ],
                "Name":"Source",
                "ActionTypeId":{
                  "Category":"Source",
                  "Owner":"AWS",
                  "Version":"1",
                  "Provider":"CodeCommit"
                },
                "OutputArtifacts":[
                  {
                    "Name":"MyApp"
                  }
                ],
                "Configuration":{
                  "BranchName":{
                    "Ref":"RepositoryBranch"
                  },
                  "RepositoryName":{
                    "Ref":"RepositoryName"
                  }
                },
                "RunOrder":1
              }
            ]
          },

Beta Stage for CodeDeploy

The beta stage refers to the CodeDeploy DeploymentGroup and Application that were created in previously-defined resources in this CloudFormation template. In the Configuration for this action, I’m referring to these previously-defined references using the Ref intrinsic function in CloudFormation.

          {
            "Name":"Beta",
            "Actions":[
              {
                "InputArtifacts":[
                  {
                    "Name":"MyApp"
                  }
                ],
                "Name":"DemoFleet",
                "ActionTypeId":{
                  "Category":"Deploy",
                  "Owner":"AWS",
                  "Version":"1",
                  "Provider":"CodeDeploy"
                },
                "OutputArtifacts":[

                ],
                "Configuration":{
                  "ApplicationName":{
                    "Ref":"MyApplication"
                  },
                  "DeploymentGroupName":{
                    "Ref":"MyDeploymentGroup"
                  }
                },
                "RunOrder":1
              }
            ]
          }

Store the Pipeline Artifact

You can store the artifact that’s transitioned through the actions in CodePipeline using any S3 bucket for which you have permissions. The template I provide as part of this sample solution dynamically generates a bucket name that should work for anyone’s AWS account, as it uses the bucket naming convention that AWS CodePipeline defines, which incorporates the user’s current region and account ID.

     ],
        "ArtifactStore":{
          "Type":"S3",
          "Location":{
            "Fn::Join":[
              "",
              [
                "codepipeline-",
                {
                  "Ref":"AWS::Region"
                },
                "-",
                {
                  "Ref":"AWS::AccountId"
                }
              ]
            ]
          }
        }
      }
    }

Launch the Stack

To launch the CloudFormation stack, simply click the button below to launch the template from the CloudFormation console in your AWS account. You’ll need to enter values for the following parameters: Stack name, EC2 KeyPair Name, CodeCommit Repository Name, CodeCommit Repository Branch and, optionally, Tag Value for CodeDeploy EC2 instances. You can keep the default values for the other parameters.

codepipeline_cc_cfn

To launch the same stack from your AWS CLI, type the following (while modifying the same values described above):

aws cloudformation create-stack --stack-name CodePipelineCodeCommitStack \
--template-url https://s3.amazonaws.com/stelligent-training-public/public/codepipeline/codepipeline-codecommit.json \
--region us-east-1 --disable-rollback --capabilities="CAPABILITY_IAM" \
--parameters ParameterKey=EC2KeyPairName,ParameterValue=YOUREC2KEYPAIR \
ParameterKey=TagValue,ParameterValue=EC2TAG4CODEDEPLOY \
ParameterKey=RepositoryName,ParameterValue=codecommit-demo \
ParameterKey=RepositoryBranch,ParameterValue=master

Access the Application

Once the CloudFormation stacks have successfully completed, go to CodeDeploy and select Deployments. For example, if you’re in the us-east-1 region, the URL might look like: https://console.aws.amazon.com/codedeploy/home?region=us-east-1#/deployments. Click on the link for the Deployment Id of the deployment you just launched from CloudFormation. Then, click on the link for the Instance Id. From the EC2 instance, copy the Public DNS value, paste it into your browser, and hit enter. You should see a page like the one below.

codedeploy_deployment
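
The console click-through above can also be scripted. Here is a sketch that walks from the most recent deployment to the instance’s public DNS name; the tag key and value are whatever you passed as the TagKey and TagValue parameters:

DEPLOYMENT_ID=$(aws deploy list-deployments --query "deployments[0]" --output text)
aws deploy get-deployment --deployment-id "$DEPLOYMENT_ID" --query "deploymentInfo.status"
aws ec2 describe-instances --filters "Name=tag:YOUR_TAG_KEY,Values=YOUR_TAG_VALUE" --query "Reservations[].Instances[].PublicDnsName" --output text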

Commit Changes to CodeCommit

Make some visual changes to the code and commit these changes to your CodeCommit repository to see these changes get deployed through your pipeline. You perform these actions from the directory where you cloned a local version of your CodeCommit repo (in the directory created by your git clone command). Some example command-line operations are shown below.

git commit -am "change color to dark orange"
git push

Once these changes have been committed, CodePipeline will discover the changes made to your CodeCommit repo and initiate a new pipeline. After the pipeline is successfully completed, follow the same instructions for launching the application from your browser.

codedeploy_deployment-new
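
If you ever want to re-run the pipeline against the latest commit without pushing another change, you can start it manually from the CLI. A sketch – substitute the pipeline name your stack generated:

aws codepipeline start-pipeline-execution --name YOUR_PIPELINE_NAME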

Troubleshooting

In this section, I describe how to fix some of the common errors that you might experience.

After you’ve included CodeCommit as a Source provider in the Source action of a Source stage and run the pipeline, you might get an “Insufficient permissions” error in CodePipeline – like the one you see below.

codepipeline_codecommit_error

To fix this, make sure that the IAM role you’re using has the proper permissions for the appropriate codecommit:* actions in the IAM policy for the role. In the example, I’ve done this by defining the IAM role in CloudFormation and then assigning this role to the pipeline in CodePipeline.
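
A quick way to verify that the role actually allows the CodeCommit actions CodePipeline needs is to simulate them against the role. A sketch – substitute your account ID and your pipeline role’s name:

aws iam simulate-principal-policy --policy-source-arn arn:aws:iam::YOUR_ACCOUNT_ID:role/YOUR_CODEPIPELINE_ROLE --action-names codecommit:GetBranch codecommit:GetCommit codecommit:UploadArchive --query "EvaluationResults[].{action:EvalActionName,decision:EvalDecision}"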

Another error you might see when launching the example stack occurs when the S3 bucket does not exist or your user does not have the proper permissions on the bucket. Unfortunately, if this happens, all you’ll see is an “Internal Error” in the Source action/stage like the one below.

codepipeline_codecommit_s3_error

Lastly, if you use the default CodeCommit repository name and you’ve not created a repo with the same name or have not matched the CloudFormation parameter value with your CodeCommit repository name, you’ll see an error like this:

codepipeline_codecommit_repo_error

Sample Code

The code for the examples demonstrated in this post is located at https://github.com/stelligent/stelligent_commons/blob/master/cloudformation/codecommit/codepipeline-codecommit.json. Let us know if you have any comments or questions @stelligent or @paulduvall.

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “everything-as-code” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

Resources

Serverless Delivery: Orchestrating the Pipeline (Part 3)

In the second post of this series, we focused on how to get our serverless application running with Lambda, API Gateway and S3. Our application is now able to run on a serverless platform, but we still have not applied the fundamentals of continuous delivery that we talked about in the first part of this series.

In this third and final part of this series on serverless delivery, we will implement a continuous delivery pipeline using serverless technology. Our pipeline will be orchestrated by CodePipeline with actions implemented in Lambda functions. The definition of the CodePipeline resource as well as the Lambda functions that support it are all defined in the same CloudFormation stack that we looked at last week.

Visualize The Goal

To help visualize what we are building, here is a picture of what the final pipeline looks like.

codepipeline

If you’re new to CodePipeline, let’s go over a few important terms:

  • Job – An execution of the pipeline.
  • Stage – A group of actions in the pipeline. Stages never run in parallel to each other and only one job can actively be running in a stage at a time. If a stage is currently running and a new job arrives at the stage it will block until the prior job completes. If multiple new jobs arrive, only the newest will be run while the rest will be dropped.
  • Action – A step to be performed. Actions can be in parallel or series to each other. In this pipeline, all our actions will be implemented by Lambda invocations.
  • Artifact – Each action can declare input and output artifacts that will be stored in an S3 bucket. These are objects that it will either expect to have before it runs, or objects that it will produce and make available after it runs.

The pipeline we have built for our application consists of the following four stages:

  • Source – The source stage has only one action to acquire the source code for later stages.
  • Commit – The commit stage has two actions that are responsible for:
    • Resolving project dependencies
    • Processing (e.g., compile, minify, uglify) the source code
    • Static analysis of the code
    • Unit testing of the application
    • Packaging the application for subsequent deployment
  • Acceptance – The acceptance stage has actions that will:
    • Update the Lambda function from latest source
    • Update S3 bucket with latest artifacts
    • Update API Gateway
    • End-to-end testing of the application
  • Production – The production stage performs the same steps as the Acceptance stage but against the production Lambda, S3 and API Gateway resources

Here is a more detailed picture of the pipeline. We will spend the rest of this post breaking down each step of the pipeline.

pipeline-overview

Start with Source Stage

Diagram Step: 1

The source stage only has one action in it, a 3rd party action provided by GitHub. The action will register a hook with the repo that you provide to kick off a new job for the pipeline whenever code is pushed to the GitHub repository. Additionally, the action will pull the latest code from the branch you specified and zip it up into an object in an S3 bucket for later actions to reference.

{
  "Name": "Source",
  "Actions": [
    {
      "InputArtifacts": [],
      "Name": "Source",
      "ActionTypeId": {
        "Category": "Source",
        "Owner": "ThirdParty",
        "Version": "1",
        "Provider": "GitHub"
      },
      "Configuration": {
        "Owner": "stelligent",
        "Repo": "dromedary",
        "Branch": "serverless",
        "OAuthToken": "XXXXXX",
      },
      "OutputArtifacts": [
        {
          "Name": "SourceOutput"
        }
      ],
      "RunOrder": 1
    }
  ]
}


This approach helps solve a common challenge with source code management using Lambda. Obviously no one wants to upload code through the console, so many end up using CloudFormation to manage their Lambda functions. The challenge is that the CloudFormation Lambda resource expects your code to be zipped in an S3 bucket. This means you either need to use S3 as the “source of truth” for your source code, or have a process to keep it in sync from the real “source of truth”. By building a pipeline, you can keep your source in GitHub and use the next actions that we are about to go through to deploy the Lambda function.
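
For a sense of what that deployment step boils down to, here is a rough CLI sketch of pushing a freshly built zip straight to a function rather than treating S3 as the source of truth. The function name is a placeholder, and dist/lambda.zip matches the artifact path used later in this post:

aws lambda update-function-code --function-name YOUR_APP_FUNCTION --zip-file fileb://dist/lambda.zip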

Build from Commit Stage

Diagram Steps: 2,3,4

The commit stage of the pipeline consists of two actions that are implemented with Lambda invocations. The first action is responsible for resolving the application dependencies via NPM. This can be an expensive operation taking many minutes, and is needed by many downstream actions, so the dependencies are zipped up and become an output artifact of this first action. Here are the details of the action:

  • Download & Unzip – Get the source artifact from S3 and unzip into a temp directory
  • Run NPM – Run npm install in the extracted source folder
  • Zip & Upload – Zip up the source folder with its dependencies in node_modules and upload the artifact to S3

Downloading the input artifact is accomplished with the following code:

var artifact = null;
// Find the matching S3 input artifact in the CodePipeline job details
jobDetails.data.inputArtifacts.forEach(function (a) {
  if (a.name == artifactName && a.location.type == 'S3') {
    artifact = a;
  }
});

if (artifact != null) {
  var params = {
    Bucket: artifact.location.s3Location.bucketName,
    Key: artifact.location.s3Location.objectKey
  };
  // getS3Object is a helper (defined elsewhere in index.js) that downloads the object to destDirectory
  return getS3Object(params, destDirectory);
} else {
  return Promise.reject("Unknown Source Type:" + JSON.stringify(artifactName));
}

Likewise, the output artifact is uploaded with the following:

var artifact = null;
// Find the matching S3 output artifact in the CodePipeline job details
jobDetails.data.outputArtifacts.forEach(function (a) {
  if (a.name == artifactName && a.location.type == 'S3') {
    artifact = a;
  }
});

if (artifact != null) {
  var params = {
    Bucket: artifact.location.s3Location.bucketName,
    Key: artifact.location.s3Location.objectKey
  };
  // putS3Object is a helper (defined elsewhere in index.js) that uploads the zip file to S3
  return putS3Object(params, zipfile);
} else {
  return Promise.reject("Unknown Source Type:" + JSON.stringify(artifactName));
}


Diagram Steps: 5,6,7

The second action in the commit stage is responsible for acquiring the source and dependencies, processing the source code, performing static analysis, running unit tests and packaging the output artifacts. This is accomplished by a Lambda action that invokes a Gulp task on the project. This allows the details of these steps to be defined in Gulp alongside the source code, and to change at a different pace than the pipeline. Here is the CloudFormation for this action:

{
  "InputArtifacts":[
    {
      "Name": "SourceInstalledOutput"
    }
  ],
  "Name":"TestAndPackage",
  "ActionTypeId":{
    "Category":"Invoke",
    "Owner":"AWS",
    "Version":"1",
    "Provider":"Lambda"
  },
  "Configuration":{
    "FunctionName":{
      "Ref":"CodePipelineGulpLambda"
    },
    "UserParameters": "task=package&DistSiteOutput=dist/site.zip&DistLambdaOutput=dist/lambda.zip”

  },
  "OutputArtifacts": [
    {
      "Name": "DistSiteOutput"
    },
    {
      "Name": "DistLambdaOutput"
    }
  ],
  "RunOrder":2
}

Notice the UserParameters setting defined in the resource above. CodePipeline treats it as an opaque string that is passed into the Lambda function. I chose to use a query string format to pass multiple values into the Lambda function. The task parameter defines which Gulp task to run, and the DistSiteOutput and DistLambdaOutput parameters tell the Lambda function where to find the artifacts it should then upload to S3.

For more details on how to implement CodePipeline actions in Lambda, check out the entire source of these functions at index.js or read the post Mocking CodePipeline with Lambda.

Test in Acceptance Stage

Diagram Steps: 8,9,10,11

The Acceptance stage is responsible for acquiring the packaged application artifacts and deploying the application to a test environment and then running a Gulp task to execute the end-to-end tests against that environment. Let’s look at the details of each of these four actions in this stage:

  • Deploy App – The Lambda function for the application is updated with the code from the Commit stage and published as a new version. Additionally, the test alias is moved to this new version (see the CLI sketch after this list). As you may recall from part 2, this alias is used by the test stage of the API Gateway to determine which version of the Lambda function to invoke.

LambdaVersionAliases

  • Deploy API – At this point, this is a no-op. My goal is to have this action use a Swagger file in the source code to update the API Gateway resources, methods, and integrations. This would allow API changes to take effect on each build, whereas the current solution requires an update of the CloudFormation stack outside the pipeline to change the API Gateway.
  • Deploy Site – This action publishes all static content (HTML, CSS, JavaScript and images) to a test S3 bucket. Additionally, it publishes a config.json file to the bucket that the application uses to determine the endpoint for the APIs. Here’s a sample of the file that is created:
{
  "apiBaseurl":"https://rue1bmchye.execute-api.us-west-2.amazonaws.com/test/",
  "version":"20160324-231829"
}
  • End-to-end Testing – This action invokes a Gulp task to run the functional tests. Additionally, it sets an environment variable with the application’s endpoint URL for the Gulp process to test against.
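
Here’s the CLI sketch referenced in the Deploy App bullet above – publishing a new version of the function and pointing the test alias at it. The function name is a placeholder:

VERSION=$(aws lambda publish-version --function-name YOUR_APP_FUNCTION --query "Version" --output text)
aws lambda update-alias --function-name YOUR_APP_FUNCTION --name test --function-version "$VERSION"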

Sidebar: Continuation Token

One challenge of using Lambda for actions is the current 300 second function execution timeout limit. If you have an action that will take longer than 300 seconds (e.g., launching a CloudFormation stack) you can utilize the continuation token. A continuation token is an opaque value that you can return to CodePipeline to indicate that you are not complete with your action yet. CodePipeline will then reinvoke your action, passing in the continuation token you provided in the prior invocation.

The following code uses UserParameters as the maximum number of attempts and continuationToken as the count of attempts made so far. If the action needs more time, it compares maxAttempts with priorAttempts and, if there are still attempts available, it calls into CodePipeline to signal success and passes a continuation token to indicate that the action needs to be reinvoked.

var jobData = event["CodePipeline.job"].data;
var maxAttempts = parseInt(jobData.actionConfiguration.configuration.UserParameters) || 0;
var priorAttempts = parseInt(jobData.continuationToken) || 0;

if(priorAttempts < maxAttempts) {
    console.log("Retrying later.");

    var params = {
        jobId: event["CodePipeline.job"].id,
        continuationToken: (priorAttempts+1).toString()
    };
    codepipeline.putJobSuccessResult(params);

}

Deploy from Production Stage

The Production stage uses the same action definitions from the Acceptance stage to deploy and test the application. The only difference is that it passes in the production S3 bucket name and Lambda ARN to deploy to.

I spent time considering how to do a Blue/Green deployment with this environment. Blue/Green deployment is an approach to reduce deployment risk by launching a duplicate environment for code changes (green environment) and then cutting over traffic from the existing (blue environment) to the new environment. This also affords a safe and quick rollback by switching traffic back to the old (blue) environment.

I looked into doing a DNS based Blue/Green using Route53 Resource Records. This would be accomplished by creating a new API Gateway and Lambda function for each job and using weighted routing to move traffic over from the old API Gateway to the new API Gateway.

However, I’m not convinced this level of complexity would provide much value, because given the way Lambda manages versions and API Gateway manages deployments, you can easily roll changes back very quickly by moving the Lambda version alias. One limitation, though, is that you cannot do a canary deployment with a single API Gateway and Lambda version aliases. I’m curious what your thoughts are on this – ping me on Twitter @Stelligent with #ServerlessDelivery.

Sidebar: Gulp + CloudFormation

You’ll also notice that there is a gulpfile.js in the dromedary-serverless repo to make it easier to launch and manage the CloudFormation stack. Specifically, you can run gulp pipeline:up to launch the stack, gulp pipeline:wait to wait for the pipeline creation to complete, and gulp pipeline:status to see the status of the stack and its outputs. This code has been factored out into its own repo named serverless-pipeline if you’d like to add this type of integration between Gulp and CloudFormation in your own project.

Try it out!

Want to experiment with this stack in your own account? The CloudFormation templates are available for you to run with the link below. First, you’ll want to fork the dromedary repository into your GitHub account. Then you’ll need to provide the following parameters to the stack:

  • Hosted Zone – you’ll need to setup a Route53 hosted zone in your account before launching the stack for the Route53 resource records to be created in.
  • Test DNS Name – a fully qualified hostname (within the Hosted Zone you created) for the test resources (e.g., test.example.com).
  • Production DNS Name – a fully qualified hostname (within the Hosted Zone you created) for the production resources (e.g., prod.example.com).
  • OAuth2 Token – your OAuth2 token (see here for details)
  • User Name – your GitHub username

stack-parameters

Conclusion

In this series, we have addressed how to achieve the fundamentals of continuous delivery in a serverless environment. Let’s review those fundamentals and how we addressed them:

  • Continuous – We used CodePipeline to run a series of actions against every commit to our GitHub repository.
  • Quality – We built static analysis, unit tests and end-to-end tests into our pipeline and ran them for every commit.
  • Automated – The provisioning of the pipeline and the application was done from a single CloudFormation stack.
  • Reproducible – Other than creation of a Route53 Hosted Zone, there were no prerequisites to running this CloudFormation stack.
  • Serverless – All the tools chosen were AWS managed services, including Lambda, API Gateway, S3, Route53 and CodePipeline. No servers were harmed in the making of this series.

Please follow us on Twitter to be informed of future articles on Serverless Delivery and other exciting topics.  Also, keep your eye out for a new book set to be released later this year by Stelligent CTO, Paul Duvall, on Continuous Delivery in AWS – which will contain a chapter on serverless delivery.


Resources