AWS CodeStar – Quickly develop, build, and deploy applications on AWS

AWS CodeStar is a new service that changes the way development teams deliver software on AWS. CodeStar makes setting up applications for continuous delivery easier to manage through integrated authorization and access management, centralized member collaboration, and automated environment provisioning.


With CodeStar you can automatically create entire environments for your application and all of its associated AWS resources. CodeStar is especially well suited to teams starting brand-new applications and projects. Because of its simplicity, development teams can create efficient software workflows that build, test, and release software on AWS much faster than before. Some of the benefits of CodeStar include:

  • Automatic Provisioning of Resources: When you create a project through CodeStar, AWS automatically provisions many of the underlying resources that make up your software’s environment through AWS CloudFormation. These resources can include AWS Elastic Beanstalk environments, Amazon EC2 instances, Amazon S3 buckets, and an AWS CodeCommit repository. One of the most significant resources that CodeStar creates is a continuous delivery pipeline. This pipeline is built using AWS CodePipeline and initially contains two stages: a Source (Commit) stage and an Application (Deploy) stage. If you need additional stages, you can modify the pipeline accordingly (a short SDK sketch for inspecting the generated pipeline follows this list).
  • Pre-built Code Templates: When you begin creating a project with CodeStar, you can choose from many pre-built code templates for applications that run on AWS Elastic Beanstalk, Amazon EC2, or AWS Lambda. These templates come with pre-configured sample applications that are ready to be modified, and you can choose between five programming languages: Ruby, Python, PHP, Java, and JavaScript. After you choose your programming language, you then have the option to choose from three ways of editing your project code: Visual Studio, Eclipse, or command line tools.
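
If you let CodeStar provision the pipeline and later want to script against it, you can inspect its stages with the AWS SDK for Ruby's CodePipeline client. The sketch below is illustrative only; the region and pipeline name are placeholders (CodeStar derives the real pipeline name from your project), so adjust them to your account.

require 'aws-sdk'

# Placeholder name -- find the real one in the CodePipeline console or via list_pipelines.
codepipeline = Aws::CodePipeline::Client.new(region: 'us-east-1')
resp = codepipeline.get_pipeline(name: 'my-codestar-project-Pipeline')

# Print each stage (initially Source and Application) and its actions.
resp.pipeline.stages.each do |stage|
  puts stage.name
  stage.actions.each do |action|
    puts "  #{action.name} (#{action.action_type_id.provider})"
  end
end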

For the remainder of this post I will demonstrate how to set up and build a CodeStar project using a Ruby on Rails template and deploy the sample application on an Amazon EC2 instance.

CodeStar Project with Ruby on Rails

Creating your CodeStar Project

  1. The first thing you will need to do to create your CodeStar project is to log into your AWS console, go to the CodeStar console, and select “Create New Project”.
  2. You will be directed to a page that displays the variety of project templates you can choose from. The applications this service supports include templates ready to deploy on:
    1. AWS Elastic Beanstalk (Automated management of capacity and load balancing), Amazon EC2 with AWS CodeDeploy (Flexible deployment onto any type of instance), and AWS Lambda (Lambda is serverless technology and uses AWS CodeBuild to build your artifacts automatically)
      1. Side note: As of now it is not possible to create a CodeStar project via a CloudFormation template. It is also not possible to start a CodeStar project from an application you have already built or to use GitHub as your code repository. The only way to achieve either is to modify the Source stage of the CodePipeline pipeline after the project has been created.
    2. For my example I am going to choose the “Ruby on Rails Web Application” that will be running on an Amazon EC2 instance.


3. You will then be prompted to enter the name for your project (Project name) and will be able to edit the Project ID as well. You can also choose whether or not to allow AWS CodeStar to administer AWS resources on your behalf by checking or unchecking the box at the bottom of the page. If you chose a template with a project running on EC2 (such as my example), you will also be able to edit the EC2 configuration. This includes choosing:

  1. Your own VPC (you have the choice of being assigned a default VPC and Subnet or choosing an existing one. You cannot create a VPC here.)
    1. Side note: To create a VPC and a subnet, go to the VPC section of the Networking & Content Delivery console and create them there.
  2. Your Subnet to deploy your instance into
  3. The instance type (I chose t2.micro) 


4. Select your Amazon EC2 key pair and select “Create Project”.

5. You will then be able to choose how you want to edit your project code from the following three choices: Visual Studio, Eclipse, or Command line tools. For my example I chose Command line tools. At the bottom of the page you will also find the code repository URL for your project, and you can choose an access method: SSH or HTTPS.

6. The next page is the Connect to your tools page, where you select your local machine’s operating system (macOS, Windows, Linux) and your connection method (HTTPS, SSH).

  1. For an HTTPS connection: If you haven’t done so already, install a Git client on your local machine (there is a link to an installer in Step 1). You will also need to generate Git credentials for your AWS IAM user by clicking the “here” link in Step 2. Once you have completed the first two steps, you can clone your repository onto your local machine by copying the Git command in Step 3 and running it in whatever directory you would like in your terminal. When you run the clone command you will be prompted for a user name and password, which are the Git credentials that you generated for your IAM user. Hit the “Skip” button below to continue on to your management dashboard.
  2. For an SSH connection: If you haven’t done so already, install a Git client on your local machine (there is a link to an installer in Step 1). You will then need to register your SSH public key (for help, follow the link in the instructions in Step 2). Once you have registered your SSH key, go to your ~/.ssh directory in your terminal and create a file named “config”. Add the following lines to this file:
Host git-codecommit.*.amazonaws.com
User Your-IAM-SSH-Key-ID-Here
IdentityFile ~/.ssh/Your-Private-Key-File-Name-Here

Once you have saved the file, you will need to ensure it has the right permissions by running the following command in your ~/.ssh directory:

chmod 600 config

After you have followed these steps you can clone the project repository onto your local machine by copying and pasting the command located in Step 4.

As mentioned earlier in this article, if you selected the box that allows AWS to administer resources on your behalf when you created your CodeStar project, CodeStar creates a CloudFormation stack that automatically deploys the environment and resources for your application. If you chose to create the Ruby on Rails application on an EC2 instance, you can view this CloudFormation stack and all of its resources in the CloudFormation console.
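
If you would rather inspect those resources programmatically than browse the console, the CloudFormation API can list everything the stack created. Here is a minimal sketch using the AWS SDK for Ruby; the region and stack name are placeholders, so substitute the name of the stack CodeStar created in your account.

require 'aws-sdk'

cfn = Aws::CloudFormation::Client.new(region: 'us-east-1')

# Placeholder stack name -- use the name shown in your CloudFormation console.
cfn.describe_stack_resources(stack_name: 'awscodestar-my-rails-project').stack_resources.each do |resource|
  puts "#{resource.resource_type}  #{resource.logical_resource_id}  #{resource.resource_status}"
end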


Pre-configured Management Dashboard

After you have created your CodeStar project you will be given a pre-configured, centralized management dashboard from which you can view a variety of events related to your application project. Things that are viewable in the default dashboard include:

  • Application’s resource activity metrics via AWS CloudWatch
  • Code commits history
  • Your application’s endpoint (in my example, a public EC2 DNS endpoint)
  • A visual of your AWS CodePipeline in which you can see real time progress of your software’s continuous delivery cycle.
  • You also have the option to add the Atlassian Jira Software extension to your dashboard so that you can directly track your application project’s issues and your collaborators’ tasks


From the dashboard you can configure issue tracking, which enables you to integrate the Jira extension into your project for easy tracking. You are also able to set up the team members who will be given access to work on your project and determine which role each of them will have. You just have to pick their IAM user name, choose whether remote access is allowed, and select one of the following roles (a short SDK sketch for adding a team member follows this list):

  • Viewer
  • Contributor
  • Owner
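
Team setup can also be scripted. The sketch below uses the CodeStar AssociateTeamMember API through the AWS SDK for Ruby; the region, project id, and IAM user ARN are placeholders, and this is just one possible way to automate what the console does.

require 'aws-sdk'

codestar = Aws::CodeStar::Client.new(region: 'us-east-1')

# Placeholder project id and IAM user ARN.
codestar.associate_team_member(
  project_id: 'my-rails-project',
  user_arn: 'arn:aws:iam::123456789012:user/jane',
  project_role: 'Contributor',        # Owner | Contributor | Viewer
  remote_access_allowed: false
)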

Start Modifying Your Rails Application

For this example I will open my sample Rails application by going to the application endpoint link on the CodeStar dashboard. The first modification that I will make is to the opening “hello page” of the application, which is the page you see when you first visit the application endpoint.


Assuming that you have cloned the Git repository for your project onto your local machine, you can now start to modify your Rails application and make changes using your own text editor. For this example I am just going to remove the links on the home page (/app/views/hello_page/hello.html.erb) and change some of the wording. After making my slight changes to the “hello page” and saving it, I can go into my Git repository in my local machine’s terminal and type the following commands to push my most recent changes:

git status
  • This will show you what changes have been made to your project


git add app/views/hello_page/hello.html.erb
  • This stages your changes to the hello page so they are ready to be committed
git commit -m "your message about the changes that have been made"
git push
  • This will push your newly modified project into your code pipeline and will automatically trigger the continuous deployment cycle.

When you git push your changes, the CodePipeline visual on your dashboard picks them up and shows the Source and Application stages running in real time.


Once the pipeline has succeeded through the Application stage, refresh the browser page with your application’s endpoint and you will see the changes that have been made to your Rails application.


From here on out you have a full Ruby on Rails application framework running on an Amazon EC2 instance, and you can start to build and modify your own custom application. For more information about what you can do with your new Rails application, refer to the README, which can be accessed by clicking the “Code” box on the left side of your CodeStar dashboard.

Additional Resources

Here are some additional resources you might find useful:

Summary

In this post we talked about how to use the newly added AWS CodeStar service and discovered the benefits that it can offer to a variety of users. You learned about the different types of projects that CodeStar can create and how to easily interact with those projects upon their creation.

Let us know if you have any comments or questions @stelligent or @TreyMcElhattan

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “one-button everything” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

Docker lifecycle automation and testing with Ruby in AWS

My friend and colleague Stephen Goncher and I recently got to spend some real time implementing a continuous integration and continuous delivery pipeline using only Ruby. We were successful in developing a new module in our pipeline gem that handles many of the Docker Engine needs without skimping on testing and code quality. By using the swipely/docker-api gem we were able to write well-tested, DRY pipeline code that can be leveraged by future users of our environment with confidence.

Our environment included the use of Amazon Web Services’ Elastic Container Registry (ECR), which proved to be more challenging to implement than we originally anticipated. The purpose of this post is to help others implement some basic Docker functionality in their pipelines more quickly than we did. In addition, we will showcase some of the techniques we used to test our Docker images.

Quick look at the SDK

It’s important that you make the connection in your mind now that each interface in the docker gem has a corresponding API call in the Docker Engine. With that said, it would be wise to take a quick stroll through the documentation and API reference before writing any code. There are a few methods, such as Docker.authenticate!, that require some advanced configuration that is only vaguely documented, and you’ll need to combine all the sources to piece them together.
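
To make that mapping concrete, here is a minimal sketch (assuming a local Docker daemon on the default socket) showing a few gem calls alongside the Engine API endpoints they wrap:

require 'docker'

Docker.version          # GET /version  -- engine version info
Docker.info             # GET /info     -- daemon-wide details
Docker::Image.all       # GET /images/json
Docker::Container.all   # GET /containers/json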

For those of you who are example-driven learners, be sure to check out the example project on GitHub that we put together to demonstrate these concepts.

Authenticating with ECR

We’re going to save you the trouble of fumbling through the various documentation by providing an example to authenticate with an Amazon ECR repository. The below example assumes you have already created a repository in AWS. You’ll also need to have an instance role attached to the machine you’re executing this snippet from or have your API key and secret configured.

Snippet 1. Using ruby to authenticate with Amazon ECR

require 'aws-sdk-core'
require 'base64'
require 'docker'

# AWS SDK ECR Client
ecr_client = Aws::ECR::Client.new

# Your AWS Account ID
aws_account_id = '1234567890'

# Grab your authentication token from AWS ECR
token = ecr_client.get_authorization_token(
  registry_ids: [aws_account_id]
).authorization_data.first

# Remove the https:// to authenticate
ecr_repo_url = token.proxy_endpoint.gsub('https://', '')

# Authorization token is given as username:password, split it out
user_pass_token = Base64.decode64(token.authorization_token).split(':')

# Call the authenticate method with the options
Docker.authenticate!('username' => user_pass_token.first,
                     'password' => user_pass_token.last,
                     'email' => 'none',
                     'serveraddress' => ecr_repo_url)

Pro Tip #1: The docker-api gem stores the authentication credentials in memory at runtime (see Docker.creds). If you’re using something like a Jenkins CI server to execute your pipeline in separate stages, you’ll need to re-authenticate at each step. Here’s an example of how this can be accomplished.
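
Below is a minimal sketch of one way to do that: wrap the authentication from Snippet 1 in a helper and call it at the start of every stage (the sample project takes a similar approach). The account id is a placeholder.

require 'aws-sdk-core'
require 'base64'
require 'docker'

# Re-run this at the beginning of each pipeline stage, since Docker.creds
# only lives in the memory of the current process.
def authenticate_with_ecr(aws_account_id)
  token = Aws::ECR::Client.new.get_authorization_token(
    registry_ids: [aws_account_id]
  ).authorization_data.first

  username, password = Base64.decode64(token.authorization_token).split(':')

  Docker.authenticate!('username'      => username,
                       'password'      => password,
                       'email'         => 'none',
                       'serveraddress' => token.proxy_endpoint.gsub('https://', ''))
end

authenticate_with_ecr('1234567890')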

Snippet 2. Using ruby to logout

Docker.creds = nil

Pro Tip #2: You’ll need to log out or deauthenticate from ECR in order to pull images from the public/default docker.io repository.
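
For example, the following sketch clears the ECR credentials and then pulls a public base image from Docker Hub (the image tag is arbitrary):

require 'docker'

# Deauthenticate from ECR so pulls resolve against the public registry.
Docker.creds = nil

# Pull a public base image from docker.io.
base_image = Docker::Image.create('fromImage' => 'ruby:2.3')
puts base_image.id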

Build, tag and push

The basic functions of the docker-api gem are pretty straightforward to use with a vanilla configuration. When you tie in a remote repository such as Amazon ECR, there can be some gotchas. Here are some more examples of the various stages of a Docker image you’ll encounter in your pipeline. Now that you’re authenticated, let’s get to doing some real work!

The following snippets assume you’re authenticated already.

Snippet 3. The complete lifecycle of a basic Docker image

# Build our Docker image with a custom context
image = Docker::Image.build_from_dir(
 '/path/to/project',
 { 'dockerfile' => 'ubuntu/Dockerfile' }
)

# Tag our image with the complete endpoint and repo name
image.tag(repo: 'example.ecr.amazonaws.com/stelligent-example',
          tag: 'latest')

# Push only our tag to ECR
image.push(nil, tag: 'latest')

Integration Tests for your Docker Images

Here at Stelligent, we know that the key to software quality is writing tests; it’s part of our core DNA. So it’s no surprise we have a method for writing integration tests for our Docker images. The solution uses Serverspec to launch the intermediate container, execute the tests, and compile the results, while the docker-api gem we’ve been learning builds the image and provides the image id into the context (a sketch of wiring a real image id into the spec follows Snippet 5).

Snippet 5. Writing a serverspec test for a Docker Image

require 'serverspec'

describe 'Dockerfile' do
  before(:all) do
    set :os, family: :debian
    set :backend, :docker
    set :docker_image, '123456789' # image id
  end

  describe file('/usr/local/apache2/htdocs/index.html') do
    it { should exist }
    it { should be_file }
    it { should be_mode 644 }
    it { should contain('Automation for the People') }
  end

  describe port(80) do
    it { should be_listening }
  end
end
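
The image id above is hard-coded for illustration. In practice you might build the image with the docker-api gem and hand its id to Serverspec, for example from a spec helper. A minimal sketch, assuming the Dockerfile sits at the root of the project:

require 'docker'
require 'serverspec'

# Build the image under test and point the Serverspec Docker backend at it.
image = Docker::Image.build_from_dir('.')

set :os, family: :debian
set :backend, :docker
set :docker_image, image.id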

Snippet 6. Executing your test

$ rspec spec/integration/docker/stelligent-example_spec.rb

You’re Done!

Using a tool like swipely/docker-api to drive your automation scripts is a huge step forward in providing fast, reliable feedback in your Docker pipelines compared to writing bash. By doing so, you’re able to write unit and integration tests for your pipeline code to ensure both your infrastructure and your application are well-tested. Not only can you unit test your docker-api implementation, but you can also leverage the AWS SDK’s ability to stub responses and take your testing a step further when integrating with Amazon ECR.

See it in Action

We’ve put together a short (approximately 5 minute) demo of using these tools. Check it out on GitHub and take a test drive through the lifecycle of Docker within AWS.


Working with cool tools like Docker and its open source SDKs is only part of the exciting work we do here at Stelligent. To take your pipeline a step further from here, you should check out mu — a microservices platform that will deploy your newly tested docker containers. You can take that epic experience a step further and become a Stelligentsia because we are hiring innovative and passionate engineers like you!

Microservice testing with mu: injecting quality into the pipeline

mu is a tool that makes it simple and cost-efficient for developers to use AWS as the platform for running their microservices.  In this second post of the blog series focused on the mu tool, we will use mu to incorporate automated testing in the microservice pipeline we built in the first post.  

Why should I care about testing?

Most people, when asked why they want to adopt continuous delivery, will reply that they want to “go faster”.  Although continuous delivery will enable teams to get to production quicker, people often overlook the fact that it will also improve the quality of the software…at the same time.

Martin Fowler, in his post titled ContinuousDelivery, says you’re doing continuous delivery when:

  • Your software is deployable throughout its lifecycle
  • Your team prioritizes keeping the software deployable over working on new features
  • Anybody can get fast, automated feedback on the production readiness of their systems any time somebody makes a change to them
  • You can perform push-button deployments of any version of the software to any environment on demand

It’s important to recognize that the first three points are all about quality.  Only when a team focuses on injecting quality throughout the delivery pipeline can they safely “go faster”.  Fowler’s list of continuous delivery characteristics is helpful in assessing when a team is doing it right.  In contrast, here is a list of indicators that show when a team is doing it wrong:

  • Testing is done late in a sprint or after multiple sprints
  • Developers don’t care about quality…that is left to the QA team
  • A limited number of people are able to execute tests and assess production readiness
  • The majority of tests require manual execution

This problem is only compounded with microservices.  By increasing the number of deployable artifacts by a factor of 10x or 100x, you are increasing the complexity of the system and therefore the volume of testing required.  In short, if you are trying to do microservices and continuous delivery without considering test automation, you are doing it wrong.

Let mu help!

The continuous delivery pipeline that mu creates for your microservice will run automated tests that you define on every execution of the pipeline.  This provides quick feedback to all team members as to the production readiness of your microservice.

mu accomplishes this by adding a step to the pipeline that runs a CodeBuild project to execute your tests.  Any tool that you can run from within CodeBuild can be used to test your microservice.

Let’s demonstrate this by adding automated tests to the microservice pipeline we created in the first post for the banana service.

Define tests with Postman

First, we’ll use Postman to define a test collection for our microservice.  Details on how to use Postman are beyond the scope of this post, but here are a few good videos to learn more:

I started by creating a test collection named “Bananas”.  Then I created requests in the collection for the various REST endpoints I have in my microservice.  The requests use a Postman variable named “BASE_URL” in the URL to allow these tests to be run in other environments.  Finally, I defined tests in the JavaScript DSL that is provided by Postman to validate the results match my expectations.


Once we have our collection created and we confirm that our tests pass locally, we can export the collection as a JSON file and save it in our microservices repository.  For this example, I’ve exported the collection to “src/test/postman/collection.json”.


Run tests with CodeBuild

Now that we have our end to end tests defined in a Postman collection, we can use Newman to run these tests from CodeBuild.  The pipeline that mu creates will check for the existence of a file named buildspec-test.yml and if it exists, will use that for running the tests.  

There are three important aspects of the buildspec:

  • Install the Newman tool via NPM
  • Run our test collection with Newman
  • Keep the results as a pipeline artifact

Here’s the buildspec-test.yml file that was created:

version: 0.1

## Use newman to run a postman collection.  
## The env.json file is created by the pipeline with BASE_URL defined

phases:
  install:
    commands:
      - npm install newman --global
  build:
    commands:
      - newman run -e env.json -r html,json,junit,cli src/test/postman/collection.json

artifacts:
  files:
    - newman/*

The final change that we need to make for mu to run our tests in the pipeline is to specify the image for CodeBuild to use for running our tests.  Since the tool we use for testing requires Node.js, we will choose the appropriate image to have the necessary dependencies available to us.  So our updated mu.yml file now looks like:

environments:
- name: acceptance
- name: production
service:
  name: banana-service
  port: 8080
  pathPatterns:
  - /bananas
  pipeline:
    source:
      provider: GitHub
      repo: myuser/banana-service
    build:
      image: aws/codebuild/java:openjdk-8
    acceptance:
      image: aws/codebuild/eb-nodejs-4.4.6-amazonlinux-64:2.1.3

Apply these updates to our pipeline by running mu:

$ mu pipeline up
Upserting Bucket for CodePipeline
Upserting Pipeline for service 'banana-service' …

Commit and push our changes to cause a new run of the pipeline to occur:

$ git add --all && git commit -m "add test automation" && git push

We can see the results by monitoring the build logs:

$ mu pipeline logs -f
2017/04/19 16:39:33 Running command newman run -e env.json -r html,json,junit,cli src/test/postman/collection.json
2017/04/19 16:39:35 newman
2017/04/19 16:39:35
2017/04/19 16:39:35 Bananas
2017/04/19 16:39:35
2017/04/19 16:39:35  New Banana
2017/04/19 16:39:35   POST http://mu-cl-EcsEl-1K74542METR82-1781937931.us-west-2.elb.amazonaws.com/bananas [200 OK, 354B, 210ms]
2017/04/19 16:39:35     Has picked date
2017/04/19 16:39:35     Not peeled
2017/04/19 16:39:35
2017/04/19 16:39:35  All Bananas
2017/04/19 16:39:35   GET http://mu-cl-EcsEl-1K74542METR82-1781937931.us-west-2.elb.amazonaws.com/bananas [200 OK, 361B, 104ms]
2017/04/19 16:39:35     Status code is 200
2017/04/19 16:39:35     Has bananas
2017/04/19 16:39:35
2017/04/19 16:39:35
2017/04/19 16:39:35                           executed    failed
2017/04/19 16:39:35
2017/04/19 16:39:35               iterations         1         0
2017/04/19 16:39:35
2017/04/19 16:39:35                 requests         2         0
2017/04/19 16:39:35
2017/04/19 16:39:35             test-scripts         2         0
2017/04/19 16:39:35
2017/04/19 16:39:35       prerequest-scripts         0         0
2017/04/19 16:39:35
2017/04/19 16:39:35               assertions         5         0
2017/04/19 16:39:35
2017/04/19 16:39:35  total run duration: 441ms
2017/04/19 16:39:35
2017/04/19 16:39:35  total data received: 331B (approx)
2017/04/19 16:39:35
2017/04/19 16:39:35  average response time: 157ms
2017/04/19 16:39:35

Conclusion

Adopting continuous delivery for microservices demands the injection of test automation into the pipeline.  As demonstrated in this post, mu gives you the freedom to choose whatever test framework you desire and executes those tests for you on every pipeline execution.  Only once your pipeline is doing the work of assessing the microservice’s readiness for production can you achieve the goal of delivering faster while also increasing quality.

In the upcoming posts in this blog series, we will look into:

  • Custom Resources –  create custom resources like DynamoDB with mu during our microservice deployment
  • Service Discovery – use mu to enable service discovery via Consul to allow for inter-service communication
  • Additional Use Cases – deploy applications other than microservices via mu, like a wordpress stack

Until then, head over to stelligent/mu on GitHub and get started!

Additional Resources

Did you find this post interesting? Are you passionate about working with the latest AWS technologies? If so, Stelligent is hiring and we would love to hear from you!

Introducing mu: a tool for managing your microservices in AWS

mu is a tool that Stelligent has created to make it simple and cost-efficient for developers to use AWS as the platform for running their microservices.  In this first post of the blog series focused on the mu tool, we will be introducing the motivation for the tool and demonstrating the deployment of a microservice with it.  

Why microservices?

The architectural pattern of decomposing an application into microservices has proven extremely effective at increasing an organization’s ability to deliver software faster.  This is due to the fact that microservices are independently deployable components that are decoupled from other components and highly cohesive around a single business capability.  Those attributes of a microservice yield smaller team sizes that are able to operate with a high level of autonomy to deliver what the business wants at the pace the market demands.

What’s the catch?

When teams begin their journey with microservices, they usually face cost duplication on two fronts:  infrastructure and re-engineering. The first duplication cost is found in the “infrastructure overhead” used to support the microservice deployment.  For example, if you are deploying your microservices on AWS EC2 instances, then for each microservice, you need a cluster of EC2 instances to ensure adequate capacity and tolerance to failures.  If a single microservice requires 12 t2.small instances to meet capacity requirements and we want to be able to survive an outage in 1 out of 4 availability zones, then we would need to run 16 instances total, 4 per availability zone.  This leaves an overhead cost of 4 t2.small instances.  Then multiply this cost by the number of microservices for a given application and it is easy to see that the overhead cost of microservices deployed in this manner can add up quickly.
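
A quick sketch of that arithmetic, using the numbers from the example above, shows how the overhead falls out:

# Capacity math for one microservice (numbers from the example above).
required_instances = 12                       # instances needed to meet capacity
availability_zones = 4
per_az   = (required_instances.to_f / (availability_zones - 1)).ceil  # survive losing one AZ
total    = per_az * availability_zones        # => 16 instances running
overhead = total - required_instances         # => 4 instances of pure overhead
puts "total=#{total} overhead=#{overhead}"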

Containers to the rescue!

An approach to addressing this challenge of overhead costs is to use containers for deploying microservices.  Each microservice would be deployed as a series of containers to a cluster of hosts that is shared by all microservices.  This allows for greater density of microservices on EC2 instances and allows the overhead to be shared by all microservices.  Amazon ECS (EC2 Container Service) provides an excellent platform for deploying microservices as containers.  ECS leverages many AWS services to provide a robust container management solution.  Additionally, a developer can use tools like CodeBuild and CodePipeline to create continuous delivery pipelines for their microservices.

That sounds complicated…

This approach leads to the second duplication cost of microservices: the cost of “re-engineering”.  There is a significant learning curve for developers to learn how to use all these different AWS resources to deploy their microservices in an efficient manner.  If each team is using their autonomy to engineer a platform on AWS for their microservices, then a significant level of engineering effort is being duplicated.  This duplication not only causes additional engineering costs, but also impedes a team’s ability to deliver the differentiating business capabilities that they were commissioned to do in the first place.

Let mu help!

To address these challenges, mu was created to simplify the declaration and administration of the AWS resources necessary to support microservices.  mu is a tool that a developer uses from their workstation to deploy their microservices to AWS quickly and efficiently as containers.  It codifies best practices for microservices, containers and continuous delivery pipelines into the AWS resources it creates on your behalf.  It does this from a simple CLI application that can be installed on the developer’s workstation in seconds.  Similar to how the Serverless Framework improved the developer experience of Lambda and API Gateway, this tool makes it easier for developers to use ECS as a microservices platform.

Additionally, mu does not require any servers, databases or other AWS resources to support itself.  All state information is managed via CloudFormation stacks.  It will only create resources (via CloudFormation) necessary to run your microservices.  This means at any point you can stop using mu and continue to manage the AWS resources that it created via AWS tools such as the CLI or the console.

Core components

The mu tool consists of three main components:

  • Environments – an environment includes a shared network (VPC) and cluster of hosts (ECS and EC2 instances) necessary to run microservices as containers.  The environments include the ability to automatically scale out or scale in based on resource requirements across all the microservices that are deployed to it.  Many environments can exist (e.g. development, staging, production)
  • Services – a microservice that will be deployed to a given environment (or environments) as a set of containers.
  • Pipeline – a continuous delivery pipeline that will manage the building, testing, and deploying of a microservice in the various environments.


Installing and configuring mu

First let’s install mu:

$ curl -s http://getmu.io/install.sh | sh

If you’re appalled at the idea of curl | bash installers, then you can always just download the latest version directly.

mu will use the same mechanism as aws-cli to authenticate with the AWS services.  If you haven’t configured your AWS credentials yet, the easiest way to configure them is to install the aws-cli and then follow the aws configure instructions:

$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json

Setup your microservice

In order for mu to setup a continuous delivery pipeline for your microservice, you’ll need to run mu from within a git repo.  For this demo, we’ll be using the stelligent/banana-service repo for our microservice.  If you want to follow along and try this on your own, you’ll want to fork the repo and clone your fork.

Let’s begin with cloning the microservice repo:

$ git clone git@github.com:myuser/banana-service.git
$ cd banana-service

Next, we will initialize mu configuration for our microservice:

$ mu init --env
Writing config to '/Users/casey.lee/Dev/mu/banana-service/mu.yml'
Writing buildspec to '/Users/casey.lee/Dev/mu/banana-service/buildspec.yml'

We need to update the mu.yml that was generated with the URL paths that we want to route to this microservice and the CodeBuild image to use:

environments:
- name: acceptance
- name: production
service:
  name: banana-service
  port: 8080
  pathPatterns:
  - /bananas
  pipeline:
    source:
      provider: GitHub
      repo: myuser/banana-service
    build:
      image: aws/codebuild/java:openjdk-8

Next, we need to update the generated buildspec.yml to include the gradle build command:

version: 0.1
phases:
  build:
    commands:
      - gradle build
artifacts:
  files:
    - '**/*'

Finally, commit and push our changes:

$ git add --all && git commit -m "mu init" && git push

Create the pipeline

Make sure you have a GitHub token with repo and admin:repo_hook scopes to provide to the pipeline in order to integrate with your GitHub repo.  Then you can create the pipeline:

$ mu pipeline up
Upserting Bucket for CodePipeline
Upserting Pipeline for service 'banana-service' ...
  GitHub token: XXXXXXXXXXXXXXX

Now that the pipeline is created, it will build and deploy for every commit to your git repo.  You can monitor the status of the pipeline as it builds and deploys the microservice:

$ mu svc show

Pipeline URL:   https://console.aws.amazon.com/codepipeline/home?region=us-west-2#/view/mu-pipeline-banana-service-Pipeline-1B3A94CZR6WH
+------------+----------+------------------------------------------+-------------+---------------------+
|   STAGE    |  ACTION  |                 REVISION                 |   STATUS    |     LAST UPDATE     |
+------------+----------+------------------------------------------+-------------+---------------------+
| Source     | Source   | 1f1b09f0bbc3f42170b8d32c68baf683f1e3f801 | Succeeded   | 2017-04-07 15:12:35 |
| Build      | Artifact |                                        - | Succeeded   | 2017-04-07 15:14:49 |
| Build      | Image    |                                        - | Succeeded   | 2017-04-07 15:19:02 |
| Acceptance | Deploy   |                                        - | InProgress  | 2017-04-07 15:19:07 |
| Acceptance | Test     |                                        - | -           |                   - |
| Production | Approve  |                                        - | -           |                   - |
| Production | Deploy   |                                        - | -           |                   - |
| Production | Test     |                                        - | -           |                   - |
+------------+----------+------------------------------------------+-------------+---------------------+

Deployments:
+-------------+-------+-------+--------+-------------+------------+
| ENVIRONMENT | STACK | IMAGE | STATUS | LAST UPDATE | MU VERSION |
+-------------+-------+-------+--------+-------------+------------+
+-------------+-------+-------+--------+-------------+------------+

You can also monitor the build logs:

$ mu pipeline logs -f
[Container] 2017/04/07 22:25:43 Running command mu -c mu.yml svc deploy acceptance 
[Container] 2017/04/07 22:25:43 Upsert repo for service 'banana-service' 
[Container] 2017/04/07 22:25:43   No changes for stack 'mu-repo-banana-service' 
[Container] 2017/04/07 22:25:43 Deploying service 'banana-service' to 'dev' from '324320755747.dkr.ecr.us-west-2.amazonaws.com/banana-service:1f1b09f' 

Once the pipeline has completed deployment of the service, you can view the logs from the service:

$ mu service logs -f acceptance                                                                                                                                                                         
  .   ____          _          __ _ _
 /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| | ) ) ) )
  ' | ____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/

:: Spring Boot ::        (v1.4.0.RELEASE) 
2017-04-07 22:30:08.788  INFO 5 --- [           main] com.stelligent.BananaApplication         : Starting BananaApplication on 6a4d5544d9de with PID 5 (/app.jar started by root in /) 
2017-04-07 22:30:08.824  INFO 5 --- [           main] com.stelligent.BananaApplication         : No active profile set, falling back to default profiles: default 
2017-04-07 22:30:09.342  INFO 5 --- [           main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@108c4c35: startup date [Fri Apr 07 22:30:09 UTC 2017]; root of context hierarchy 
2017-04-07 22:30:09.768  INFO 5 --- [           main] com.stelligent.BananaApplication         : Starting BananaApplication on 7818361f6f45 with PID 5 (/app.jar started by root in /) 

Testing the service

Finally, we can get the information about the ELB endpoint in the acceptance environment to test the service:

$ mu env show acceptance                                                                                                                                                                        

Environment:    acceptance
Cluster Stack:  mu-cluster-dev (UPDATE_COMPLETE)
VPC Stack:      mu-vpc-dev (UPDATE_COMPLETE)
Bastion Host:
Base URL:       http://mu-cl-EcsEl-1K74542METR82-1781937931.us-west-2.elb.amazonaws.com
Container Instances:
+---------------------+----------+--------------+------------+-----------+--------+---------+-----------+-----------+
|    EC2 INSTANCE     |   TYPE   |     AMI      |     AZ     | CONNECTED | STATUS | # TASKS | CPU AVAIL | MEM AVAIL |
+---------------------+----------+--------------+------------+-----------+--------+---------+-----------+-----------+
| i-093b788b4f39dd14b | t2.micro | ami-62d35c02 | us-west-2a | true      | ACTIVE |       3 |       604 |       139 |
+---------------------+----------+--------------+------------+-----------+--------+---------+-----------+-----------+

Services:
+----------------+---------------------------------------------------------------------+------------------+---------------------+
|    SERVICE     |                                IMAGE                                |      STATUS      |     LAST UPDATE     |
+----------------+---------------------------------------------------------------------+------------------+---------------------+
| banana-service | 324320755747.dkr.ecr.us-west-2.amazonaws.com/banana-service:1f1b09f | CREATE_COMPLETE  | 2017-04-07 15:25:43 |
+----------------+---------------------------------------------------------------------+------------------+---------------------+


$ curl -s http://mu-cl-EcsEl-1K74542METR82-1781937931.us-west-2.elb.amazonaws.com/bananas | jq

[
  {
    "pickedAt": "2017-04-10T10:34:27.911",
    "peeled": false,
    "links": [
      {
        "rel": "self",
        "href": "http://mu-cl-EcsEl-1K74542METR82-1781937931.us-west-2.elb.amazonaws.com/bananas/1"
      }
    ]
  }
]

Cleanup

To cleanup the resources that mu created, run the following commands:

$ mu pipeline term
$ mu env term acceptance
$ mu env term production

Conclusion

As you can see, mu addresses infrastructure and engineering overhead costs associated with microservices.  It makes deployment of microservices via containers simple and cost-efficient.  Additionally, it ensures the deployments are repeatable and non-dramatic by utilizing a continuous delivery pipeline for orchestrating the flow of software changes into production.

In the upcoming posts in this blog series, we will look into:

  • Test Automation –  add test automation to the continuous delivery pipeline with mu
  • Custom Resources –  create custom resources like DynamoDB with mu during our microservice deployment
  • Service Discovery – use mu to enable service discovery via Consul to allow for inter-service communication
  • Additional Use Cases – deploy applications other than microservices via mu, like a wordpress stack

Until then, head over to stelligent/mu on GitHub and get started.  Keep in touch with us in our Gitter room and share your feedback!

Additional Resources

Did you find this post interesting? Are you passionate about working with the latest AWS technologies? If so, Stelligent is hiring and we would love to hear from you!

Using Parameter Store with AWS CodePipeline

Systems Manager Parameter Store is a managed service (part of AWS EC2 Systems Manager (SSM)) that provides a convenient way to efficiently and securely get and set commonly used configuration data across multiple resources in your software delivery lifecycle.


In this post, we will be focusing on the basic usage of Parameter Store and how to effectively use it as part of a continuous delivery pipeline using AWS CodePipeline. The following describes some of the capabilities of Parameter Store and the resources with which they can be used:

  • Managed Service: Parameter Store is managed by AWS. This means that you won’t have to put in the engineering work to set up something like Vault, ZooKeeper, etc. just to store the configuration that your application or service needs.
  • Access Controls: Through the use of AWS Identity and Access Management (IAM), access to Parameter Store can be limited by enabling or restricting access to the service itself, or by enabling or restricting access to particular parameters.
  • Encryption: Parameter Store also gives you the ability to encrypt parameters using the AWS Key Management Service (KMS). When creating a parameter, you can specify that it be encrypted with a KMS key (a minimal SDK sketch follows this list).
  • Audit: All calls to Parameter Store are tracked and recorded in AWS CloudTrail so they can be audited.
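
As a minimal sketch of the encryption capability, here is how you could store a SecureString with the AWS SDK for Ruby (the same SDK the pipeline’s Ruby script uses later in this post). The region, name, and value are examples.

require 'aws-sdk'

ssm = Aws::SSM::Client.new(region: 'us-east-1')

# Stores an encrypted SecureString. Pass key_id: to use a specific KMS key;
# omitting it uses the account's default SSM key.
ssm.put_parameter(
  name: 'Password',
  value: 'Password123$',
  type: 'SecureString',
  overwrite: true
)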

At the end of this post, you will be able to launch an example solution via AWS CloudFormation.

Working with Parameter Store

Prerequisites

In order to follow the examples below, you’ll need to have the AWS CLI setup on your local workstation. You can find a guide to install the AWS CLI here.

Creating a Parameter in the Parameter Store

To manually create a parameter in the Parameter Store, there are a few easy steps to follow:

  1. Sign into your AWS account and go to the EC2 console.
  2. Under the Systems Manager Shared Resources section click on Parameter Store.
  3. Click Get Started Now or Create Parameter and input the following information:
    1. Name: The name that you want the parameter to be called
    2. Description(optional): A description of what the parameter does or contains
    3. Type: You can choose either a String, String List, or Secure String
  4. Click Create Parameter and it will bring you to the Parameter Store console where you can see your newly created parameter

To create a parameter using the AWS CLI, here are examples of creating a String, SecureString, and String List:

String:

 aws ssm put-parameter --name "HostedZoneName" --type "String" --value "stelligent.com."

StringList:

aws ssm put-parameter --name "HostedZoneNames" --type "StringList" --value "stelligent.com.,google.com.,amazon.com."

SecureString:

 aws ssm put-parameter --name "Password" --type "SecureString" --value "Password123$"

After running these commands, the three parameters will appear in your Parameter Store console.


Getting Parameter Values using the AWS CLI

To get a String, StringList, or SecureString parameter from the Parameter Store using the AWS CLI, use the following syntax in your terminal:

String:

aws ssm get-parameters --names "HostedZoneName"

The output in your terminal would look like this:

{
    "InvalidParameters": [], 
    "Parameters": [
        {
            "Type": "String", 
            "Name": "HostedZoneName", 
            "Value": "stelligent.com."
        }
    ]
}

StringList:

 aws ssm get-parameters --names "HostedZoneNames"

The output in your terminal would look like this:

{
    "InvalidParameters": [], 
    "Parameters": [
        {
            "Type": "StringList", 
            "Name": "HostedZoneNames", 
            "Value": "stelligent.com.,google.com.,amazon.com."
        }
    ]
}

SecureString:

aws ssm get-parameters --names "Password"

The output in your terminal would look like this (the value of the parameter is encrypted):

{
    "InvalidParameters": [], 
    "Parameters": [
        {
            "Type": "SecureString", 
            "Name": "Password", 
            "Value": "AQECAHicQXIA+CERB7LyH8+YXXUK1vqiI87oM0Wq7kgMCmGqUQAAAGowaAYJKoZIhvcNAQcGoFswWQIBADBUBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDE0kvmQLY6Ertt5BGwIBEIAnlfTl1XxzRwUzkFCBYn8P0lJ6dOdjPNQNbYgjD1+KTk/SlNJznvrF"
        }
    ]
}


Deleting a Parameter from the Parameter Store

To delete a parameter from the Parameter Store manually, you must use the following steps:

  1. Sign into your AWS account and go to the EC2 console.
  2. Under the Systems Manager Shared Resources section click on the Parameter Store tab.
  3. Select the parameter that you wish to delete
  4. Click the Actions button and select Delete Parameter from the menu

To delete a parameter using the AWS CLI, use the following syntax in your terminal (this works for String, StringList, and SecureString):

aws ssm delete-parameter --name "HostedZoneName"
aws ssm delete-parameter --name "HostedZoneNames"
aws ssm delete-parameter --name "Password"
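
The same deletions can be scripted with the AWS SDK for Ruby. This is a small sketch mirroring the CLI commands above; the region is a placeholder.

require 'aws-sdk'

ssm = Aws::SSM::Client.new(region: 'us-east-1')

# Works for String, StringList, and SecureString parameters alike.
%w[HostedZoneName HostedZoneNames Password].each do |name|
  ssm.delete_parameter(name: name)
end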

Using Parameter Store in AWS CodePipeline

Parameter Store can be very useful when constructing and running a deployment pipeline. Parameter Store can be used alongside a simple token-replacement script to dynamically generate configuration files without having to modify those files manually. This is useful because you can pass frequently used pieces of configuration data through a continuous delivery process easily and efficiently.


In this example, we have a deployment pipeline modeled via AWS CodePipeline that consists of two stages: a Source stage and a Build stage.

First, let’s take a look at the Source stage.

MyCodePipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      ArtifactStore:
        Location: !Ref S3Bucket
        Type: S3
      RoleArn: !GetAtt [CodePipelineRole, Arn]
      Stages:
        - Name: Source
          Actions:
          - Name: GitHubSource
            ActionTypeId:
              Category: Source
              Owner: ThirdParty
              Provider: GitHub
              Version: 1
            OutputArtifacts:
              - Name: OutputArtifact
            Configuration:
              Owner: stelligent
              Repo: parameter-store-example
              Branch: master
              OAuthToken: !Ref GitHubToken

As part of the Source stage, the pipeline gets its source assets from the GitHub repository, which contains the configuration file that will be modified along with a Ruby script that gets the parameters from the Parameter Store and replaces the variable tokens inside the configuration file.

After the Source stage completes, there’s a Build stage, where we’ll be doing all of the actual work to modify our configuration file.

The Build stage uses the CodeBuild project (defined as the ConfigFileBuild action) to run the Ruby script that will modify the configuration file and replace the variable tokens inside of it with the requested parameters from the Parameter Store.

  ConfigFileBuild:
    Type: AWS::CodeBuild::Project
    Properties:
      Name: !Ref AWS::StackName
      Description: Changes sample configuration file
      ServiceRole: !GetAtt CodeBuildRole.Arn
      Artifacts:
        Type: CODEPIPELINE
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_LARGE
        Image: aws/codebuild/eb-ruby-2.3-amazonlinux-64:2.1.6
      Source:
        Type: CODEPIPELINE
        BuildSpec: |
          version: 0.1
          phases:
            pre_build:
              commands:
                - gem install aws-sdk
            build:
              commands:
                - ruby sample_ruby_ssm.rb
          artifacts:
            files:
              - '**/*'

The CodeBuild project contains the buildspec that actually runs the  Ruby script that will be making the configuration changes (sample_ruby_ssm.rb).


Here is what the Ruby script looks like:

require 'aws-sdk'

client = Aws::SSM::Client.new(region: 'us-east-1')
resp = client.get_parameters({
  names: ["HostedZoneName", "Password"], # required
  with_decryption: true,
})

hostedzonename = resp.parameters[0].value
password = resp.parameters[1].value

file_names = ['sample_ssm_config.json']

file_names.each do |file_name|
  text = File.read(file_name)

  # Display text for usability
  puts text

  # Substitute Variables
  new_contents = text.gsub(/HOSTEDZONE/, hostedzonename)
  new_contents = new_contents.gsub(/PASSWORD/, password)

  # To write changes to the file, use:
  File.open(file_name, "w") {|file| file.puts new_contents.to_s }
end

Here is what the configuration file with the variable tokens (HOSTEDZONE, PASSWORD) looks like before it gets modified:

{
  "Parameters" : {
    "HostedZoneName" : "HOSTEDZONE",
    "Password" : "PASSWORD"
  }
}

Here is what the configuration file would consist of after the Ruby script pulls the requested parameters from the Parameter Store and replaces the variable tokens (HOSTEDZONE, PASSWORD). The Password parameter is being decrypted through the ruby script in this process.

{
  "Parameters" : {
    "HostedZoneName" : "stelligent.com.",
    "Password" : "Password123$"
  }
}

IMPORTANT NOTE: In the example above you can see that the “Password” parameter is being returned in plain text (Password123$). This happens because when the Ruby script runs, it returns the secure string parameter with its decrypted value (with_decryption: true). The purpose of showing the example this way is purely to illustrate what returning multiple parameters into a configuration file looks like. In a real-world situation you would never want a password returned in plain text, because that presents security issues and is bad practice in general. To return the “Password” parameter value in its encrypted form, simply modify the 6th line of the Ruby script and change “with_decryption: true” to “with_decryption: false”. The modified configuration file would then contain the KMS-encrypted ciphertext for the “Password” value instead of the plain text.
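
For reference, here is that single change in context; everything else in the script stays the same.

require 'aws-sdk'

client = Aws::SSM::Client.new(region: 'us-east-1')

# Same call as before, but the SecureString now comes back as KMS ciphertext.
resp = client.get_parameters({
  names: ["HostedZoneName", "Password"],
  with_decryption: false,
})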


Launch the Solution via CloudFormation

To run this deployment pipeline and see Parameter Store in action, you can click the “Launch Stack” button below which will take you directly to the CloudFormation console within your AWS account and load the CloudFormation template. Walk through the CloudFormation wizard to launch the stack. 

In order to be able to execute this pipeline you must have the following:

  • The AWS CLI already installed on your local workstation. You can find a guide to install the AWS CLI here
  • A generated GitHub Oauth token for your GitHub user. Instructions on how to generate an Oauth token can be found here
  • In order for the Ruby script that is part of this pipeline process to run you must create these two parameters in your Parameter Store:
    1. HostedZoneName
    2. Password
aws ssm put-parameter --name "HostedZoneName" --type "String" --value "stelligent.com."
aws ssm put-parameter --name "Password" --type "SecureString" --value "Password123$"

As you begin to launch the pipeline in CloudFormation (Launch Stack button is located below), you will be prompted to enter this one parameter:

  • GitHubToken (Your generated GitHub Oauth token)

Once you have passed in this initial parameter, you can begin to launch the pipeline that will make use of the Parameter Store.

NOTE: You will be charged for your CodePipeline and CodeBuild usage.

Once the stack is CREATE_COMPLETE, click on the Outputs tab and then the value for the CodePipelineUrl output to view the pipeline in CodePipeline.

Additional Resources

Here are some additional resources you might find useful:

Summary

In this post, you learned how to use the EC2 Systems Manager Parameter Store and some of its features. You learned how to create, delete, get, and set parameters manually as well as through the use of the AWS CLI. You also learned how to use the Parameter Store in a practical situation by incorporating it in the process of setting configuration data that is used as part of a CodePipeline continuous delivery pipeline.

Sample Code

The code for the examples demonstrated in this post is located at https://github.com/stelligent/parameter-store-example. Let us know if you have any comments or questions @stelligent or @TreyMcElhattan

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “one-button everything” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

AWS CodeBuild is Here

At re:Invent 2016 AWS introduced AWS CodeBuild, a new service that compiles source code, runs tests, and produces ready-to-deploy software packages.  AWS CodeBuild handles provisioning, management, and scaling of your build servers.  You can either use pre-packaged build environments to get started quickly, or create custom build environments that use your own build tools.  CodeBuild charges by the minute for compute resources, so you aren’t paying for a build environment while it is not in use.

AWS CodeBuild Introduction

Stelligent engineer Harlen Bains has posted An Introduction to AWS CodeBuild to the AWS Partner Network (APN) Blog.  In the post he explores the basics of AWS CodeBuild and then demonstrates how to use the service to build a Java application.

Integrating AWS CodeBuild with AWS Developer Tools

In the follow-up post, Deploy to Production using AWS CodeBuild and the AWS Developer Tools Suite, Stelligent CTO and AWS Community Hero Paul Duvall expands on how to integrate and automate the orchestration of CodeBuild with the rest of the AWS Developer Tools suite – including AWS CodeDeploy, AWS CodeCommit, and AWS CodePipeline – using AWS’ provisioning tool, AWS CloudFormation.  He goes over the benefits of automating all the actions and stages into a deployment pipeline, and provides an example with a detailed screencast.

In the Future

Look to the Stelligent Blog for announcements, evaluations, and guides on new AWS products.  We are always looking for engineers who love to make things work better, faster, and just get a kick out of automating everything.  If you live and breathe DevOps, continuous delivery, and AWS, we want to hear from you.

Provision a hosted Git repo with AWS CodeCommit using CloudFormation

Recently, AWS announced that you can now automate the provisioning of a hosted Git repository with AWS CodeCommit using CloudFormation. This means that in addition to the console, CLI, and SDK, you can use declarative code to provision a new CodeCommit repository – providing greater flexibility in versioning, testing, and integration.

In this post, I’ll describe how engineers can provision a CodeCommit Git repository in a CloudFormation template. Furthermore, you’ll learn how to automate the provisioning of a deployment pipeline that uses this repository as its Source action to deploy an application using CodeDeploy to an EC2 instance. You’ll see examples, patterns, and a short video that walks you through the process.

Prerequisites

Here are the prerequisites for this solution:

These will be explained in greater detail in the Deployment Steps section.

Architecture and Implementation

In the figure below, you see the architecture for launching a pipeline that deploys software to an EC2 instance from code stored in a CodeCommit repository. You can click on the image to launch the template in CloudFormation Designer.

  • CloudFormation – All of the resources in this solution are defined in CloudFormation, a declarative language that can be written in JSON or YAML.
  • CodeCommit – With the addition of the AWS::CodeCommit::Repository resource, you can define your CodeCommit Git repositories in CloudFormation.
  • CodeDeploy – CodeDeploy automates the deployment to the EC2 instance that was provisioned by the nested stack.
  • CodePipeline – I’m defining CodePipeline’s stages and actions in CloudFormation code which includes using CodeCommit as a Source action and CodeDeploy for a Deploy action (For more information, see Action Structure Requirements in AWS CodePipeline).
  • EC2 – A nested CloudFormation stack is launched to provision a single EC2 instance on which the CodeDeploy agent is installed. The CloudFormation template called through the nested stack is provided by AWS.
  • IAM – An Identity and Access Management (IAM) Role is provisioned via CloudFormation which defines the resources that the pipeline can access.
  • SNS – A Simple Notification Service (SNS) Topic is provisioned via CloudFormation. The SNS topic is used by the CodeCommit repository for notifications.

CloudFormation Template

In this section, I’ll show code snippets from the CloudFormation template that provisions the entire solution. The focus of my samples is on the CodeCommit resources. There are several other resources defined in this template, including EC2, IAM, SNS, CodePipeline, and CodeDeploy. You can find a link to the template at the bottom of this post.

CodeCommit

In the code snippet below, you see that I’m using the AWS::CodeCommit::Repository CloudFormation resource. The repository name is provided as a parameter to the template. I created a trigger that sends notifications when the repository gets updated, using an SNS topic that is created as a dependent resource in the same CloudFormation template. This is based on the sample code provided by AWS.

    "MyRepo":{
      "Type":"AWS::CodeCommit::Repository",
      "DependsOn":"MySNSTopic",
      "Properties":{
        "RepositoryName":{
          "Ref":"RepoName"
        },
        "RepositoryDescription":"CodeCommit Repository",
        "Triggers":[
          {
            "Name":"MasterTrigger",
            "CustomData":{
              "Ref":"AWS::StackName"
            },
            "DestinationArn":{
              "Ref":"MySNSTopic"
            },
            "Events":[
              "all"
            ]
          }
        ]
      }
    },
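
Once the stack is created, you can verify the repository and its trigger from the CLI. This is just a sanity check, assuming the RepoName parameter was set to my-cc-repo:

# confirm the repository exists and view its clone URLs
aws codecommit get-repository --repository-name my-cc-repo

# confirm MasterTrigger is wired to the SNS topic
aws codecommit get-repository-triggers --repository-name my-cc-repo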

CodePipeline

In this CodePipeline snippet, you see how I’m using the CodeCommit repository resource as an input for the Source action in CodePipeline. In doing this, it polls the CodeCommit repository for any changes. When it discovers changes, it initiates an instance of the deployment pipeline in CodePipeline.

        "Stages":[
          {
            "Name":"Source",
            "Actions":[
              {
                "InputArtifacts":[

                ],
                "Name":"Source",
                "ActionTypeId":{
                  "Category":"Source",
                  "Owner":"AWS",
                  "Version":"1",
                  "Provider":"CodeCommit"
                },
                "OutputArtifacts":[
                  {
                    "Name":"MyApp"
                  }
                ],
                "Configuration":{
                  "BranchName":{
                    "Ref":"RepositoryBranch"
                  },
                  "RepositoryName":{
                    "Ref":"RepoName"
                  }
                },
                "RunOrder":1
              }
            ]
          },

You can see an illustration of this pipeline in the figure below.

cpl-cc

Costs

Since costs can vary widely in using certain AWS services and other tools, I’ve provided a cost breakdown and some sample scenarios to give you an idea of what your monthly spend might look like. The AWS Cost Calculator can assist in establishing cost projections.

  • CloudFormation – No additional cost
  • CodeCommit – If you’re using it on a small project with fewer than six users, there’s no additional cost. See AWS CodeCommit Pricing for more information.
  • CodeDeploy – No additional cost
  • CodePipeline – $1 a month per pipeline unless you’re using it as part of the free tier. For more information, see AWS CodePipeline pricing.
  • EC2 – Approximately $15/month if you’re running one t1.micro instance 24/7. See AWS EC2 Pricing for more information.
  • IAM – No additional cost
  • SNS – Considering you probably won’t have over 1 million Amazon SNS requests for this particular solution, there’s no cost. For more information, see AWS SNS Pricing.

So, for this particular sample solution, you’ll spend around $16/month if you run the EC2 instance for an entire month. If you just run it once and terminate it, you’ll spend a little over $1.

Patterns

Here are some patterns to consider when using CodeCommit with CloudFormation.

  • CodeCommit Template – While this solution embeds the CodeCommit creation in a single CloudFormation template, it’s unlikely you’ll be updating the repository definition with every application change. Consider a separate template that focuses on the CodeCommit repository and run it as part of an infrastructure pipeline that is updated only when new CloudFormation code is committed to it.
  • Centralized Repos – Most likely, you’ll want to host your CodeCommit repositories in a single AWS account and use cross-account IAM roles to share access across accounts in your organization. While you can create CodeCommit repos in any AWS account, it’ll likely lead to unnecessary complexity when engineers want to know where the code is located.

The last one is more of a conundrum than a pattern. As one of my colleagues posted in Slack:

I’m stuck in a recursive loop…where do I store my CloudFormation template for my CodeCommit repo?

Good question. I don’t have a good answer for that one just yet. Anyone have thoughts on this one? It gets very “meta”.

Deployment Steps

There are three main steps in launching this solution: preparing an AWS account, launching the stack, and testing the deployment. Each is described in more detail in this section.

Step 1. Prepare an AWS Account

  1. If you don’t already have an AWS account, create one at http://aws.amazon.com by following the on-screen instructions. Part of the sign-up process involves receiving a phone call and entering a PIN using the phone keypad. Be sure you’ve signed up for the CloudFormation service.
  2. Use the region selector in the navigation bar of the console to choose the Northern Virginia (us-east-1) region.
  3. Create a key pair. To do this, in the navigation pane of the Amazon EC2 console, choose Key Pairs, Create Key Pair, type a name, and then choose Create (or create one from the CLI, as shown below).
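
If you prefer the CLI for step 3, something like the following works; the key name is an example and should match the EC2KeyPairName parameter you pass to the stack:

# create the key pair and save the private key locally
aws ec2 create-key-pair --key-name stelligent-dev --query 'KeyMaterial' --output text > stelligent-dev.pem
chmod 400 stelligent-dev.pem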

Step 2. Launch the Stack

Click on the Launch Stack button below to launch the CloudFormation stack. Before you launch the stack, review the architecture, configuration, security, and other considerations discussed in this post. To download the template, click here.

Time to deploy: Approximately 7 minutes

The template includes default settings that you can customize by following the instructions in this post.

Create Details

Here’s a listing of the key AWS resources that are created when this stack is launched:

  • IAM – InstanceProfile, Policy, and Role
  • CodeCommit Repository – Hosts the versioned code
  • EC2 instance – with CodeDeploy agent installed
  • CodeDeploy – application and deployment
  • CodePipeline – deployment pipeline with CodeCommit Integration

CLI Example

Alternatively, you can launch the same stack from the command line as shown in the samples below.

Base Command

From an instance that has the AWS CLI installed, you can use the following snippet as a base command and append one of the two options described in the Parameters section below.

aws cloudformation create-stack --profile {AWS Profile Name} --stack-name {Stack Name} --capabilities CAPABILITY_IAM --template-url "https://s3.amazonaws.com/stelligent-public/cloudformation-templates/github/labs/codecommit/codecommit-cpl-cfn.json"
Parameters

I’ve provided two ways to run the command – from a custom parameters file or from the CLI.

Option 1 – Custom Parameters JSON File

By appending the option below to the base command, you can pass parameters from a file, as shown in the sample.

--parameters file:///localpath/to/example-parameters-cpl-cfn.json
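
The parameters file is a JSON array of key/value pairs. Here is a sketch of what example-parameters-cpl-cfn.json might contain, using the same keys shown in Option 2 (all values are examples you would replace with your own):

cat > example-parameters-cpl-cfn.json <<'EOF'
[
  { "ParameterKey": "EC2KeyPairName", "ParameterValue": "stelligent-dev" },
  { "ParameterKey": "EmailAddress",   "ParameterValue": "jsmith@example.com" },
  { "ParameterKey": "RepoName",       "ParameterValue": "my-cc-repo" }
]
EOF
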
Option 2 – Pass Parameters on CLI

Another way to launch the stack from the command line is to pass the parameter values inline, as shown in the sample below.

--parameters ParameterKey=EC2KeyPairName,ParameterValue=stelligent-dev ParameterKey=EmailAddress,ParameterValue=jsmith@example.com ParameterKey=RepoName,ParameterValue=my-cc-repo

Step 3. Test the Deployment

Click on the CodePipelineURL Output in your CloudFormation stack. You’ll see that the pipeline has failed on the Source action. This is because the Source action expects a populated repository and it’s empty. The way to resolve this is to commit the application files to the newly-created CodeCommit repository. First, you’ll need to clone the repository locally. To do this, get the CloneUrlSsh Output from the CloudFormation stack you launched in Step 2. A sample command is shown below. You’ll replace {CloneUrlSsh} with the value from the CloudFormation stack output. For more information on using SSH to interact with CodeCommit, see the Connect to the CodeCommit Repository section at: Create and Connect to an AWS CodeCommit Repository.

git clone {CloneUrlSsh}
cd {localdirectory}

Once you’ve cloned the repository locally, download the sample application files from SampleApp_Linux.zip and place the files directly into your local repository. Do not include the SampleApp_Linux folder. Go to the local directory and type the following to commit and push the new files to the CodeCommit repository:

git add .
git commit -am "add all files from the AWS sample linux codedeploy application"
git push

Once these files have been committed, CodePipeline will discover the changes in CodeCommit and start a new pipeline execution; both stages and their actions should succeed as a result of this change.
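
You can watch this from the CLI as well. The pipeline name below is a placeholder; the actual name is generated by the stack and can be found with aws codepipeline list-pipelines:

# show the latest status of each stage in the pipeline
aws codepipeline get-pipeline-state --name my-cc-pipeline --query 'stageStates[].{stage:stageName,status:latestExecution.status}'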

Access the Application

Once the CloudFormation stack has successfully completed, go to CodeDeploy and select Deployments. For example, if you’re in the us-east-1 region, the URL might look like: https://console.aws.amazon.com/codedeploy/home?region=us-east-1#/deployments (You can also find this link in the CodeDeployURL Output of the CloudFormation stack you launched). Next, click on the link for the Deployment Id of the deployment you just launched from CloudFormation. Then, click on the link for the Instance Id. From the EC2 instance, copy the Public IP value and paste into your browser and hit enter. You should see a page like the one below.

codedeploy_before
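
If you would rather skip the console click-through, a rough CLI equivalent looks like this; the deployment and instance IDs are placeholders returned by the earlier calls:

# find the most recent deployment (ordering isn't guaranteed, so confirm it's the one you expect)
aws deploy list-deployments --query 'deployments[0]' --output text

# list the instance(s) targeted by that deployment
aws deploy list-deployment-instances --deployment-id d-EXAMPLE1111

# get the instance's public IP address
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 --query 'Reservations[0].Instances[0].PublicIpAddress' --output text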

Commit Changes to CodeCommit

Make some visual changes to the index.html (look for background-color) and commit these changes to your CodeCommit repository to see these changes get deployed through your pipeline. You perform these actions from the directory where you cloned the local version of your CodeCommit repo (in the directory created by your git clone command). To push these changes to the remote repository, see the commands below.

git commit -am "change bg color to burnt orange"
git push

Once these changes have been committed, CodePipeline will discover the changes made to your CodeCommit repo and initiate a new pipeline. After the pipeline is successfully completed, follow the same instructions for launching the application from your browser. You’ll see that the color of the index page of the application has changed.

codedeploy_after

How-To Video

In this video, I walk through the deployment steps described above.

Additional Resources

Here are some additional resources you might find useful:

Summary

In this post, you learned how to define and launch a CloudFormation stack that provisions a CodeCommit Git repository in code. Additionally, the example automated a CodePipeline deployment pipeline (including the CodeCommit integration) and created and ran the deployment on an EC2 instance using CodeDeploy.

Furthermore, I described the prerequisites, architecture, implementation, costs, patterns and deployment steps of the solution.

Sample Code

The code for the examples demonstrated in this post is located at https://github.com/stelligent/cloudformation_templates/blob/master/labs/codecommit/. Let us know if you have any comments or questions @stelligent or @paulduvall.

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “one-button everything” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

Microservices Platform with ECS

UPDATE: The work for this blog post inspired the creation of a tool named mu to simplify the management of your microservices in ECS.  Learn more by visiting getmu.io!

Architecting applications with microservices is all the rage with developers right now, but running them at scale with cost efficiency and high availability can be a real challenge. In this post, we will address this challenge by looking at an approach to building microservices with Spring Boot and deploying them with CloudFormation on AWS EC2 Container Service (ECS) and Application Load Balancers (ALB). We will start with describing the steps to build the microservice, then walk through the platform for running the microservices, and finally deploy our microservice on the platform.

Spring Boot was chosen for the microservice development as it is a very popular framework in the Java community for building “stand-alone, production-grade Spring based Applications” quickly and easily. However, since ECS just runs Docker containers, you can substitute your preferred development framework for Spring Boot and the platform described in this post will still be able to run your microservice.

This post builds upon a prior post called Automating ECS: Provisioning in CloudFormation that does an awesome job of explaining how to use ECS. If you are new to ECS, I’d highly recommend you review that before proceeding. This post will expand upon that by using the new Application Load Balancer that provides two huge features to improve the ECS experience:

  • Target Groups: Previously in a “Classic” Elastic Load Balancer (ELB), all targets had to be able to handle all possible types of requests that the ELB received. Now with target groups, you can route different URLs to different target groups, allowing heterogeneous deployments. Specifically, you can have two target groups that handle different URLs (e.g., /bananas and /apples) and use the ALB to route traffic appropriately.
  • Per Target Ports: Previously in an ELB, all targets had to listen on the same port for traffic from the ELB. In ECS, this meant that you had to manage the ports that each container listened on. Additionally, you couldn’t run multiple instances of a given container on a single ECS container instance since they would have different ports. Now, each container can use an ephemeral port (next available assigned by ECS) making port management and scaling up on a single ECS container instance a non-issue.

The infrastructure we create will look like the diagram below. Notice that there is a single shared ECS cluster and a single shared ALB with a target group, EC2 Container Registry (ECR) and ECS Service for each microservice deployed to the platform. This approach enables a cost efficient solution by using a single pool of compute resources for all the services. Additionally, high availability is accomplished via an Auto Scaling Group (ASG) for the ECS container instances that spans multiple Availability Zones (AZ).

ms-architecture-3
Set Up Your Development Environment

You will need to install the Spring Boot CLI to get started. The recommended way is to use SDKMAN! for the installation. First install SDKMAN! with:

 $ curl -s "https://get.sdkman.io" | bash

Then, install Spring Boot with:

$ sdk install springboot

Alternatively, you could install with Homebrew:

$ brew tap pivotal/tap
$ brew install springboot

Scaffold Your Microservice Project

For this example, we will be creating a microservice to manage bananas. Use the Spring Boot CLI to create a project:

$ spring init --build=gradle --package-name=com.stelligent --dependencies=web,actuator,hateoas -n Banana banana-service

This will create a new subdirectory named banana-service with the skeleton of a microservice in src/main/java/com/stelligent and a build.gradle file.

Develop the Microservice

Development of the microservice is a topic for an entire post of its own, but let’s look at a few important bits. First, the application is defined in BananaApplication:

@SpringBootApplication
public class BananaApplication {

  public static void main(String[] args) {
    SpringApplication.run(BananaApplication.class, args);
  }
}

The @SpringBootApplication annotation marks the location to start component scanning and enables configuration of the context within the class.

Next, we have the controller class, which contains the declaration of the REST routes.

@RequestMapping("/bananas")
@RestController
public class BananaController {

  @RequestMapping(method = RequestMethod.POST)
  public @ResponseBody BananaResource create(@RequestBody Banana banana)
  {
    // create a banana...
  }

  @RequestMapping(path = "/{id}", method = RequestMethod.GET)
  public @ResponseBody BananaResource retrieve(@PathVariable long id)
  {
    // get a banana by its id
  }

}

These sample routes handle a POST of JSON banana data to /bananas for creating a new banana, and a GET from /bananas/1234 for retrieving a banana by its id. To view a complete implementation of the controller including support for POST, PUT, GET, PATCH, and DELETE as well as HATEOAS for links between resources, check out BananaController.java.

Additionally, to look at how to accomplish unit testing of the services, check out the tests created in BananaControllerTest.java using WebMvcTest, MockMvc and Mockito.
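
Before deploying anywhere, you can sanity-check the service locally. This sketch assumes the generated Gradle wrapper, Spring Boot's default port of 8080, and a Banana payload with a simple field such as name (adjust to the actual model):

# run the service locally
./gradlew bootRun

# in another terminal: create a banana, then fetch it by id
curl -s -X POST -H 'Content-Type: application/json' -d '{"name":"cavendish"}' http://localhost:8080/bananas
curl -s http://localhost:8080/bananas/1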

Create Microservice Platform

The platform will consist of a separate CloudFormation stack that contains the following resources:

  • VPC – To provide the network infrastructure to launch the ECS container instances into.
  • ECS Cluster – The cluster that the services will be deployed into.
  • Auto Scaling Group – To manage the ECS container instances that contain the compute resources for running the containers.
  • Application Load Balancer – To provide load balancing for the microservices running in containers. Additionally, this provides service discovery for the microservices.

ms-architecture-1.png

The template is available at platform.template. The AMIs used by the Launch Configuration for the EC2 Container Instances must be the ECS-optimized AMIs:

Mappings:
  AWSRegionToAMI:
    us-east-1:
      AMIID: ami-2b3b6041
    us-west-2:
      AMIID: ami-ac6872cd
    eu-west-1:
      AMIID: ami-03238b70
    ap-northeast-1:
      AMIID: ami-fb2f1295
    ap-southeast-2:
      AMIID: ami-43547120
    us-west-1:
      AMIID: ami-bfe095df
    ap-southeast-1:
      AMIID: ami-c78f43a4
    eu-central-1:
      AMIID: ami-e1e6f88d
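
Hard-coded AMI IDs like these go stale as AWS releases new ECS-optimized images. One alternative, assuming the SSM public parameter path below is available in your region, is to look up the current ECS-optimized AMI at deploy time and pass it to the stack as a parameter:

# current Amazon Linux 2 ECS-optimized AMI ID for the region configured in your CLI
aws ssm get-parameters --names /aws/service/ecs/optimized-ami/amazon-linux-2/recommended/image_id --query 'Parameters[0].Value' --output text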

Additionally, the EC2 Container Instances must have the ECS Agent configured to register with the newly created ECS Cluster:

  ContainerInstances:
    Type: AWS::AutoScaling::LaunchConfiguration
    Metadata:
      AWS::CloudFormation::Init:
        config:
          commands:
            01_add_instance_to_cluster:
              command: !Sub |
                #!/bin/bash
                echo ECS_CLUSTER=${EcsCluster}  >> /etc/ecs/ecs.config
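
After the Auto Scaling Group launches instances with this configuration, you can confirm that they registered with the cluster. The cluster name below is a placeholder for whatever the EcsCluster resource resolved to (check the stack's resources or outputs):

# count and list the container instances that joined the cluster
aws ecs describe-clusters --clusters my-platform-cluster --query 'clusters[0].registeredContainerInstancesCount'
aws ecs list-container-instances --cluster my-platform-cluster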

Next, an Application Load Balancer is created for the later stacks to register with:

 EcsElb:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Subnets:
      - !Ref PublicSubnetAZ1
      - !Ref PublicSubnetAZ2
      - !Ref PublicSubnetAZ3
  EcsElbListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref EcsElb
      DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref EcsElbDefaultTargetGroup
      Port: '80'
      Protocol: HTTP

Finally, we have a Gradle task in our build.gradle for upserting the platform CloudFormation stack, based on a custom task named StackUpTask defined in buildSrc.

task platformUp(type: StackUpTask) {
    region project.region
    stackName "${project.stackBaseName}-platform"
    template file("ecs-resources/platform.template")
    waitForComplete true
    capabilityIam true
    if(project.hasProperty('keyName')) {
        stackParams['KeyName'] = project.keyName
    }
}

Simply run the following to create/update the platform stack:

$ gradle platformUp
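
The task waits for CloudFormation to finish, but you can also confirm the result from the CLI. The stack name below assumes stackBaseName was set to microservice-exemplar; substitute your own value:

# wait for the platform stack and print its final status
aws cloudformation wait stack-create-complete --stack-name microservice-exemplar-platform
aws cloudformation describe-stacks --stack-name microservice-exemplar-platform --query 'Stacks[0].StackStatus' --output text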

Deploy Microservice

Once the platform stack has been created, there are two additional stacks to create for each microservice. First, there is a repo stack that creates the EC2 Container Registry (ECR) for the microservice. This stack also creates a target group for the microservice and adds the target group to the ALB with a rule for which URL path patterns should be routed to the target group.

The second stack is for the service and creates the ECS task definition based on the version of the docker image that should be run, as well as the ECS service which specifies how many tasks to run and the ALB to associate with.

The reason for the two stacks is that you must have the ECR provisioned before you can push a Docker image to it, and you must have a Docker image in the ECR before creating the ECS service. Ideally, you would create the repo stack once, then configure a CodePipeline job to continuously push code changes to ECR as new images and update the service stack to reference the newly pushed image.
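
For context, a hand-rolled push to ECR (which the project's dockerPushImage Gradle task takes care of for you) looks roughly like the following; the account ID, region, repository name, and tag are placeholders, and the login command assumes a recent AWS CLI:

# authenticate Docker to the ECR registry created by the repo stack
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# build, tag, and push the microservice image
docker build -t banana-service .
docker tag banana-service:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/banana-service:1.0.0
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/banana-service:1.0.0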

ms-architecture-2.png

The entire repo template is available at repo.template. An important new resource to check out is the ALB Listener Rule, which provides the URL patterns that should be handled by the new target group that is created:

EcsElbListenerRule:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Properties:
      Actions:
      - Type: forward
        TargetGroupArn: !Ref EcsElbTargetGroup
      Conditions:
      - Field: path-pattern
        Values: ["/bananas"]
      ListenerArn: !Ref EcsElbListenerArn
      Priority: 1

The entire service template is available at service.template, but notice that the ECS Task Definition uses port 0 for HostPort. This allows for ephemeral ports that are assigned by ECS to remove the requirement for us to manage container ports:

 MicroserviceTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      ContainerDefinitions:
      - Name: banana-service
        Cpu: '10'
        Essential: 'true'
        Image: !Ref ImageUrl
        Memory: '300'
        PortMappings:
        - HostPort: 0
          ContainerPort: 8080
      Volumes: []

Next, notice how the ECS Service is created and associated with the newly created Target Group:

 EcsService:
    Type: AWS::ECS::Service
    Properties:
      Cluster: !Ref EcsCluster
      DesiredCount: 6
      DeploymentConfiguration:
        MaximumPercent: 100
        MinimumHealthyPercent: 0
      LoadBalancers:
      - ContainerName: microservice-exemplar-container
        ContainerPort: '8080'
        TargetGroupArn: !Ref EcsElbTargetGroupArn
      Role: !Ref EcsServiceRole
      TaskDefinition: !Ref MicroserviceTaskDefinition

Finally, we have a Gradle task in our service build.gradle for upserting the repo CloudFormation stack:

task repoUp(type: StackUpTask) {
 region project.region
 stackName "${project.stackBaseName}-repo-${project.name}"
 template file("../ecs-resources/repo.template")
 waitForComplete true
 capabilityIam true

 stackParams['PathPattern'] ='/bananas'
 stackParams['RepoName'] = project.name
}

And then another to upsert the service CloudFormation stack:

task serviceUp(type: StackUpTask) {
 region project.region
 stackName "${project.stackBaseName}-service-${project.name}"
 template file("../ecs-resources/service.template")
 waitForComplete true
 capabilityIam true

 stackParams['ServiceDesiredCount'] = project.serviceDesiredCount
 stackParams['ImageUrl'] = "${project.repoUrl}:${project.revision}"

 mustRunAfter dockerPushImage
}

And finally, a task to coordinate the management of the stacks and the build/push of the image:

task deploy(dependsOn: ['dockerPushImage', 'serviceUp']) {
  description "Upserts the repo stack, pushes a docker image, then upserts the service stack"
}

dockerPushImage.dependsOn repoUp

This then provides a simple command to deploy new or update existing microservices:

$ gradle deploy
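
Once the deploy completes, a quick smoke test is to hit the service through the ALB. This sketch assumes the platform's ALB is the only load balancer in the account and region; otherwise, pull its DNS name from the platform stack's outputs:

# look up the ALB DNS name and request the service's path pattern
ALB_DNS=$(aws elbv2 describe-load-balancers --query 'LoadBalancers[0].DNSName' --output text)
curl -s "http://${ALB_DNS}/bananas"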

Define a similar build.gradle file in other microservices to deploy them to the same platform.

Blue/Green Deployment

When you run gradle deploy, the existing service stack is updated to use a new task definition that references a new Docker image in ECR. This CloudFormation update causes ECS to do a rolling replacement of the containers, launching new containers with the new image and stopping the containers running the old image.

However, if you are looking for a more traditional blue/green deployment, this could be accomplished by creating a new service stack (the green stack) with the new Docker image rather than updating the existing one. The new stack would attach to the existing ALB target group, at which point you could update the existing service stack (the blue stack) to no longer reference the ALB target group, which would take it out of service without killing the containers.

Next Steps

Stay tuned for future blog posts that build on this platform, accomplishing service discovery in a more decoupled manner through the use of Eureka as a service registry, Ribbon as a service client, and Zuul as an edge router.

Additionally, this solution isn’t complete since there is no Continuous Delivery pipeline defined. Look for an additional post showing how to use CodePipeline to orchestrate the movement of changes to the microservice source code into production.

The code for the examples demonstrated in this post is located at https://github.com/stelligent/microservice-exemplar. Let us know if you have any comments or questions @stelligent.

Are you interested in building resilient applications in AWS? Stelligent is hiring!

DevOps in AWS Radio: Orchestrating Docker containers with AWS ECS, ECR and CodePipeline (Episode 4)

In this episode, Paul Duvall and Brian Jakovich from Stelligent cover recent DevOps in AWS news and speak about the AWS EC2 Container Service (ECS), AWS EC2 Container Registry (ECR), HashiCorp Consul, AWS CodePipeline, and other tools in providing Docker-based solutions for customers. Here are the show notes:

DevOps in AWS News

Episode Topics

  1. Benefits of using ECS, ECR, Docker, etc.
  2. Components of ECS, ECR and Service Discovery
  3. Orchestrating and automating the deployment pipeline using CloudFormation, CodePipeline, Jenkins, etc. 

Blog Posts

  1. Automating ECS: Provisioning in CloudFormation (Part 1)
  2. Automating ECS: Orchestrating in CodePipeline and CloudFormation (Part 2)

About DevOps in AWS Radio

On DevOps in AWS Radio, we cover topics around applying DevOps principles and practices such as Continuous Delivery in the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners into our roundtables and speak with engineers who’ve recently published on our blog and we’ll also be reaching out to the wider DevOps in AWS community to get their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, Containers, Serverless, etc.)
    • Storage (e.g. S3, EBS, etc.)
    • Database and Data (RDS, DynamoDB, etc.)
  • Organizational and Team Structures and Practices
  • Team and Organization Communication and Collaboration
  • Cultural Indicators
  • Version control systems and processes
  • Deployment Pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service Architectures – e.g. Microservices
  • Automation of Build and deployment processes
  • Automation of testing and other verification approaches, tools and systems
  • Automation of security practices and approaches
  • Continuous Feedback systems
  • Many other Topics…

Automating Habitat with AWS CodePipeline

This article outlines a proof-of-concept (POC) for automating Habitat operations from AWS CodePipeline. Habitat is Chef’s new application automation platform that provides a packaging system that results in apps that are “immutable and atomically deployed, with self-organizing peer relationships.”  Habitat is an innovative technology for packaging applications, but a Continuous Delivery pipeline is still required to automate deployments.  For this exercise I’ve opted to build a lightweight pipeline using CodePipeline and Lambda.

An in-depth analysis of how to use Habitat is beyond the scope for this post, but you can get a good introduction by following their tutorial. This POC essentially builds a CD pipeline to automate the steps described in the tutorial, and builds the same demo app (mytutorialapp). It covers the “pre-artifact” stages of the pipeline (Source, Commit, Acceptance), but keep an eye out for a future post which will flesh out the rest.

Also be sure to read the article “Continuous deployment with Habitat” which provides a good overview of how the developers of Habitat intend it to be used in a pipeline, including links to some repos to help implement that vision using Chef Automate.

Technology Overview

Application

The application we’re automating is called mytutorialapp. It is a simple “hello world” web app that runs on nginx. The application code can be found in the hab-demo repository.

Pipeline

The pipeline is provisioned by a CloudFormation stack and implemented with CodePipeline. The pipeline uses a Lambda function as an Action executor. This Lambda function delegates command execution to  an EC2 instance via an SSM Run Command: aws:runShellScript. The pipeline code can be found in the hab-demo-pipeline repository. Here is a simplified diagram of the execution mechanics:

hab_pipeline_diagram
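
A simplified sketch of the SSM Run Command mechanics the Lambda function relies on is shown below; the instance ID, command, and command ID are placeholders:

# send a shell command to the pipeline's EC2 worker via SSM Run Command
aws ssm send-command --document-name "AWS-RunShellScript" --instance-ids i-0123456789abcdef0 --parameters 'commands=["echo running a pipeline action"]'

# poll for the result using the CommandId returned above
aws ssm get-command-invocation --command-id EXAMPLE-COMMAND-ID --instance-id i-0123456789abcdef0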

Stack

The CloudFormation stack that provisions the pipeline also creates several supporting resources.  Check out the pipeline.json template for details, but here is a screenshot to show what’s included:

hab_demo_cfn_results

Pipeline Stages

Here’s an overview of the pipeline structure. For the purpose of this article I’ve only implemented the Source, Commit, and Acceptance stages. This portion of the pipeline will get the source code from a git repo, build a Habitat package, build a Docker test environment, deploy the Habitat package to the test environment, run tests on it and then publish it to the Habitat Depot. All downstream pipeline stages can then source the package from the Depot.

  • Source
    • Clone the app repo
  • Commit
    • Stage-SourceCode
    • Initialize-Habitat
    • Test-StaticAnalysis
    • Build-HabitatPackage
  • Acceptance
    • Create-TestEnvironment
    • Test-HabitatPackage
    • Publish-HabitatPackage

Action Details

Here are the details for the various pipeline actions. These action implementations are defined in a “pipeline-runner” Lambda function and invoked by CodePipeline. Upon invocation, the scripts are executed on an EC2 box that gets provisioned at the same time as the code pipeline.

Commit Stage

Stage-SourceCode

Pulls down the source code artifact from S3 and unzips it.

Initialize-Habitat

Sets Habitat environment variables and generates/uploads a key to access my Origin on the Habitat Depot.

Test-StaticAnalysis

Runs static analysis on plan.sh using bash -n.

Build-HabitatPackage

Builds the Habitat package.

Acceptance Stage

Create-TestEnvironment

Creates a Docker test environment by running a Habitat package export command inside the Habitat Studio.

Test-HabitatPackage

Runs a Bats test suite which verifies that the webserver is running and the “hello world” page is displayed.
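
For readers new to Bats, a test along these lines would cover both checks. It is only a sketch, assuming the test container publishes the site on localhost port 80 and that the page body contains "Hello":

#!/usr/bin/env bats

@test "web server is running" {
  run curl -s -o /dev/null -w '%{http_code}' http://localhost
  [ "$status" -eq 0 ]
  [ "$output" = "200" ]
}

@test "hello world page is displayed" {
  run curl -s http://localhost
  [[ "$output" == *"Hello"* ]]
}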

Publish-HabitatPackage

Uploads the Habitat package to the Depot. In a later pipeline stage, a package deployment can be sourced directly from the Depot.

Wrapping up

This post provided an early look at a mechanism for automating Habitat deployments from AWS CodePipeline. There is still a lot of work to be done on this POC project so keep an eye out for later posts that describe the mechanics of the rest of the pipeline.

Do you love Chef and Habitat? Do you love AWS? Do you love automating software development workflows to create CI/CD pipelines? If you answered “Yes!” to any of these questions then you should come work at Stelligent. Check out our Careers page to learn more.