Continuous Delivery to S3 via CodePipeline and CodeBuild

This blog post demonstrates Continuous Delivery of a static website to Amazon S3 via AWS CodeBuild and AWS CodePipeline. At the conclusion, you will be able to provision all of the AWS resources by clicking a “Launch Stack” button and stepping through the AWS CloudFormation wizard to launch a solution stack.

S3 is useful when you want to host static files such as HTML and image files as a website for others to access. Fortunately, S3 provides the capability to configure a bucket for static website hosting. For more information on manually configuring this for a custom domain, see Example: Setting up a Static Website Using a Custom Domain.

However, once you go through this process manually a few times, and if you’re like me, you’ll quickly grow tired of manually uploading new files, deleting old files, and setting the permissions for the files in the S3 bucket.

In this example, all the source files are hosted in GitHub and can be made available to developers. All of the steps in the process are orchestrated via CodePipeline and the build and deployment actions are performed by CodeBuild. The provisioning of all of the AWS resources is defined in a CloudFormation template.

By automating the actions and stages into a deployment pipeline, you can release changes to users in production whenever you choose to do so, without repeatedly uploading files to S3 by hand. Instead, you just commit the changes to the GitHub repository and the pipeline orchestrates the rest. While this is a simple example, you can follow the same model and tools for much larger and more sophisticated applications.

Figure 1 shows this deployment pipeline in action.

Figure 1 – Deployment Pipeline in CodePipeline to deploy a static website to S3

The remainder of this post describes how to configure the solution in your AWS account.

Prerequisites

Here are the prerequisites for this solution:

  • AWS Account – Follow these instructions to create an AWS Account: Creating an AWS Account. Grant IAM privileges for at least CodeBuild, CodePipeline, CloudFormation, IAM, and S3.
  • Fork GitHub Repo – Fork the stelligent/devops-essentials GitHub repository and clone your fork
  • OAuth Token – Create an OAuth token in GitHub and provide access to the admin:repo_hook and repo scopes.

To see these steps in more detail, go to devopsessentialsaws.com and see section 2.1 Configure course prerequisites.

Architecture and Implementation

In Figure 2, you see the architecture for provisioning an infrastructure that launches a deployment pipeline to orchestrate the build of the solution. You can click on the image to launch the template in CloudFormation Designer within your AWS account.

Figure 2 – CloudFormation Template for provisioning AWS resources

The components of this solution are described in more detail below:

  • AWS CloudFormation – All of the resource provisioning for this solution is described in CloudFormation, a declarative code language that can be written in JSON or YAML (or generated by more expressive domain-specific languages)
  • AWS CodePipeline – The CodePipeline stages and actions are defined in a CloudFormation template. This includes CodePipeline’s integration with GitHub, CodeBuild, and CloudFormation (for more information, see Action Structure Requirements in AWS CodePipeline).
  • GitHub – CodePipeline connects with an existing GitHub repository using the GitHub Source provider action.
  • AWS CodeBuild – Creates a CodeBuild project using the AWS::CodeBuild::Project resource to copy the static website files to the S3 website bucket (a sketch of this resource appears after the CodePipeline snippet later in this post)
  • AWS IAM – An Identity and Access Management (IAM) Role is provisioned using the AWS::IAM::Role resource which defines the resources that the pipeline, CloudFormation, and other resources can access.

CloudFormation Template

In this section, I’ll highlight a few code snippets from the CloudFormation template that automates the provisioning of the AWS Developer Tools stack along with other resources including S3 and IAM.

S3 Buckets

There are two S3 buckets provisioned in this CloudFormation template. The SiteBucket resource defines the S3 bucket that hosts the website files copied from the GitHub source. The PipelineBucket stores the artifacts that CodePipeline passes between stages of the deployment pipeline.

  SiteBucket:
    Type: AWS::S3::Bucket
    Properties:
      AccessControl: PublicRead
      BucketName: !Ref SiteBucketName
      WebsiteConfiguration:
        IndexDocument: index.html
  PipelineBucket:
    Type: AWS::S3::Bucket

IAM Role

The IAM role for CodePipeline grants the pipeline the permissions it needs to access the resources required to deploy the static website.

  CodePipelineRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
        - Effect: Allow
          Principal:
            Service:
            - codepipeline.amazonaws.com
          Action:
          - sts:AssumeRole
      Path: "/"
      Policies:
      - PolicyName: codepipeline-service
        PolicyDocument:
          Statement:
          - Action:
            - codebuild:*
            Resource: "*"
            Effect: Allow
          - Action:
            - s3:GetObject
            - s3:GetObjectVersion
            - s3:GetBucketVersioning
            Resource: "*"
            Effect: Allow
          - Action:
            - s3:PutObject
            Resource:
            - arn:aws:s3:::codepipeline*
            Effect: Allow
          - Action:
            - s3:*
            - cloudformation:*
            - iam:PassRole
            Resource: "*"
            Effect: Allow
          Version: '2012-10-17'

CodePipeline

The CodePipeline pipeline CloudFormation snippet shown below defines the two stages and two actions that orchestrate the deployment of the static website. The Source action within the Source stage configures GitHub as the source provider. Then, the pipeline moves to the Deploy stage, which runs CodeBuild to copy all the HTML and other assets to an S3 bucket that’s configured for website hosting.

  Pipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      RoleArn: !GetAtt CodePipelineRole.Arn
      Stages:
      - Name: Source
        Actions:
        - InputArtifacts: []
          Name: Source
          ActionTypeId:
            Category: Source
            Owner: ThirdParty
            Version: '1'
            Provider: GitHub
          OutputArtifacts:
          - Name: SourceOutput
          Configuration:
            Owner: !Ref GitHubUser
            Repo: !Ref GitHubRepo
            Branch: !Ref GitHubBranch
            OAuthToken: !Ref GitHubToken
          RunOrder: 1
      - Name: Deploy
        Actions:
        - Name: Artifact
          ActionTypeId:
            Category: Build
            Owner: AWS
            Version: '1'
            Provider: CodeBuild
          InputArtifacts:
          - Name: SourceOutput
          OutputArtifacts:
          - Name: DeployOutput
          Configuration:
            ProjectName: !Ref CodeBuildDeploySite
          RunOrder: 1
      ArtifactStore:
        Type: S3
        Location: !Ref PipelineBucket
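
The Deploy action above refers to a CodeBuild project named CodeBuildDeploySite, which isn’t shown in the snippets. As a rough sketch only (the service role, build image, buildspec contents, and paths below are illustrative assumptions rather than excerpts from the actual template), such a project could look like this:

  CodeBuildDeploySite:
    Type: AWS::CodeBuild::Project
    Properties:
      Name: !Sub ${AWS::StackName}-DeploySite
      # Assumes a separate CodeBuildRole (AWS::IAM::Role) granting S3 and CloudWatch Logs access
      ServiceRole: !GetAtt CodeBuildRole.Arn
      Artifacts:
        Type: CODEPIPELINE
      Environment:
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/ubuntu-base:14.04
        Type: LINUX_CONTAINER
      Source:
        Type: CODEPIPELINE
        # Inline buildspec that copies the site's HTML assets to the website bucket
        BuildSpec: !Sub |
          version: 0.1
          phases:
            post_build:
              commands:
                - aws s3 cp --recursive --acl public-read ./html s3://${SiteBucketName}/
          artifacts:
            files:
              - ./html/index.html
      TimeoutInMinutes: 10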

Costs

Costs vary as you use certain AWS services and other tools, so this section provides a cost breakdown and some sample scenarios to give you an idea of what your monthly spend might look like. Note that this depends on your unique environment and deployment, and the AWS Cost Calculator can assist in establishing cost projections.

  • CloudFormation – No additional cost.
  • CodeBuild – CodeBuild charges per minute used. It comes with 100 minutes per month at no charge. For a simple execution of this demo, you can stay within the limits of the AWS Free Tier – please read about the Free Tier here. For more information, see AWS CodeBuild pricing.
  • CodePipeline – Customers can create new pipelines without incurring any charges on that pipeline for the first thirty calendar days. After that period, the new pipelines will be charged at the existing rate of $1 per active pipeline per month. For more information, see AWS CodePipeline pricing.
  • GitHub – No charge for public repositories
  • IAM – No additional cost.
  • S3 – If you launch the solution and delete the S3 bucket, it’ll be pennies (if that). See S3 Pricing.

The bottom line on pricing for this particular example is that you will be charged no more than a few pennies if you launch the solution, run through a few changes, and then terminate the CloudFormation stack and associated AWS resources.

Deployment Steps

There are three main steps in launching this solution: preparing an AWS account, launching the stack, and testing the deployment. Each is described in more detail in this section. Please note that you are responsible for any charges incurred while creating and launching your solution.

Step 1. Prepare an AWS Account

  1. If you don’t already have an AWS account, create one at http://aws.amazon.com by following the on-screen instructions. Part of the sign-up process involves receiving a phone call and entering a PIN using the phone keypad. Be sure you’ve signed up for the CloudFormation service.
  2. Use the region selector in the navigation bar of the console to choose the Northern Virginia (us-east-1) region

Step 2. Launch the Stack

Click on the “Launch Stack” button below to launch the CloudFormation stack. Before you launch the stack, review the architecture, configuration, and other considerations discussed in this post. To download the template, click here.

Time to deploy: Approximately 5 minutes

The template includes default settings that you can customize by following the instructions in this post.

Step 3. Test the Deployment

Here are the steps to test the deployment:

  1. Once the CloudFormation stack is complete, select the checkbox next to the stack and go to the Outputs tab
  2. Click on the PipelineUrl link to launch the CodePipeline pipeline.
  3. Click on the SiteUrl link to launch the website that was configured and launched as part of the deployment pipeline
  4. From your Terminal, type (replacing YOURGITHUBUSERID with your GitHub userid):
    git clone https://github.com/YOURGITHUBUSERID/devops-essentials
  5. Make obvious visual changes to any of your local files (for example, change .bg-primary{color:#fff;background-color: in your forked repo version of devops-essentials/html/css/bootstrap.min.css) and type the following from your Terminal:
    git commit -am "add new files" && git push
  6. Go back to your pipeline in CodePipeline and see the changes successfully flow through the pipeline.

DevOps Essentials on AWS Video Course

This and many more topics are covered in the DevOps Essentials on AWS Complete Video Course (Udemy, InformIT, SafariBooksOnline). In it, you’ll learn how to automate infrastructure and deployment pipelines using AWS services and tools, so if you’re a software or DevOps-focused engineer or architect interested in learning how to use AWS Developer and AWS Management Tools to create a full-lifecycle software delivery solution, it’s the course for you. The focus of the course is on deployment pipeline architectures and their implementations.

Additional Resources

Here are some of the supporting resources discussed in this post:

Did you find this post interesting? Are you passionate about working with the latest AWS technologies? If so, Stelligent is hiring and we would love to hear from you!

Acknowledgements

My colleague Casey Lee created the initial CodePipeline/CodeBuild/S3 CloudFormation template that’s the basis for this solution.

Introduction to NixOS

NixOS, and declarative immutable systems in general, are a great fit for CI/CD pipelines. With the entire system in code, ensuring and auditing reproducible environments becomes easy. Applications can also be “nixified,” so both system and application are fully declarative and in version control. The NixOS system is mounted read-only, which makes it a good fit for immutable autoscaling groups. Instance userdata may contain a valid NixOS configuration, which is applied on boot, implementing any necessary post-bake changes.

“NixOS. The Purely Functional Linux Distribution. NixOS is a Linux distribution with a unique approach to package and configuration management. Built on top of the Nix package manager, it is completely declarative, makes upgrading systems reliable, and has many other advantages.”
https://nixos.org/

There are four main parts to Nix and NixOS (Expressions, OS, Modules, and Tests). We will examine each one:

The Nix expression language

This is the “package manager” functionality of NixOS, which can be downloaded from here. Each open source project, including the Linux OS, has a “nix expression” that describes how it is to be built. This includes explicitly declaring each dependency. Each dependency is also declared with a nix expression, so the entire system is declarative. All Nix packages exist in the Nixpkgs GitHub repository. Here is the one for ElectricSheep, a distributed screen saver for evolving artificial organisms:
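
The actual expression lives in the Nixpkgs repository; as a simplified illustration only (the package name, URL, hash, and dependency below are placeholders, not the real electricsheep derivation), a typical package expression has roughly this shape:

{ stdenv, fetchurl, libpng }:

stdenv.mkDerivation rec {
  name = "example-package-1.0";

  # Placeholder source; real expressions pin an exact URL and content hash
  src = fetchurl {
    url = "https://example.com/${name}.tar.gz";
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };

  # Every build-time dependency is declared explicitly
  buildInputs = [ libpng ];

  meta = {
    description = "An example package expression";
    license = stdenv.lib.licenses.gpl2;
  };
}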

The underlying mechanism of dependency and package management is a system of symbolic links. Packages are built and deployed into an immutable “nix store”. This read-only location exists at /nix/store. In this location, each package name has a hash appended, computed by evaluating all of its build input dependencies. Therefore, we can have the same package available many times, each with a unique hash and a unique version of dependencies. Symbolic links from /run/current-system/sw/bin/ to the hashed package name in the nix store determine which package is called, as /run/current-system/sw/bin/ is in the user’s $PATH.

Should any change to a packaging expression happen, all packages depending on it would rebuild. If a mistake is found, it is easy to revert the change to the dependency, and rebuild. Ensuring the reproducibility of system packages is a huge win, and this solution to “dependency hell” works very well in practice. The Nix package manager can be run on any Linux distribution, such as Fedora or Ubuntu, and also works on Darwin/OSX as well. Custom code can also be packaged with nix expressions, and so both the app and the OS are fully declarative and reproducible.

The NixOS Linux distribution

Nix packaging of open source software, including Linux kernel and boot processes, make up the NixOS Linux distribution. Official releases are available online. NixOS Channels are the method for specifying which version of NixOS is to be installed. NixOS is controlled by the /etc/nixos/configuration.nix file, which declaratively defines the NixOS environment, including defining which NixOS channel is to be used on the system. Whenever configuration.nix is updated, a nixos-rebuild switch can be executed, which switches to the new configuration immediately, as well as adding a new “generation” to Grub/EFI, so the new version can be booted. Here is an example configuration.nix:
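
A minimal sketch (the hostname, packages, and services chosen here are illustrative):

{ config, pkgs, ... }:

{
  imports = [ ./hardware-configuration.nix ];

  # Boot loader; a new "generation" is added here on each nixos-rebuild switch
  boot.loader.grub.device = "/dev/sda";

  networking.hostName = "example-host";
  time.timeZone = "UTC";

  # System-wide packages
  environment.systemPackages = with pkgs; [ git vim ];

  # Declaratively enabled services
  services.openssh.enable = true;
  services.nginx.enable = true;

  system.stateVersion = "16.03";
}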

As of NixOS 16.03, AWS EC2 Instance Metadata support is built in. However, instead of the usual Cloudinit directives, the NixOS instance expects the Userdata to be a valid configuration.nix. Upon boot, the system will “switch” to what is defined in the provided configuration.nix via EC2 Userdata. This allows for immutable declarative instances in AWS AutoScaling Groups.

NixOS Configuration Management Modules

NixOS modules define how services are configured and run via systemd. Modules are written so that parameters can be set which correspond to how the systemd process is to be run.

This is the Buildbot NixOS Module, which writes out buildbot configuration, based on module parameters, and then ensures the service is running:
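
The module itself is part of Nixpkgs; structurally, a NixOS module declares options and then generates configuration (including systemd units) when enabled. A simplified sketch of that shape, using a hypothetical service name rather than the real buildbot options, looks like this:

{ config, lib, pkgs, ... }:

with lib;

let
  cfg = config.services.myservice;  # hypothetical service
in {
  options.services.myservice = {
    enable = mkEnableOption "myservice";
    port = mkOption {
      type = types.int;
      default = 8010;
      description = "Port the service listens on.";
    };
  };

  config = mkIf cfg.enable {
    # Render the service configuration and define the systemd unit
    systemd.services.myservice = {
      wantedBy = [ "multi-user.target" ];
      serviceConfig.ExecStart =
        "${pkgs.myservice}/bin/myservice --port ${toString cfg.port}";  # pkgs.myservice is a placeholder
    };
  };
}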

NixOS Tests

NixOS tests are a mechanism to ensure NixOS expressions and modules are working as expected. Virtual machines are spun up to perform the declared tests. Currently, VMs spun up for testing use the QEMU hypervisor, although NixOS tests are moving to libvirt, ideally supporting autodetection of available system virtualization technologies. As an example, here is the buildbot continuous integration server test:
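
The buildbot test lives under nixos/tests in the Nixpkgs repository; the general shape of a NixOS test (a simplified sketch using nginx rather than the actual buildbot test) looks like this:

import ./make-test.nix ({ pkgs, ... }: {
  name = "nginx-example";

  # One or more virtual machines, each with its own NixOS configuration
  machine = { config, pkgs, ... }: {
    services.nginx.enable = true;
  };

  # Assertions executed against the running VM(s)
  testScript = ''
    $machine->waitForUnit("nginx.service");
    $machine->waitForOpenPort(80);
  '';
})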

After downloading and building all dependencies, the test will perform a build that starts a QEMU/KVM virtual machine containing the nix system. It is also possible to bring up this test system interactively to facilitate debugging. The virtual machine mounts the Nix store of the host, which makes VM creation very fast, as no disk image needs to be created. These tests can then be implemented in a continuous integration environment such as Buildbot or Hydra.

In addition to declaratively expressing and testing system packages, applications can be nixified in the same way. Application tests can then be written in the same manner, so both system and application can go through a continuous integration pipeline. In a Docker microservices environment, where applications are defined as immutable containers, NixOS is the perfect host node OS, running the Docker daemon and Nomad, Kubernetes, etc.

NixOS is in fast active development. Many users are also NixOS contributors, so most nix packaging of open source projects stay up-to-date. Unstable and release channels are available. Installation is very well documented online. The ability to easily “switch” between configuration versions, or “generations,” which include their own grub/efi boot entry, makes for a great workstation distro.  The declarative reproducibility of a long term stable release, with cloudinit userdata support, makes for a great server distribution.

Thanks for reading,
@hackoflamb

DevOps in AWS Radio: AWS CodeStar (Episode 7)

In this episode, Paul Duvall and Brian Jakovich are joined by Trey McElhattan from Stelligent to cover recent DevOps in AWS news and speak about AWS CodeStar.

Stay tuned for Trey’s blog post on his experiences in using AWS CodeStar!

Here are the show notes:

DevOps in AWS News

Episode Topics

  1. What is AWS CodeStar? What are its key features?
  2. Which AWS tools does it use?
  3. What are the alternatives to using AWS CodeStar?
  4. If you’d like to switch one of the tools that CodeStar uses, how would you do this (e.g. use a different monitoring tools than CloudWatch)?
  5. Which are supported and how: SDKs, CLI, Console, CloudFormation, etc.?
  6. What’s the pricing model for CodeStar?

Additional Resources

  1. New- Introducing AWS CodeStar – Quickly Develop, Build, and Deploy Applications on AWS
  2. AWS CodeStar Product Details
  3. AWS CodeStar Main Page

About DevOps in AWS Radio

On DevOps in AWS Radio, we cover topics around applying DevOps principles and practices such as Continuous Delivery in the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners into our roundtables and speak with engineers who’ve recently published on our blog and we’ll also be reaching out to the wider DevOps in AWS community to get their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, Containers, Serverless, etc.)
    • Storage (e.g. S3, EBS, etc.)
    • Database and Data (RDS, DynamoDB, etc.)
  • Organizational and Team Structures and Practices
  • Team and Organization Communication and Collaboration
  • Cultural Indicators
  • Version control systems and processes
  • Deployment Pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service Architectures – e.g. Microservices
  • Automation of Build and deployment processes
  • Automation of testing and other verification approaches, tools and systems
  • Automation of security practices and approaches
  • Continuous Feedback systems
  • Many other Topics…

Docker lifecycle automation and testing with Ruby in AWS

My friend and colleague Stephen Goncher and I got to spend some real time recently implementing a continuous integration and continuous delivery pipeline using only Ruby. We were successful in developing a new module in our pipeline gem that handles many of the Docker Engine needs without having to skimp out on testing and code quality. By using the swipely/docker-api gem we were able to write well-tested, DRY pipeline code that can be leveraged by future users of our environment with confidence.

Our environment included the use of Amazon Web Services’ Elastic Container Registry (ECR), which proved to be more challenging to implement than we originally anticipated. The purpose of this post is to help others implement some basic docker functionality in their pipelines more quickly than we did. In addition, we will showcase some of the techniques we used to test our docker images.

Quick look at the SDK

It’s important that you make the connection in your mind now that each interface in the docker gem has a corresponding API call in the Docker Engine. With that said, it would be wise to take a quick stroll through the documentation and API reference before writing any code. There are a few methods, such as Docker.authenticate!, that require some advanced configuration which is vaguely documented; you’ll need to combine all the sources to piece them together.

For those of you who are example-driven learners, be sure to check out the example project on GitHub that we put together to demonstrate these concepts.

Authenticating with ECR

We’re going to save you the trouble of fumbling through the various documentation by providing an example to authenticate with an Amazon ECR repository. The below example assumes you have already created a repository in AWS. You’ll also need to have an instance role attached to the machine you’re executing this snippet from or have your API key and secret configured.

Snippet 1. Using ruby to authenticate with Amazon ECR

require 'aws-sdk-core'
require 'base64'
require 'docker'

# AWS SDK ECR Client
ecr_client = Aws::ECR::Client.new

# Your AWS Account ID
aws_account_id = '1234567890'

# Grab your authentication token from AWS ECR
token = ecr_client.get_authorization_token(
 registry_ids: [aws_account_id]
).authorization_data.first

# Remove the https:// to authenticate
ecr_repo_url = token.proxy_endpoint.gsub('https://', '')

# Authorization token is given as username:password, split it out
user_pass_token = Base64.decode64(token.authorization_token).split(':')

# Call the authenticate method with the options
Docker.authenticate!('username' => user_pass_token.first,
                     'password' => user_pass_token.last,
                     'email' => 'none',
                     'serveraddress' => ecr_repo_url)

Pro Tip #1: The docker-api gem stores the authentication credentials in memory at runtime (see: Docker.creds.) If you’re using something like a Jenkins CI server to execute your pipeline in separate stages, you’ll need to re-authenticate at each step. Here’s an example of how the sample project accomplishes this.

Snippet 2. Using ruby to logout

Docker.creds = nil

Pro Tip #2: You’ll need to logout or deauthenticate from ECR in order to pull images from the public/default docker.io repository.
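
For example (the image name here is just an illustration):

# Clear the stored ECR credentials, then pull from the public Docker Hub registry
Docker.creds = nil
Docker::Image.create('fromImage' => 'alpine:3.4')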

Build, tag and push

The basic functions of the docker-api gem are pretty straightforward to use with a vanilla configuration. When you tie in a remote repository such as Amazon ECR, there can be some gotchas. Here are some more examples of the various stages of a docker image you’ll encounter with your pipeline. Now that you’re authenticated, let’s get to doing some real work!

The following snippets assume you’re authenticated already.

Snippet 3. The complete lifecycle of a basic Docker image

# Build our Docker image with a custom context
image = Docker::Image.build_from_dir(
 '/path/to/project',
 { 'dockerfile' => 'ubuntu/Dockerfile' }
)

# Tag our image with the complete endpoint and repo name
image.tag(repo: 'example.ecr.amazonaws.com/stelligent-example',
          tag: 'latest')

# Push only our tag to ECR
image.push(nil, tag: 'latest')

Integration Tests for your Docker Images

Here at Stelligent, we know that the key to software quality is writing tests. It’s part of our core DNA. So it’s no surprise we have a method for writing integration tests for our docker images. The solution uses Serverspec to launch the intermediate container, execute the tests, and compile the results, while the docker-api gem we’ve been learning builds the image and provides the image id into the context.

Snippet 5. Writing a serverspec test for a Docker Image

require 'serverspec'

describe 'Dockerfile' do
 before(:all) do
   set :os, family: :debian
   set :backend, :docker
   set :docker_image, '123456789' # image id
 end

 describe file('/usr/local/apache2/htdocs/index.html') do
   it { should exist }
   it { should be_file }
   it { should be_mode 644 }
   it { should contain('Automation for the People') }
 end

 describe port(80) do
   it { should be_listening }
 end
end

Snippet 6. Executing your test

$ rspec spec/integration/docker/stelligent-example_spec.rb

You’re Done!

Using a tool like the swipely/docker-api gem to drive your automation scripts is a huge step forward in providing fast, reliable feedback in your Docker pipelines compared to writing bash. By doing so, you’re able to write unit and integration tests for your pipeline code to ensure both your infrastructure and your application are well-tested. Not only can you unit test your docker-api implementation, but you can also leverage the AWS SDK’s ability to stub responses and take your testing a step further when implementing with Amazon Elastic Container Registry.
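
As a quick sketch of that idea (the response values below are placeholders), an aws-sdk client can be constructed with stubbed responses so that no real AWS calls are made:

require 'aws-sdk-core'
require 'base64'

# Stubbed ECR client for unit tests; returns canned data instead of calling AWS
ecr_client = Aws::ECR::Client.new(stub_responses: true)
ecr_client.stub_responses(:get_authorization_token, authorization_data: [{
  authorization_token: Base64.encode64('AWS:fake-password'),
  proxy_endpoint: 'https://123456789012.dkr.ecr.us-east-1.amazonaws.com'
}])

token = ecr_client.get_authorization_token.authorization_data.first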

See it in Action

We’ve put together a short (approx. 5 minute) demo of using these tools. Check it out on GitHub and take a test drive through the life cycle of Docker within AWS.


Working with cool tools like Docker and its open source SDKs is only part of the exciting work we do here at Stelligent. To take your pipeline a step further from here, you should check out mu — a microservices platform that will deploy your newly tested docker containers. You can take that epic experience a step further and become a Stelligentsia because we are hiring innovative and passionate engineers like you!

Sharing for the People: Stelligentsia Publications

Many moons ago, Jonny coined the term “Stelligentsia” to refer to our small, merry band of technologists at the time. Times have changed and the team has grown by a factor of 10, but we strive to live up to the name as all things DevOps and AWS continue to evolve.

We find the best way to do this is not only to continually learn and improve, but also to share our knowledge with others – a core value at Stelligent. We believe you are more valuable when you share knowledge with as many people as possible than when you keep it locked inside your head, giving only paying customers access to the principles and practices derived from these experiences.

Over the years at Stelligent, we’ve published many resources that are available to everyone. In this post, we do our best to list some of the blogs, articles, open source projects, refcardz, books, and other publications authored by Stelligentsia. If we listed everything, it’d likely be less useful, so we did our best to curate the publications listed in this post. We hope this collection provides a valuable resource to you and your organization.

Open Source

You can access our entire GitHub organization by going to https://github.com/stelligent/. The listing below represents the most useful, relevant repositories in our organization.

Blogs

Stelligent

Here are the top 10 blog entries from Stelligentsia in 2016:

For many more detailed posts from Stelligentsia, visit the Stelligent Blog.

Series and Categories

  • Continuous Security – Five-part Stelligent Blog Series on applying security into your deployment pipelines
  • CodePipeline – We had over 20 CodePipeline-related posts in 2016 that describe how to integrate the deployment pipeline orchestration service with other tools and services
  • Serverless Delivery – Parts 1, 2, and 3

AWS Partner Network

Linkedin

Articles

Over the years, we have published many articles across the industry. Here are a few of them.

IBM developerWorks: Automation for the People

You can find a majority of the articles in this series by going to Automation for the People. Below, we provide a list of all accessible articles:

IBM developerWorks: Agile DevOps

You can find 7 of the 10 articles in this series by going to Agile DevOps. Below, we provide a list of all 10 articles:

DZone

  • Continuous Delivery Patterns – A description and visualization of Continuous Delivery patterns and antipatterns derived from the book on Continuous Delivery and Stelligent’s experiences.
  • Continuous Integration Patterns and Antipatterns – Reviews Patterns (a solution to a problem) and Anti-Patterns (ineffective approaches sometimes used to “fix” a problem) in the CI process.

Videos and Screencasts

Collectively, we have spoken at large conferences, meetups, and other events. We’ve included video recordings of some of these along with some recorded screencasts on DevOps, Continuous Delivery, and AWS.

AWS re:Invent

How-To Videos

  • DevOps in the Cloud – DevOps in the Cloud LiveLessons walks viewers through the process of putting together a complete continuous delivery platform for a working software application written in Ruby on Rails
  • DevOps in AWS – DevOps in AWS focuses on how to implement the key architectural construct in continuous delivery: the deployment pipeline. Viewers receive step-by-step instructions on how to do this in AWS based on an open-source software system that is in production. They also learn the DevOps practices teams can embrace to increase their effectiveness.

Stelligent’s YouTube

Websites

Podcasts

  • DevOps in AWS Radio – How to create a one-click (or “no click”) implementation of software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes

Social Media

Books

  • Continuous Integration: Improving Software Quality and Reducing Risk – Through more than forty CI-related practices using application examples in different languages, readers learn that CI leads to more rapid software development, produces deployable software at every step in the development lifecycle, and reduces the time between defect introduction and detection, saving time and lowering costs. With successful implementation of CI, developers reduce risks and repetitive manual processes, and teams receive better project visibility.

Additional Resources

Stelligent Reading List – A reading list for all Stelligentsia that we update from time to time.

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “one-button everything” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

AWS CodeBuild is Here

At re:Invent 2016, AWS introduced AWS CodeBuild, a new service which compiles source code, runs tests, and produces ready-to-deploy software packages. AWS CodeBuild handles provisioning, management, and scaling of your build servers. You can either use pre-packaged build environments to get started quickly, or create custom build environments using your own build tools. CodeBuild charges by the minute for compute resources, so you aren’t paying for a build environment while it is not in use.

AWS CodeBuild Introduction

Stelligent engineer Harlen Bains has posted An Introduction to AWS CodeBuild to the AWS Partner Network (APN) Blog. In the post he explores the basics of AWS CodeBuild and then demonstrates how to use the service to build a Java application.

Integrating AWS CodeBuild with AWS Developer Tools

In the follow-up post, Deploy to Production using AWS CodeBuild and the AWS Developer Tools Suite, Stelligent CTO and AWS Community Hero Paul Duvall expands on how to integrate and automate the orchestration of CodeBuild with the rest of the AWS Developer Tools suite – including AWS CodeDeploy, AWS CodeCommit, and AWS CodePipeline – using AWS’ provisioning tool, AWS CloudFormation. He goes over the benefits of automating all the actions and stages into a deployment pipeline, while also providing an example with a detailed screencast.

In the Future

Look to the Stelligent Blog for announcements, evaluations, and guides on new AWS products.  We are always looking for engineers who love to make things work better, faster, and just get a kick out of automating everything.  If you live and breathe DevOps, continuous delivery, and AWS, we want to hear from you.

Testing Nix Packages in Docker

In this blog post, we will cover developing nix packages and testing them in docker.  We will set up a container with the proper environment to build nix packages, and then we will test build existing packages from the nixpkgs repo.  This lays the foundation for using docker to test your own nix packages.

First, a quick introduction to Nix. Nix is a purely functional package manager. Nix expressions, a simple functional language, declaratively define each package.  A Nix expression describes everything that goes into a package build action, known as a derivation.  Because it’s a functional language, it’s easy to support building variants of a package: turn the Nix expression into a function and call it any number of times with the appropriate arguments. Due to the hashing scheme, variants don’t conflict with each other in the Nix store.  More details regarding Nix packages and NixOS can be found here.

To begin, we must write the Dockerfile that will be used to build the development environment. We need to declare the base image to be used (CentOS 7), install a few dependencies in order to install nix, and clone the nixpkgs repo. We also need to set up the nix user and permissions:

FROM centos:latest

MAINTAINER "Fernando J Pando" <nando@********.com>

RUN yum -y install bzip2 perl-Digest-SHA git

RUN adduser nixuser && groupadd nixbld && usermod -aG nixbld nixuser

RUN mkdir -m 0755 /nix && chown nixuser /nix

USER nixuser

WORKDIR /home/nixuser

We clone the nixpkgs GitHub repo (this can be set to any fork/branch for testing):

RUN git clone https://github.com/nixos/nixpkgs.git

We download the nix installer (latest stable 1.11.4):

RUN curl https://nixos.org/nix/install -o nix.install.sh

We are now ready to set up the environment that will allow us to build nix packages in this container.  There are several environment variables that need to be set for this to work, and we can pull them from the environment script provided by nix installation (~/.nix-profile/etc/profile.d/nix.sh).  For easy reference, here is that file:


# Set the default profile.
if ! [ -L "$NIX_LINK" ]; then
echo "creating $NIX_LINK" >&2
_NIX_DEF_LINK=/nix/var/nix/profiles/default
/nix/store/xmlp6pyxi6hi3vazw9821nlhhiap6z63-coreutils-8.24/bin/ln -s "$_NIX_DEF_LINK" "$NIX_LINK"
fi

export PATH=$NIX_LINK/bin:$NIX_LINK/sbin:$PATH

# Subscribe the user to the Nixpkgs channel by default.
if [ ! -e $HOME/.nix-channels ]; then
echo "https://nixos.org/channels/nixpkgs-unstable nixpkgs" > $HOME/.nix-channels
fi

# Append ~/.nix-defexpr/channels/nixpkgs to $NIX_PATH so that
# <nixpkgs> paths work when the user has fetched the Nixpkgs
# channel.
export NIX_PATH=${NIX_PATH:+$NIX_PATH:}nixpkgs=$HOME/.nix-defexpr/channels/nixpkgs

# Set $SSL_CERT_FILE so that Nixpkgs applications like curl work.
if [ -e /etc/ssl/certs/ca-certificates.crt ]; then # NixOS, Ubuntu, Debian, Gentoo, Arch
export SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
elif [ -e /etc/ssl/certs/ca-bundle.crt ]; then # Old NixOS
export SSL_CERT_FILE=/etc/ssl/certs/ca-bundle.crt
elif [ -e /etc/pki/tls/certs/ca-bundle.crt ]; then # Fedora, CentOS
export SSL_CERT_FILE=/etc/pki/tls/certs/ca-bundle.crt
elif [ -e "$NIX_LINK/etc/ssl/certs/ca-bundle.crt" ]; then # fall back to cacert in Nix profile
export SSL_CERT_FILE="$NIX_LINK/etc/ssl/certs/ca-bundle.crt"
elif [ -e "$NIX_LINK/etc/ca-bundle.crt" ]; then # old cacert in Nix profile
export SSL_CERT_FILE="$NIX_LINK/etc/ca-bundle.crt"
fi

We pull out the relevant environment given a Docker base Centos7 image:

ENV USER=nixuser

ENV HOME=/home/nixuser

ENV _NIX_DEF_LINK=/nix/var/nix/profiles/default

ENV SSL_CERT_FILE=/etc/pki/tls/certs/ca-bundle.crt

ENV NIX_LINK=$HOME/.nix-profile

ENV PATH=$NIX_LINK/bin:$NIX_LINK/sbin:$PATH

ENV NIX_PATH=${NIX_PATH:+$NIX_PATH:}nixpkgs=$HOME/.nix-defexpr/channels/nixpkgs

RUN echo "https://nixos.org/channels/nixpkgs-unstable nixpkgs" > $HOME/.nix-channels

RUN ln -sfv $_NIX_DEF_LINK $NIX_LINK

Now that the environment has been set up, we are ready to install nix:

RUN bash /home/nixuser/nix.install.sh

The last step is to set the working directory to the cloned github nixpkgs directory, so testing will execute from there when the container is run:

WORKDIR /home/nixuser/nixpkgs

At this point, we are ready to build the container:

docker build . -t nand0p/nixpkgs-devel

Alternatively, you can pull this container image from Docker Hub:

docker pull nand0p/nixpkgs-devel

(NOTE: The Docker Hub image will contain a version of the nixpkgs repo from the time the container image was built. If you are testing on the bleeding edge, always build the container fresh before testing.)

Once the container is ready, we can now begin test building an existing nix package:

docker run -ti nand0p/nixpkgs-devel nix-build -A nginx

The above command will test build the nginx package inside your new docker container. It will also build all the dependencies. In order to test the package interactively, to ensure the resulting binary works as expected, the container can be launched with bash:

docker run -ti nand0p/nixpkgs-devel bash
[nixuser@aa2c8e29c5e9 nixpkgs]$

Once you have shell inside the container, you can build and test run the package:

[nixuser@aa2c8e29c5e9 nixpkgs]$ nix-build -A nginx
/nix/store/7axviwfzbsqy50zznfxb7jzfvmg9pmwx-nginx-1.10.1

[nixuser@aa2c8e29c5e9 nixpkgs]$ /nix/store/7axviwfzbsqy50zznfxb7jzfvmg9pmwx-nginx-1.10.1/bin/nginx -V
nginx version: nginx/1.10.1
built by gcc 5.4.0 (GCC)
built with OpenSSL 1.0.2h 3 May 2016
TLS SNI support enabled
configure arguments: --prefix=/nix/store/7axviwfzbsqy50zznfxb7jzfvmg9pmwx-nginx-1.10.1 --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module --with-http_image_filter_module --with-http_geoip_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_auth_request_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_stub_status_module --with-ipv6 --with-file-aio --add-module=/nix/store/vbfb8z3hgymbaz59wa54qf33yl84jii7-nginx-rtmp-module-v1.1.7-src --add-module=/nix/store/ps5s8q9v91l03972gw0h4l9iazx062km-nginx-dav-ext-module-v0.0.3-src --add-module=/nix/store/46sjsnbkx7rgwgn3pgigcfaygrs2cx30-headers-more-nginx-module-v0.26-src --with-cc-opt='-fPIE -fstack-protector-all --param ssp-buffer-size=4 -O2 -D_FORTIFY_SOURCE=2' --with-ld-opt='-pie -Wl,-z,relro,-z,now'

Thanks for reading, and stay tuned for part II, where we will modify a nix package for our custom needs, and test it against our own fork of the nixpkgs repo.

Stelligent Bookclub: “Building Microservices” by Sam Newman

At Stelligent, we put a strong focus on education and so I wanted to share some books that have been popular within our team. Today we explore the world of microservices with “Building Microservices” by Sam Newman.

Microservices are an approach to distributed systems that promotes the use of small independent services within a software solution. By adopting microservices, teams can achieve better scaling and gain autonomy, which allows them to choose their technologies and iterate independently from other teams.

Microservices are an alternative to the development of a monolithic codebase in many organizations – a codebase that contains your entire application and where new code piles on at alarming rates. Monoliths become difficult to work with as interdependencies within the code begin to develop.

As a result, a change to one part of the system could unintentionally break a different part, which in turn might lead to hard-to-predict outages. This is where Newman’s argument about the benefits of microservices really comes into play.

  • Reasons to split the monolith
    • Increase pace of change
    • Security
    • Smaller team structure
    • Adopt the proper technology for a problem
    • Remove tangled dependencies
    • Remove dependency on databases for integration
    • Less technical debt

By splitting monoliths at their seams, we can slowly transform a monolithic codebase into a group of microservices. Each service is loosely coupled and highly cohesive; as a result, changes within a microservice do not change its function with respect to other parts of the system. Each element works as a black box where only the inputs and outputs matter. When splitting a monolith, databases pose some of the greatest challenges; as a result, Newman devotes a significant chunk of the book to explaining various useful techniques to reduce these dependencies.

Ways to reduce dependencies

  • Clear, well-documented APIs
  • Loose coupling and high cohesion within a microservice
  • Enforce standards on how services can interact with each other

Though Newman’s argument for the adoption of microservices is spot-on, his explanation of continuous delivery and scaling microservices is shallow. For anyone who has a background in CD or has read “Continuous Delivery”, these sections do not deliver. For example, he takes the time to talk about machine images at great length but lightly brushes over build pipelines. The issue I ran into with scaling microservices is that Newman suggests each microservice should ideally be put on its own instance, where it exists independently of all other services. Though this is a possibility and would be nice to have, it is highly unlikely to happen in a production environment where cost is a consideration. Though he does talk about using traditional virtualization, Vagrant, Linux containers, and Docker to host multiple services on a single host, he remains platform-agnostic and general. As a result, he misses out on the opportunity to talk about services like Amazon ECS, Kubernetes, or Docker Swarm. Combining these technologies with reserved cloud capacity would be a real-world example that I feel would have added a lot to this section.

Overall Newman’s presentation of microservices is a comprehensive introduction for IT professionals. Some of the concepts covered are basic but there are many nuggets of insight that are worth reading for. If you are looking to get a good idea about how microservices work, pick it up. If you’re looking to advance your microservice patterns or suggest some, feel free to comment below!

Interested in working someplace that gives all employees an impressive book expense budget? We’re hiring.

Beyond Continuous Deployment

If I were to whittle the principle behind all modern software development approaches down to one word, that word would be: feedback. By “modern approaches”, I’m referring to DevOps, Continuous Integration (CI), Continuous Delivery (CD), Continuous Deployment, Microservices, and so on. For definitions of these terms, see the Stelligent Glossary.

It’s not just feedback: it’s fast and effective feedback. It’s incorporating that feedback into subsequent behavior. It’s amplifying this feedback back into the development process to affect future work as soon as possible.

In this post, I describe how we can move beyond continuous deployment by focusing on the principle of feedback.

Feedback

I think it’s important to define all of this as one word and one principle because it’s so easy to get lost in the weeds of tools, processes, patterns, and practices. When I’m presented with a new concept, a problem, or an approach in my work, I often ask myself: “How will this affect feedback? Will it increase fast, effective feedback or decrease it?” These questions often help guide my decision making.

Amazon Web Services (AWS) has a great talk on how Amazon embraced DevOps in their organization (well before it was called “DevOps”). As part of the talk, they show an illustration similar to the one you see below in which they describe the feedback loop between developers and customers.

Deployment Pipeline Feedback Loop (Based on: https://aws.amazon.com/devops/what-is-devops/)

They go on to describe two key points with this feedback loop:

  1. How quickly you’re able to get through the feedback loop determines your customer responsiveness and your ability to innovate.
  2. In the eyes of your customers, you’re only delivering value when you’re spending time on the left side – developing new features.

The key, as AWS describes, is that any time you spend on building the pipeline itself or hand-holding changes through this pipeline, you’re not delivering value – at least in the eyes of your customer. So, you want to maximize the time you’re spending on the left side (developing new features) and minimize the time you’re spending in the middle – while delivering high-quality software that meets the needs of your customers.

They go on to describe DevOps as anything that helps increase these feedback loops, which might include changes to the organization, process, tooling or culture.

The Vision

There’s a vision on feedback that I’ve discussed with a few people and only recently realized that I hadn’t shared with the wider software community. In many ways, I still feel like it’s “Day 1” when it comes to software delivery. As mentioned, there’s been the introduction of some awesome tools, approaches, and practices in the past few years like Cloud, Continuous Delivery, Microservices, and Serverless but we’ll be considering all of this ancient times several years from now.

Martin Fowler is fond of saying that software integration should be a “non-event”. I wholeheartedly agree, but the reality is that even in the best CI/CD environments, there are still lots of events in the form of interruptions and wait time associated with the less creative side of software development (i.e. the delivery of the software to users).

The vision I describe below is inspired by an event on Continuous Integration that I attended in 2008 that Andy Glover describes on The Disco Blog. I’m still working on the precise language of this vision, but by focusing on fast and effective feedback, it led me to what I describe here:

Beyond Continuous Deployment

Using smart algorithms, code is automatically integrated and pushed to production in nanoseconds as a developer continues to work as long as it passes myriad validation and verification checks in the deployment pipeline. The developer is notified of success or failure in nanoseconds passively through their work environment.

I’m sure there are some physics majors who might not share my “nanoseconds” view on this, but sometimes approaching problems devoid of present day limitations can lead to better future outcomes. Of course, I don’t think people will complain if it’s “seconds” instead of “nanoseconds” as it moves closer toward the vision.

This vision goes well beyond the idea of today’s notion of “Continuous Deployment” which relies on developers to commit code to a version-control repository according to the individual developer’s idiosyncrasies and schedule. In this case, smart algorithms would determine when and how often code is “committed” to a version-control repository and, moreover, these same algorithms are responsible for orchestrating it into the pipeline on its way to production.

These smart algorithms could be applied when a code block is complete or some other logical heuristic. It’d likely use some type of machine learning algorithm to determine these logical intervals, but it might equate to hundreds of what we call “commits” per developer, per day. As a developer, you’d continue writing code as these smart algorithms automatically determine the best time to integrate your changes with the rest of the code base. In your work environment, you might see some passive indicators of success or failure as you continue writing code (e.g. color changes and/or other passive notifiers). The difference is that your work environment is informing you not just based on some simple compilation, but based upon the full creation and verification of the system as part of a pipeline – resulting in an ultra fast and effective feedback cycle.

The “developer’s work environment” that I describe in the vision could be anything from what we think of as an Integrated Development Environment (IDE) to a code editor, to a developer’s environment managed in the cloud. It doesn’t really matter because the pipeline runs based on the canonical source repository as defined and integrated through the smart algorithms and orchestrated through a canonical pipeline.

Some deployment pipelines today effectively use parallel actions to increase the throughput of the system change. But, even in the most effective pipelines, there’s still a somewhat linear process in which some set of actions relies upon preceding actions to succeed in order to initiate its downstream action(s). The pipelines that would enable this vision would need to rethink the approach to parallelization in order to provide feedback as fast as I’m suggesting.

This approach will also likely require more granular microservices architectures as a means of decreasing the time it takes to provide fast and effective feedback.

In this vision, you’d continue to separate releases from deployments whereas deployments will regularly occur in this cycle, but releases would be associated with more business-related concerns. For example, you might have thousands of deployments for a single application/service in a week, but maybe only a single release during that same time.

If a deployment were to fail, it wouldn’t deploy the failure to production. It only deploys to production if it passes all of the defined checks that are partially built from machine learning and other autonomic system techniques.

Summary

By focusing on the principle of feedback, you can eliminate a lot of the “noise” when it comes to making effective decisions on behalf of your customers and your teams. Your teams need fast and effective feedback to be more responsive to customers. I shared how you can often arrive at better decisions by focusing on principles over practices. Finally, this vision goes well beyond today’s notion of Continuous Deployment to enable even more effective engineer and customer responsiveness.

Resources

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “everything-as-code” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

Automating and Orchestrating OpsWorks in CloudFormation and CodePipeline

In this post, you’ll learn how to provision, configure, and orchestrate a PHP application using the AWS OpsWorks application management service into a deployment pipeline using AWS CodePipeline that’s capable of deploying new infrastructure and code changes when developers commit changes to the AWS CodeCommit version-control repository. This way, team members can release new changes to users whenever they choose to do so: aka, Continuous Delivery.

Recently, AWS announced the integration of OpsWorks into AWS CodePipeline so I’ll be describing various components and services that support this solution including CodePipeline along with codifying the entire infrastructure in AWS CloudFormation. As part of the announcement, AWS provided a step-by-step tutorial of integrating OpsWorks with CodePipeline that I used as a reference in automating the entire infrastructure and workflow.

This post describes how to automate all the steps using CloudFormation so that you can click on a Launch Stack button to instantiate all of your infrastructure resources.

OpsWorks

“AWS OpsWorks is a configuration management service that helps you configure and operate applications of all shapes and sizes using Chef. You can define the application’s architecture and the specification of each component including package installation, software configuration and resources such as storage. Start from templates for common technologies like application servers and databases or build your own to perform any task that can be scripted. AWS OpsWorks includes automation to scale your application based on time or load and dynamic configuration to orchestrate changes as your environment scales.” [1]

OpsWorks provides a structured way to automate the operations of your AWS infrastructure and deployments with lifecycle events and the Chef configuration management tool. OpsWorks provides more flexibility than Elastic Beanstalk and more structure and constraints than CloudFormation. There are several key constructs that compose OpsWorks. They are:

  • Stack – An OpsWorks stack is the logical container defining OpsWorks layers, instances, apps and deployments.
  • Layer – There are built-in layers provided by OpsWorks such as Static Web Servers, Rails, Node.js, etc. But, you can also define your own custom layers as well.
  • Instances – These are EC2 instances on which the OpsWorks agent has been installed. There are only certain Linux and Windows operating systems supported by OpsWorks instances.
  • App – “Each application is represented by an app, which specifies the application type and contains the information that is needed to deploy the application from the repository to your instances.” [2]
  • Deployment – Runs Chef recipes to deploy the application onto instances based on the defined layer in the stack.

There are also lifecycle events that get executed for each deployment. Lifecycle events are linked to one or more Chef recipes. The five lifecycle events are setup, configure, deploy, undeploy, shutdown. Events get triggered based upon certain conditions. Some events can be triggered multiple times. They are described in more detail below:

  • setup – When an instance finishes booting as part of the initial setup
  • configure – When this event is run, it executes on all instances in all layers whenever a new instance comes in service, or an EIP changes, or an ELB is attached
  • deploy – When running a deployment on an instance, this event is run
  • undeploy – When an app gets deleted, this event is run
  • shutdown – Before an instance is terminated, this event is run

Solution Architecture and Components

In Figure 2, you see the deployment pipeline and infrastructure architecture for the OpsWorks/CodePipeline integration.

opsworks_pipeline_arch.jpg
Figure 2 – Deployment Pipeline Architecture for OpsWorks

Both OpsWorks and CodePipeline are defined in a single CloudFormation stack, which is described in more detail later in this post. Here are the key services and tools that make up the solution:

  • OpsWorks – In this solution, OpsWorks configures the operation of your infrastructure using lifecycle events and Chef
  • CodePipeline – Orchestrate all actions in your software delivery process. In this solution, I provision a CodePipeline pipeline with two stages and one action per stage in CloudFormation
  • CloudFormation – Automates the provisioning of all AWS resources. In this solution, I’m using CloudFormation to automate the provisioning for OpsWorks, CodePipeline,  IAM, and S3
  • CodeCommit – A Git repo used to host the sample application code from this solution
  • PHP – In this solution, I leverage AWS’ OpsWorks sample application written in PHP.
  • IAM – The CloudFormation stack defines an IAM Instance Profile and Roles for controlled access to AWS resources
  • EC2 – A single compute instance is launched as part of the configuration of the OpsWorks stack
  • S3 – Hosts the deployment artifacts used by CodePipeline.

Create and Connect to a CodeCommit Repository

While you can store your software code in any version-control repository, in this solution I’ll be using an AWS CodeCommit Git repository and integrating it with CodePipeline. I’m basing the code on the Amazon OpsWorks PHP Simple Demo App located at https://github.com/awslabs/opsworks-demo-php-simple-app.

To create your own CodeCommit repo, follow these instructions: Create and Connect to an AWS CodeCommit Repository. I called my CodeCommit repository opsworks-php-demo. You can use the same name, but if you name it something different, be sure to replace the sample name with your repo name in the steps that follow.

After you create your CodeCommit repo, copy the contents from the AWS PHP OpsWorks Demo app and commit all of the files.
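
If you prefer the command line, the repository setup might look something like the following sketch. It assumes the us-east-1 region, the opsworks-php-demo repository name, that your Git credentials (or the CodeCommit credential helper) are already configured, and that the sample app’s default branch is master:

# Create the CodeCommit repository (name and region are assumptions)
aws codecommit create-repository --repository-name opsworks-php-demo --region us-east-1

# Clone the AWS sample app, point it at the new CodeCommit repo, and push
git clone https://github.com/awslabs/opsworks-demo-php-simple-app
cd opsworks-demo-php-simple-app
git remote set-url origin https://git-codecommit.us-east-1.amazonaws.com/v1/repos/opsworks-php-demo
git push origin master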

Implementation

I created this sample solution by stitching together several available resources, including the CloudFormation template provided in the AWS step-by-step tutorial on integrating OpsWorks with CodePipeline and existing templates we use at Stelligent for CodePipeline. Finally, I manually created the pipeline in CodePipeline using the same step-by-step tutorial and then obtained the pipeline’s configuration using the get-pipeline command, as shown in the snippet below.

aws codepipeline get-pipeline --name OpsWorksPipeline > pipeline.json

This section describes the various resources of the CloudFormation solution in greater detail including IAM Instance Profiles and Roles, the OpsWorks resources, and CodePipeline.

Security Group

Here, you see the CloudFormation definition for the security group that the OpsWorks instance uses. The definition restricts the ingress port to 80 so that only web traffic is accepted on the instance.

    "CPOpsDeploySecGroup":{
      "Type":"AWS::EC2::SecurityGroup",
      "Properties":{
        "GroupDescription":"Lets you manage OpsWorks instances deployed to by CodePipeline"
      }
    },
    "CPOpsDeploySecGroupIngressHTTP":{
      "Type":"AWS::EC2::SecurityGroupIngress",
      "Properties":{
        "IpProtocol":"tcp",
        "FromPort":"80",
        "ToPort":"80",
        "CidrIp":"0.0.0.0/0",
        "GroupId":{
          "Fn::GetAtt":[
            "CPOpsDeploySecGroup",
            "GroupId"
          ]
        }
      }
    },

IAM Role

Here, you see the CloudFormation definition for the OpsWorks instance role. In the same CloudFormation template, there’s a definition for an IAM service role and an instance profile. The instance profile refers to OpsWorksInstanceRole defined in the snippet below.

The roles, policies, and profiles restrict the services and resources to the essential permissions they need to perform their functions.

    "OpsWorksInstanceRole":{
      "Type":"AWS::IAM::Role",
      "Properties":{
        "AssumeRolePolicyDocument":{
          "Statement":[
            {
              "Effect":"Allow",
              "Principal":{
                "Service":[
                  {
                    "Fn::FindInMap":[
                      "Region2Principal",
                      {
                        "Ref":"AWS::Region"
                      },
                      "EC2Principal"
                    ]
                  }
                ]
              },
              "Action":[
                "sts:AssumeRole"
              ]
            }
          ]
        },
        "Path":"/",
        "Policies":[
          {
            "PolicyName":"s3-get",
            "PolicyDocument":{
              "Version":"2012-10-17",
              "Statement":[
                {
                  "Effect":"Allow",
                  "Action":[
                    "s3:GetObject"
                  ],
                  "Resource":"*"
                }
              ]
            }
          }
        ]
      }
    },
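
For reference, the instance profile that wraps this role is a small resource. A minimal sketch of what that definition could look like is shown below; the logical name OpsWorksInstanceProfile matches the reference used by the OpsWorks stack later in the template, but the actual definition in the template may differ slightly:

    "OpsWorksInstanceProfile":{
      "Type":"AWS::IAM::InstanceProfile",
      "Properties":{
        "Path":"/",
        "Roles":[
          {
            "Ref":"OpsWorksInstanceRole"
          }
        ]
      }
    },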

Stack

The snippet below shows the CloudFormation definition for the OpsWorks stack. It references the IAM service role and instance profile, uses Chef 11.10 as its configuration manager, and uses Amazon Linux 2016.03 as its default operating system. This stack is the basis for the layer, app, instance, and deployment described later in this section.

    "MyStack":{
      "Type":"AWS::OpsWorks::Stack",
      "Properties":{
        "Name":{
          "Ref":"AWS::StackName"
        },
        "ServiceRoleArn":{
          "Fn::GetAtt":[
            "OpsWorksServiceRole",
            "Arn"
          ]
        },
        "ConfigurationManager":{
          "Name":"Chef",
          "Version":"11.10"
        },
        "DefaultOs":"Amazon Linux 2016.03",
        "DefaultInstanceProfileArn":{
          "Fn::GetAtt":[
            "OpsWorksInstanceProfile",
            "Arn"
          ]
        }
      }
    },

Layer

The OpsWorks PHP layer is described in the CloudFormation definition below. It references the OpsWorks stack created earlier in the same template and uses the php-app layer type. For a list of valid types, see CreateLayer in the AWS API documentation. This resource also enables auto healing, assigns public IPs, and references the previously created security group.

    "MyLayer":{
      "Type":"AWS::OpsWorks::Layer",
      "Properties":{
        "StackId":{
          "Ref":"MyStack"
        },
        "Name":"MyLayer",
        "Type":"php-app",
        "Shortname":"mylayer",
        "EnableAutoHealing":"true",
        "AutoAssignElasticIps":"false",
        "AutoAssignPublicIps":"true",
        "CustomSecurityGroupIds":[
          {
            "Fn::GetAtt":[
              "CPOpsDeploySecGroup",
              "GroupId"
            ]
          }
        ]
      },
      "DependsOn":[
        "MyStack",
        "CPOpsDeploySecGroup"
      ]
    },

OpsWorks Instance

In the snippet below, you see the CloudFormation definition for the OpsWorks instance. It references the OpsWorks layer and stack that are created in the same template. It defines the instance type as c3.large and refers to the EC2 Key Pair that you will provide as an input parameter when launching the stack.

    "MyInstance":{
      "Type":"AWS::OpsWorks::Instance",
      "Properties":{
        "LayerIds":[
          {
            "Ref":"MyLayer"
          }
        ],
        "StackId":{
          "Ref":"MyStack"
        },
        "InstanceType":"c3.large",
        "SshKeyName":{
          "Ref":"KeyName"
        }
      }
    },

OpsWorks App

In the snippet below, you see the CloudFormation definition for the OpsWorks app. It refers to the previously created OpsWorks stack and uses the current stack name as the app name, which keeps it unique. For the app type, I’m using php. For other supported types, see CreateApp.

I’m using other for the AppSource type (the documentation doesn’t make the supported AppSource types obvious, so I used the OpsWorks console to determine the options). I’m using other because my source type is CodeCommit, which isn’t currently an option in OpsWorks.

    "MyOpsWorksApp":{
      "Type":"AWS::OpsWorks::App",
      "Properties":{
        "StackId":{
          "Ref":"MyStack"
        },
        "Type":"php",
        "Shortname":"phptestapp",
        "Name":{
          "Ref":"AWS::StackName"
        },
        "AppSource":{
          "Type":"other"
        }
      }
    },

CodePipeline

In the snippet below, you see the CloudFormation definition for the Deploy stage and its DeployPHPApp action in CodePipeline. The action takes MyApp as an input artifact, which is the output artifact of the Source stage action that obtains code assets from CodeCommit (a sketch of that Source stage follows the Deploy snippet below).

The action uses the Deploy category with OpsWorks as the provider. It takes four configuration inputs: StackId, AppId, DeploymentType, and LayerId. With the exception of DeploymentType, these values are obtained as references to previously created AWS resources in this CloudFormation template.

For more information, see CodePipeline Concepts.

         {
            "Name":"Deploy",
            "Actions":[
              {
                "InputArtifacts":[
                  {
                    "Name":"MyApp"
                  }
                ],
                "Name":"DeployPHPApp",
                "ActionTypeId":{
                  "Category":"Deploy",
                  "Owner":"AWS",
                  "Version":"1",
                  "Provider":"OpsWorks"
                },
                "OutputArtifacts":[

                ],
                "Configuration":{
                  "StackId":{
                    "Ref":"MyStack"
                  },
                  "AppId":{
                    "Ref":"MyOpsWorksApp"
                  },
                  "DeploymentType":"deploy_app",
                  "LayerId":{
                    "Ref":"MyLayer"
                  }
                },
                "RunOrder":1
              }
            ]
          }
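
The Source stage isn’t shown in the snippet above. Below is a sketch of what a CodeCommit source action could look like; the parameter names RepositoryName and RepositoryBranch are illustrative and may differ in the actual template, but the output artifact name must match the MyApp input artifact consumed by the Deploy action:

          {
            "Name":"Source",
            "Actions":[
              {
                "InputArtifacts":[],
                "Name":"Source",
                "ActionTypeId":{
                  "Category":"Source",
                  "Owner":"AWS",
                  "Version":"1",
                  "Provider":"CodeCommit"
                },
                "OutputArtifacts":[
                  {
                    "Name":"MyApp"
                  }
                ],
                "Configuration":{
                  "RepositoryName":{
                    "Ref":"RepositoryName"
                  },
                  "BranchName":{
                    "Ref":"RepositoryBranch"
                  }
                },
                "RunOrder":1
              }
            ]
          }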

Launch the Stack

Click the button below to launch a CloudFormation stack that provisions the OpsWorks environment, including all the resources previously described, such as CodePipeline, OpsWorks, IAM roles, etc.

When launching the stack, you’ll select a value for the KeyName parameter from the drop-down list. Optionally, you can enter values for your CodeCommit repository name and branch if they differ from the default values.

opsworks_pipeline_cfn
Figure 3- Parameters for Launching the CloudFormation Stack

You will be charged for your AWS usage – particularly EC2, CodePipeline, and S3.

To launch the same stack from your AWS CLI, type the following (modifying the parameter values described above as needed):

aws cloudformation create-stack --stack-name OpsWorksPipelineStack --template-url https://s3.amazonaws.com/stelligent-training-public/public/codepipeline/codepipeline-opsworks.json --region us-east-1 --disable-rollback --capabilities="CAPABILITY_IAM" --parameters ParameterKey=KeyName,ParameterValue=YOURKEYNAME
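
If you launch the stack from the CLI, you can also wait for creation to finish from the command line (the stack name below matches the create-stack command above):

aws cloudformation wait stack-create-complete --stack-name OpsWorksPipelineStack --region us-east-1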

Outputs

Once the CloudFormation stack successfully launches, there’s an output for the CodePipelineURL. You can click on this value to open the running pipeline, which gets the source assets from CodeCommit and deploys the application to the OpsWorks stack and its associated resources. See the screenshot below.
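
If you’d rather retrieve the output from the CLI, a query along these lines should return the pipeline URL (assuming the stack name used earlier and an output key of CodePipelineURL):

aws cloudformation describe-stacks --stack-name OpsWorksPipelineStack --region us-east-1 --query "Stacks[0].Outputs[?OutputKey=='CodePipelineURL'].OutputValue" --output text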

cfn_opsworks_pipeline_outputs
Figure 4 – CloudFormation Outputs for CodePipeline/OpsWorks stack

Once the pipeline is complete, you can access the OpsWorks stack and click on the Public IP link for one of the instances to launch the PHP application that was deployed using OpsWorks, as shown in Figures 5 and 6 below.

opsworks_public_ip.jpg
Figure 5 – Public IP for the OpsWorks instance

 

opsworks_app_before.jpg
Figure 6 – OpsWorks PHP app once initially deployed

Commit Changes to CodeCommit

Make some visual changes to the code (e.g., your local version of index.php) and commit these changes to your CodeCommit repository to see them get deployed through your pipeline. Perform these actions from the directory where you cloned your CodeCommit repo (the directory created by your git clone command). Some example command-line operations are shown below.

git commit -am "change color to rust orange"
git push

Once these changes have been committed, CodePipeline will discover them in your CodeCommit repo and initiate a new pipeline execution. After the pipeline successfully completes, follow the same instructions for launching the application from your browser, as shown in Figure 7.
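
You can also check the pipeline’s progress from the command line. This is a sketch; replace YOUR_PIPELINE_NAME with the name of the pipeline the CloudFormation stack created (visible in the CodePipeline console or via the CodePipelineURL output):

aws codepipeline get-pipeline-state --name YOUR_PIPELINE_NAME --region us-east-1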

opsworks_app_after.jpg
Figure 7 – Application after code changes committed to CodeCommit, orchestrated by CodePipeline and deployed by OpsWorks

Sample Code

The code for the examples demonstrated in this post is located at https://github.com/stelligent/cloudformation_templates/tree/master/labs/opsworks. Let us know if you have any comments or questions @stelligent or @paulduvall.

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “everything-as-code” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

Useful Resources and References

OpsWorks Reference

Below, I’ve documented some additional information that might be useful on the OpsWorks service itself including its available integrations, supported versions and features.

  • OpsWorks supports three application source types: GitHub, S3, and HTTP.
  • You can store up to five versions of an OpsWorks application: the current revision plus four more for rollbacks.
  • When using the create-deployment command, you can target the OpsWorks stack, app, or individual instances
  • OpsWorks instances require internet access to communicate with the OpsWorks service endpoint
  • Chef supports Windows in version 12
  • You cannot mix Windows and Linux instances in an OpsWorks stack
  • To change the default OS in OpsWorks, you need to change the OS and reprovision the instances
  • You cannot change the VPC for an OpsWorks instance
  • You can add ELBs, EIPs, EBS volumes, and RDS instances to an OpsWorks stack
  • OpsWorks autoheals at the layer level
  • You can assign multiple Chef recipes to an OpsWorks layer event
  • The three instance types in OpsWorks are: 24/7, time-based, load-based
  • To initiate a rollback in OpsWorks, you use the create-deployment command
  • The following commands are available when using OpsWorks create-deployment, along with possible use cases (example CLI invocations follow this list):
    • install_dependencies
    • update_dependencies – applies operating system patches; not supported on Chef 12 stacks
    • update_custom_cookbooks – pulling down changes in your Chef cookbooks
    • execute_recipes – manually run specific Chef recipes that are defined in your layers
    • configure – service discovery or whenever endpoints change
    • setup
    • deploy
    • rollback
    • start
    • stop
    • restart
    • undeploy
  • To use multiple custom cookbook repositories in OpsWorks, enable custom cookbooks at the stack level and create a cookbook whose Berksfile lists multiple sources. Before Chef 11.10, you couldn’t use multiple cookbook repositories.
  • You can define Chef data bags for OpsWorks users, stacks, layers, apps, and instances
  • OpsWorks auto healing is triggered when the OpsWorks agent detects a loss of communication; the instance is stopped and then restarted. If that fails, manual intervention is required
  • OpsWorks will not auto heal an upgrade to the OS
  • OpsWorks auto healing is triggered by failures, not by performance monitoring
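
As referenced in the list above, here are a couple of example create-deployment invocations. These are sketches; the stack and app IDs are placeholders you’d replace with the values from your own OpsWorks stack:

# Deploy the app to the instances in the stack (IDs are placeholders)
aws opsworks create-deployment --stack-id YOUR_STACK_ID --app-id YOUR_APP_ID --command '{"Name":"deploy"}' --region us-east-1

# Pull down the latest changes to your custom Chef cookbooks
aws opsworks create-deployment --stack-id YOUR_STACK_ID --command '{"Name":"update_custom_cookbooks"}' --region us-east-1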

Acknowledgements

My colleague Casey Lee provided some of the background information on OpsWorks features. I also used several resources from AWS including the PHP sample app and the step-by-step tutorial on the OpsWorks/CodePipeline integration.