Application Auto Scaling with Amazon ECS

In this blog post, you’ll see an example of Application Auto Scaling for Amazon ECS (EC2 Container Service). Automatic scaling of the container instances in your ECS cluster has been a feature for quite some time, but until recently you were not able to scale the tasks in your ECS service with built-in AWS technology. In May 2016, Automatic Scaling with Amazon ECS was announced, which lets us configure elasticity into our deployed container services in Amazon’s cloud.

Developer Note: Jump to the “CloudFormation Examples” section if you want to get right to the code!

Why should you auto scale your container services?

Efficient and effective scaling of your microservices is the core reason to auto scale your containers. If your primary goals include fault tolerance or elastic workloads, then combining cloud-native auto scaling with infrastructure as code is the key to success. With AWS Application Auto Scaling, you can quickly configure elasticity into your architecture in a repeatable and testable way.

Introducing CloudFormation Support

For the first few months, this new feature was not available in AWS CloudFormation. Configuration was either a manual process in the AWS Console or a series of API calls made from the CLI or one of Amazon’s SDKs. As of August 2016, we can now manage this configuration easily using CloudFormation.

The resource types you’re going to need to work with are:

  • AWS::ApplicationAutoScaling::ScalableTarget
  • AWS::ApplicationAutoScaling::ScalingPolicy
  • AWS::CloudWatch::Alarm
  • AWS::IAM::Role

The ScalableTarget and ScalingPolicy are the new resources that configure how your ECS Service behaves when an Alarm is triggered. In addition, you will need to create a new Role that grants the Application Auto Scaling service access to describe your CloudWatch Alarms and to modify your ECS Service, such as increasing your Desired Count.

CloudFormation Examples

The below examples were written for AWS CloudFormation in the YAML format. You can plug these snippets directly into your existing templates with minimal adjustments necessary. Enjoy!

Step 1: Implement a Role

These permissions were gathered from various sources in the AWS documentation.

ApplicationAutoScalingRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Statement:
      - Effect: Allow
        Principal:
          Service:
          - application-autoscaling.amazonaws.com
        Action:
        - sts:AssumeRole
    Path: "/"
    Policies:
    - PolicyName: ECSBlogScalingRole
      PolicyDocument:
        Statement:
        - Effect: Allow
          Action:
          - ecs:UpdateService
          - ecs:DescribeServices
          - application-autoscaling:*
          - cloudwatch:DescribeAlarms
          - cloudwatch:GetMetricStatistics
          Resource: "*"

Step 2: Implement some alarms

The below alarm will initiate scaling based on container CPU Utilization.

AutoScalingCPUAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: Containers CPU Utilization High
    MetricName: CPUUtilization
    Namespace: AWS/ECS
    Statistic: Average
    Period: '300'
    EvaluationPeriods: '1'
    Threshold: '80'
    AlarmActions:
    - Ref: AutoScalingPolicy
    Dimensions:
    - Name: ServiceName
      Value:
        Fn::GetAtt:
        - YourECSServiceResource
        - Name
    - Name: ClusterName
      Value:
        Ref: YourECSClusterName
    ComparisonOperator: GreaterThanOrEqualToThreshold
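
The original post defines only this high-CPU alarm. Because the step policy in Step 4 is invoked by whichever alarm fires, and its step intervals are offsets from that alarm’s threshold, a common pattern is to pair it with a companion low-CPU alarm so the scale-in step can actually trigger. Below is a hedged sketch of such an alarm (not part of the original post); it reuses the same placeholder YourECSServiceResource and YourECSClusterName references, and the 20% threshold is chosen purely for illustration.

AutoScalingCPULowAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: Containers CPU Utilization Low
    MetricName: CPUUtilization
    Namespace: AWS/ECS
    Statistic: Average
    Period: '300'
    EvaluationPeriods: '1'
    Threshold: '20'
    AlarmActions:
    # Assumption: points at the same step scaling policy defined in Step 4
    - Ref: AutoScalingPolicy
    Dimensions:
    - Name: ServiceName
      Value:
        Fn::GetAtt:
        - YourECSServiceResource
        - Name
    - Name: ClusterName
      Value:
        Ref: YourECSClusterName
    ComparisonOperator: LessThanOrEqualToThreshold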

Step 3: Implement the ScalableTarget

This resource attaches Application Auto Scaling to your ECS Service and sets the boundaries within which it can operate. Other than MinCapacity and MaxCapacity, these settings are essentially fixed when used with ECS.

AutoScalingTarget:
  Type: AWS::ApplicationAutoScaling::ScalableTarget
  Properties:
    MaxCapacity: 20
    MinCapacity: 1
    ResourceId:
      Fn::Join:
      - "/"
      - - service
        - Ref: YourECSClusterName
        - Fn::GetAtt:
          - YourECSServiceResource
          - Name
    RoleARN:
      Fn::GetAtt:
      - ApplicationAutoScalingRole
      - Arn
    ScalableDimension: ecs:service:DesiredCount
    ServiceNamespace: ecs
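
For reference, the Fn::Join above resolves to a ResourceId of the form service/<cluster-name>/<service-name>. With hypothetical cluster and service names, the literal equivalent would look like this:

ResourceId: service/my-ecs-cluster/my-ecs-service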

Step 4: Implement the ScalingPolicy

This resource defines the actual scaling behavior: when to scale out or in, and by how much. Pay close attention to the StepAdjustments in the StepScalingPolicyConfiguration, as the documentation on them is vague.

In the example below, we scale out by 2 containers when the metric is at or above the alarm threshold and scale in by 1 container when it is below the threshold. Take special note of how MetricIntervalLowerBound and MetricIntervalUpperBound work together: they are offsets relative to the threshold of the alarm that invoked the policy, and when unspecified they are effectively positive infinity for the upper bound and negative infinity for the lower bound. Finally, note that these comparisons are made against aggregated metrics, meaning the Average, Minimum or Maximum of your combined fleet of containers.

AutoScalingPolicy:
  Type: AWS::ApplicationAutoScaling::ScalingPolicy
  Properties:
    PolicyName: ECSScalingBlogPolicy
    PolicyType: StepScaling
    ScalingTargetId:
      Ref: AutoScalingTarget
    StepScalingPolicyConfiguration:
      AdjustmentType: ChangeInCapacity
      Cooldown: 60
      MetricAggregationType: Average
      StepAdjustments:
      - MetricIntervalLowerBound: 0
        ScalingAdjustment: 2
      - MetricIntervalUpperBound: 0
        ScalingAdjustment: -1
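
If you need more aggressive scaling as the metric climbs further past the threshold, StepAdjustments accepts multiple steps. The fragment below is a sketch with hypothetical numbers (not from the original post) showing how the bounds, as offsets from the 80% alarm threshold, carve the metric range into non-overlapping intervals:

StepScalingPolicyConfiguration:
  AdjustmentType: ChangeInCapacity
  Cooldown: 60
  MetricAggregationType: Average
  StepAdjustments:
  # Below 80% CPU (offset < 0): scale in by 1
  - MetricIntervalUpperBound: 0
    ScalingAdjustment: -1
  # 80% to 90% CPU (offset 0 to 10): scale out by 2
  - MetricIntervalLowerBound: 0
    MetricIntervalUpperBound: 10
    ScalingAdjustment: 2
  # 90% CPU and above (offset >= 10): scale out by 4
  - MetricIntervalLowerBound: 10
    ScalingAdjustment: 4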

Wrapping It Up

Amazon Web Services continues to provide excellent resources for automation, elasticity and virtually unlimited scalability. As you can see, with a couple of solid examples in hand you can very quickly build that on-demand elasticity and inherent fault tolerance into your services. After you have your tasks auto scaling, I recommend you check out the documentation on scaling your container instances as well, to provide the same benefits to your ECS cluster itself.

Deploying Microservices? Let mu help!

Support for ECS Application Auto Scaling is coming soon to Stelligent mu, which offers the fastest and most comprehensive platform for deploying microservices as containers.

Want to learn more about mu from its creators? Check out the DevOps on AWS Radio podcast or find more posts on our blog.

Additional Resources

Here are some of the supporting resources discussed in this post.

We’re Hiring!

Like what you’ve read? Would you like to join a team on the cutting edge of DevOps and Amazon Web Services? We’re hiring talented engineers like you. Click here to visit our careers page.

 

 

Stelligent is an APN Launch Partner for the AWS Management Tools Addition to the AWS Service Delivery Program

Stelligent, an AWS Partner Network (APN) Advanced Consulting Partner specializing exclusively in DevOps Automation on the Amazon Web Services (AWS) Cloud, announced that it is a launch partner for four additional services in the AWS Service Delivery Program: AWS CloudFormation, AWS CloudTrail, AWS Config, and Amazon EC2 Systems Manager. This means that Stelligent has demonstrated a successful track record of delivering these specific AWS services and the ability to provide expertise in a particular service or skill area.


“The ability to deploy high-quality code in hours, not months, is something that we can help any company – including many in the Fortune 500 – achieve,” said Paul Duvall, Stelligent CTO and co-founder. “Using AWS Management Tools along with other AWS services, we can drastically reduce our customers’ development times, while increasing the rate at which they can introduce new features.”

The AWS Service Delivery Program highlights APN Partners with a track record of delivering specific AWS services to customers. Attaining an AWS Service Delivery Distinction allows partners to differentiate themselves by showcasing to AWS customers areas of specialization.

The four AWS Management Tools included in the AWS Service Delivery Program are (source: AWS):

  • AWS CloudFormation – Create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.
  • AWS CloudTrail – Track user activity and API usage
  • AWS Config – Record and evaluate configurations of your AWS resources
  • Amazon EC2 Systems Manager – Easily configure and manage Amazon EC2 and on-premises systems

Stelligent uses these AWS Management Tools in creating DevOps Automation solutions for customers so they can release new features to users, on demand, and reduce the costs of delivering software by reducing overall lead time. Resulting benefits include the following:

● the ability to release software with every successful change
● significant reduction of cycle time
● increased confidence in what is deployed
● increase in ability to experiment
● reduction of overall costs

“We are proud to work with AWS to deliver DevOps Automation solutions to our customers, allowing them to release new features to users whenever they choose,” said Duvall. “Being a launch partner in the AWS Management Tools addition to the AWS Service Delivery Program means a lot to us — this is what we live and breathe, and we do so exclusively for our customers targeting AWS. We obsess over customers, and we obsess over applying what we believe are essential practices to achieve the aims of continuous delivery. This acknowledgement will help us reach still more customers who value that passion.”

About Stelligent
Stelligent is an APN Advanced Consulting Partner and holds the AWS DevOps Competency. As a technology services company that provides DevOps Automation on the Amazon Web Services (AWS) Cloud, we aim for “one-click deployment.” Our reason for being is to help our customers gain the ability to continuously deploy their software, when they want to, and with confidence. We’ve been providing DevOps Automation solutions on AWS since 2009. Follow @Stelligent on Twitter. Learn more at http://www.stelligent.com.

DevOps on AWS Radio: AWS CodePipeline and Amazon Alexa (Episode 11)

In this episode, Paul Duvall and Brian Jakovich cover recent DevOps on AWS news and discuss how to use AWS CodePipeline to deploy an Amazon Alexa skill.

Here are the show notes:

DevOps on AWS News

Episode Topics

  1. What was the “Use AWS CodePipeline to Deploy Amazon Alexa Skills” blog post about?
  2. What is AWS CodePipeline and what are its benefits? What are alternatives to using CodePipeline?
  3. How do you create a pipeline in CodePipeline?
  4. Which AWS services does CodePipeline integrate with? How about non-AWS tools and services?
  5. How do you automate the provisioning of CodePipeline?
  6. Describe Amazon Alexa. What kinds of things can you do with Alexa? Which devices does it support?
  7. Describe Lambda.
  8. How did you orchestrate CodePipeline to deploy a Lambda function?
  9. How did you configure Alexa to run the Lambda function?
  10. How can listeners learn more about this solution?

Additional Resources

About DevOps on AWS Radio

On DevOps on AWS Radio, we cover topics around applying DevOps principles and practices such as Continuous Delivery on the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners into our roundtables and speak with engineers who’ve recently published on our blog and we’ll also be reaching out to the wider DevOps on AWS community to get their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, Containers, Serverless, etc.)
    • Storage (e.g. S3, EBS, etc.)
    • Database and Data (RDS, DynamoDB, etc.)
  • Organizational and Team Structures and Practices
  • Team and Organization Communication and Collaboration
  • Cultural Indicators
  • Version control systems and processes
  • Deployment Pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service Architectures – e.g. Microservices
  • Automation of Build and deployment processes
  • Automation of testing and other verification approaches, tools and systems
  • Automation of security practices and approaches
  • Continuous Feedback systems
  • Many other Topics…

Screencast: Full-Stack DevOps on AWS Tool

Amazon ECS (EC2 Container Service) provides an excellent platform for deploying microservices as containers. However, there is a significant learning curve for developers to get their microservices deployed. mu is a full-stack DevOps on AWS tool that simplifies and orchestrates your software delivery lifecycle (environments, services, and pipelines). It is open source and available at http://getmu.io/. You can click the YouTube link below (we’ve also provided a transcript of this screencast in this post).

Let’s demonstrate using mu to deploy a Spring Boot application to ECS. So, we see here’s our microservice (and) we’ve already got our Dockerfile set up. We see that we’ve got our Gradle file so that we can compile the code, and then we see the various classes necessary for the service; we’re using Liquibase for managing our database, so that definition file is there; we’ve got some unit tests defined. Now I’ll go ahead and take a look at the Dockerfile, and we see that it’s pretty straightforward: it builds from the Java image; all it does is take the jar and add it, and then for the entry point, it just runs java -jar. So, we run mu init and that’s going to create two files for us: it’s going to create a mu.yml file which we see here, and we need to add some stuff to the file it generates – specifically, we want to specify Java 8 for the (AWS) CodeBuild image; then we edit the buildspec file and tell it to use Gradle build for the build command. Buildspec is a standard CodeBuild file for defining your project, so you see our two new files: buildspec.yml and mu.yml. We go ahead and commit those (and) push those up to our source repository (in this case we’re using GitHub), and then we run the command mu pipeline up, and what that does is create a CloudFormation stack for managing our CodePipeline and our CodeBuild projects. It’s going to prompt us for the GitHub token; this is the access token that you’ve defined inside GitHub so that CodePipeline can access your repository. We provide that token, and then we see that it’s creating various things like IAM Roles for CodeBuild to do its business and (create) the actual CodeBuild project that’s going to be used; there are quite a few different CodeBuild projects for building and testing and deploying. So now we run the command mu service show, and what that’s going to show us is that there is a pipeline now created; we see it has started on the first step.

Let’s go ahead and open up (AWS CodePipeline) in the console and we see that, sure enough, (the Source stage of our pipeline) is running, and then we see there’s a Build stage with the Artifact and Image actions in it – that’s where we compile and build our Docker image; there’s an Acceptance stage and then a Production stage, both of which do a deployment and then testing. Jumping back over here to the command line, we can run mu service show and we see that the Source action is currently running; that’s just going to take a minute before we trigger the Artifact action of the Build stage, and that’s where we’re actually doing the compiling. The command we can run here (is) mu pipeline logs -f, and we add the -f so that we follow the logs – what happens is all of the output from CodeBuild gets sent to CloudWatch Logs, and so the mu pipeline logs command allows us to tail CloudWatch Logs and watch the activity in real time. We see that our Maven artifacts are being resolved for dependencies and then we see “build success”, so our artifact has been built and our unit tests have passed. It’s just going to take a second here for CodeBuild to go ahead and upload the artifact and then trigger the pipeline to move to the next stage, which is our Image (action). In the Image (action), what’s going to happen is it’s going to run Docker build against our artifact (and) create a Docker image; it’s then going to push that image up to ECR. It’s also going to create that ECR repository, if it doesn’t exist yet, through a CloudFormation stack. So we go ahead and run mu pipeline logs and we can see the Image action running; we see we’re pulling down the Docker base image (that Java image), and then there’s our docker build, and now we’re pushing back up to ECR. It’ll take just a minute to upload that new Docker image with our Spring Boot application on it, and that’s completed successfully.

So now if we jump back over to mu service show, give it a second and we should see that we progress beyond the Build stage and into the Acceptance stage. In the Acceptance stage there will be two actions: first a deploy action that’s going to use the image that was created and create a new ECS service for it, and that’s what we see going on here. What you’ll notice in just a second, right there, is that first it’s making sure the environment is up to date: the ECS cluster and the auto scaling group for it and all the instances for ECS; it’s making sure that’s up to date; it’s also then updating any databases that are defined and then finally deploying the service. What we see here is a CREATE_IN_PROGRESS – the status of the deployment to the Dev environment is in progress, so there’s a CloudFormation stack being deployed. I go ahead and run this command mu service logs; just like there are logs for the pipeline, all the logs for your service are sent to CloudWatch Logs, so here we’re watching the logs for our service starting up; these are the Spring Boot output messages. If you’ve used Spring Boot before it should look familiar, but this is very helpful for troubleshooting an application, being able to see its logs in real time.

So the deployment is complete – (based on) the logs we saw that it is up – so we’re going to go and look at the environment here. We do mu env list. We see the Dev environment and when we show it, we can see the EC2 instance associated with it and we also see the base URL for the ELB so I’m gonna go ahead and run a curl command against that – adding the bananas URI at the end of it and pipe that to jq just to make it look pretty and sure enough, there we see we get a successful response. So, our app has been deployed successfully and we see that we are in the Approval stage and it’s waiting for approvals so we’ve completed the Acceptance stage.

Let’s take a look at CloudFormation to see what mu has created for us. So, we see there are several (CloudFormation) stacks over here. Remember, everything that mu does is managed through CloudFormation; there’s no other database or anything behind mu – it’s just native AWS resources. So, for example, if we look at the VPC there for the dev environment, we see all the things you expect to see: routes, Network ACLs, subnets, there’s a NAT gateway defined, the VPC itself. Then if we go to the cluster, we see the Auto Scaling Groups for the ECS container instances, we see the load balancer – the application load balancer that’s defined for the environment – all the necessary security groups, and then there are some scaling policies to scale in or out on that auto scaling group based on how many tasks are currently running. This is the service – the banana service has been deployed to the (dev environment); we see the IAM roles, Task Definition and whatnot for the service.

Now, one thing we didn’t do previously was any testing, so what you can do is go ahead and create this file called buildspec-test.yml, and anything that you define in this test YAML will be run as a test action after the deployment is made; it’s a standard CodeBuild buildspec file. In this case we’re going to use a tool called Newman. Newman is a Node.js command-line tool for running Postman collections. Postman is a tool for testing RESTful APIs. So, those are our Postman collections, and we’re configuring this to run Newman for our tests. We’ll have to make a change to mu.yml – we have to configure the acceptance environment to use a Node.js CodeBuild image, so that’s what we’ve done there. With those two changes we should be able to run mu pipeline up; that will update the CodeBuild project to use the Node.js image, and then once our pipeline is up to date we’ll be able to commit our change, which is that buildspec-test file. Once we push that up, the pipeline will start running again; this time tests will actually run and we’ll get some assurance that the code is ready to go on to production. So we make that change, push it, and then if we look at the service we’ll see that the Source action has triggered, and we’ll just let this run for a while. The whole pipeline is going to have to run, but things like the Artifact and Image actions won’t really cause any change because we didn’t actually change the source code – those will go ahead and run anyway. So we are now in the Image stage: we’re taking the new jar file, building a Docker image from it, and pushing that up to ECR. We’ve now hit the Deploy stage, so the latest Docker image is being used for the ECS service.

Once that completes, we will run mu pipeline logs again to watch the CodeBuild project doing the testing, and here we go: we see the testing is running. It’s going to run npm install to install our dependencies, namely the Newman tool, and then we see some results. I see status code 200 – that looks good. Under the fail column, I see a bunch of zeros, which looks great, and then I see “build success”, so not only has our application been deployed to ECS, but we’ve also been able to test it, and now those tests will be run as a part of every execution of the pipeline, as part of every commit. Now, the other thing that we’ll recognize here is that this application we built manages our inventory of bananas, but what it doesn’t have is a real database behind it; we’re just using the H2 database that is available with Java. So let’s go ahead and make a change here: let’s configure mu to actually have a real database. With mu that’s as easy as defining a database – you give it a name, you could specify other things like a type and whatnot, but it will default to Aurora RDS – and then you’re going to want to pass some environment variables so we can pass the database connection information to our Spring app. Since we’re using Spring data source, it’s just a matter of defining these three environment variables, and you’ll notice that the username, password, and endpoint are not actually in the mu.yml file; we don’t want those things in there. What will happen is mu will create those for us and then make them available as CloudFormation parameters that we can reference via the dollar-sign notation that CloudFormation offers. OK, so now that we’ve got that change made, we go and add our new file, commit the change, and push it up, which should trigger a new run of the pipeline. Again we’ve got to go through all those earlier actions to ultimately get to the deploy action where the RDS database will be created – and again, you can choose any RDS database type, but we’re using Aurora by default.

Now, one question is: how does the password get defined? The way this works is we use a service that AWS has called Parameter Store, which manages secrets. When mu starts up it checks if there’s a password defined, and if there’s not, it generates a random 16-character string, adds it to Parameter Store, and then later on when it deploys the service it pulls it out of Parameter Store and passes it in as an environment variable. Those parameters are encrypted with KMS – the AWS Key Management Service – so they are secure.

OK, so looking at the logs now from the service, these are our Spring Boot startup logs. What I’m expecting to see is that rather than seeing H2 as the dialect… there you go, we see MySQL is the dialect for the connection. That tells me that Spring Boot detected our environment variables and recognized that we are in fact trying to talk to MySQL – let me go and highlight that here. So, this tells us that our application is in fact connecting to a MySQL database which is provided by RDS and wired up via mu. We can look at our service again and watch the pipeline run, and we can get some confirmation that we didn’t break anything, because we have those tests as a part of our pipeline now. So we’ll let this go and – our tests are running. Once that completes, we will have a good feeling that this change is ready to promote to production.

Well thanks for watching and check out https://getmu.io to learn more.

Use AWS CodePipeline to Deploy Amazon Alexa Skills

If you’ve done any experimentation with the Amazon Alexa voice service, you’ve probably learned that you can use AWS Lambda to write functions that can be executed from Alexa. As a developer, what’s exciting about this is that you can create your own custom Alexa skills to perform anything suited for voice-based computing.

You’ll probably also learn that there are numerous manual actions for integrating the various tools and code to deploy an Alexa skill. Once you create the Lambda function, you need to create a zip file with any packages that the function requires and upload it to Amazon S3. Moreover, you need to store code assets somewhere and then orchestrate the build and deployment of the function(s)  that are run by your Alexa skill. Finally, you need to configure the Alexa skill itself using the Alexa Skills Kit (ASK).

In this post, you will learn how to orchestrate the deployment of an Alexa skill (written in AWS Lambda) using the AWS Developer Tools suite – including AWS CodeCommit, AWS CodeBuild, and AWS CodePipeline. The provisioning of all of the AWS resources is defined in an AWS CloudFormation template. By automating many of the actions and stages into a deployment pipeline, you can release changes to users in production whenever you choose to do so. You’ll see an example that walks you through the deployment process.

Figure 1 shows this deployment pipeline in action.


Figure 1 – Deployment Pipeline in CodePipeline to deploy a Lambda function

Prerequisites

Here are the prerequisites for this solution:

Architecture and Implementation

All code assets are stored in AWS CodeCommit. We define a deployment pipeline in AWS CodePipeline to orchestrate the solution by configuring a Source action for CodeCommit, a build action with CodeBuild, and deploy actions for a CloudFormation changeset. The provisioning of AWS resources is defined in CloudFormation.

In Figure 2, you see the architecture for provisioning an infrastructure that launches a deployment pipeline to orchestrate the build and deployment of a Lambda function. You can click on the image to launch the template in CloudFormation Designer.

Figure 2 – CloudFormation Template for provisioning AWS resources

The components of this solution are described in more detail below:

  • AWS CloudFormation – All of the resource generation of this solution is described in CloudFormation which is a declarative code language that can be written in JSON or YAML
  • AWS CodePipeline – The CodePipeline stages and actions are defined in a CloudFormation template. This includes CodePipeline’s integration with CodeCommit, CodeBuild, and CloudFormation (For more information, see Action Structure Requirements in AWS CodePipeline).
  • AWS CodeCommit – Creates a CodeCommit Git repository using the AWS::CodeCommit::Repository resource
  • AWS CodeBuild – Creates a CodeBuild project using the AWS::CodeBuild::Project resource to package and store the Lambda function
  • AWS IAM – An Identity and Access Management (IAM) Role is provisioned using the AWS::IAM::Role resource which defines the resources that the pipeline, CloudFormation, and other resources can access.
  • AWS SNS – Provisions a Simple Notification Service (SNS) Topic using the AWS::SNS::Topic resource. The SNS topic is used by the CodeCommit repository for notifications.
  • Serverless Application Model (SAM) – “The AWS Serverless Application Model (AWS SAM) extends AWS CloudFormation to provide a simplified way of defining the Amazon API Gateway APIs, AWS Lambda functions, and Amazon DynamoDB tables needed by your serverless application.” [Source]
  • Amazon Alexa – the voice service that powers Amazon Echo, provides capabilities, or skills, that enable users to interact with devices in a more intuitive way using voice.
  • AWS Lambda – The serverless function run by the Alexa skill.

The index.js file stored in CodeCommit is based on the alexa-skill-kit-sdk-factskill blueprint. As part of the deployment pipeline, the Node.js function gets packaged by CodeBuild and stored in S3. In the Deploy stage, it generates a CloudFormation template based on the Serverless Application Model and executes a change set on this template. The purpose of the generated template is to provision the Lambda function from the source in S3. Figure 3 illustrates how the Alexa skill interfaces with Lambda.


Figure 3 – Alexa Skills Kit and Lambda 
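
To make the packaging step in the Build stage more concrete, here is a hedged sketch of the kind of buildspec CodeBuild could use for this: it installs the Node.js dependencies, then runs aws cloudformation package to upload the function code to S3 and emit the template-export.json consumed by the Deploy stage. The exact buildspec in the sample repository may differ, and S3_BUCKET is a hypothetical environment variable.

version: 0.2
phases:
  install:
    commands:
    # Install the function's Node.js dependencies
    - npm install
  build:
    commands:
    # Package the SAM template: uploads local code to S3 and writes out the packaged template
    - aws cloudformation package --template-file sam-template.yml --s3-bucket $S3_BUCKET --output-template-file template-export.json --use-json
artifacts:
  files:
  - template-export.json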

CloudFormation Template

In this section, I’ll highlight a few code snippets from the CloudFormation template that automates the provisioning of the AWS Developer Tools stack along with other resources including S3, IAM, and SNS.

IAM Role

There are several IAM roles that are provisioned in the CloudFormation template. The code shown in this section is for an IAM role that is used by the AWS Serverless Application Model for deploying the Lambda function run by the Alexa skill.

  LambdaTrustRole:
    Type: AWS::IAM::Role
    Description: Creating service role in IAM for AWS Lambda
    Properties:
      AssumeRolePolicyDocument:
        Statement:
        - Action: sts:AssumeRole
          Effect: Allow
          Principal:
            Service:
            - lambda.amazonaws.com
      ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      Path: "/"
      Policies:
      - PolicyDocument:
          Statement:
          - Action:
            - logs:CreateLogGroup
            - logs:CreateLogStream
            - logs:PutLogEvents
            Effect: Allow
            Resource: "*"
          Version: '2012-10-17'
        PolicyName: MyLambdaWorkerPolicy
      RoleName: !Ref AWS::StackName

CodePipeline

The CodePipeline pipeline CloudFormation snippet shown below defines the three stages and four actions that orchestrate the deployment of the Lambda function used by the Alexa skill. The pipeline provisions a CodeCommit source action called Source; this repository is provisioned as part of the CloudFormation template. The TemplatePath: alexa-BuildArtifact::template-export.json property in the GenerateChangeSet deploy action points to the SAM file produced by the PackageExport build action, which packages the Lambda function and stores it in S3. SAM transforms this file into a CloudFormation template whose change set is then executed by the ExecuteChangeSet action.

  CodePipelineStack:
    Type: AWS::CodePipeline::Pipeline
    DependsOn:
    - CodeBuildWebsite
    - LambdaTrustRole
    Properties:
      RoleArn:
        Fn::Join:
        - ''
        - - 'arn:aws:iam::'
          - Ref: AWS::AccountId
          - ":role/"
          - Ref: CodePipelineRole
      Stages:
      - Name: Source
        Actions:
        - InputArtifacts: []
          Name: Source
          ActionTypeId:
            Category: Source
            Owner: AWS
            Version: '1'
            Provider: CodeCommit
          OutputArtifacts:
          - Name: MyApp
          Configuration:
            BranchName:
              Ref: RepositoryBranch
            RepositoryName:
              Ref: AWS::StackName
          RunOrder: 1
      - Name: Build
        Actions:
        - InputArtifacts:
          - Name: MyApp
          Name: PackageExport
          ActionTypeId:
            Category: Build
            Owner: AWS
            Version: '1'
            Provider: CodeBuild
          OutputArtifacts:
          - Name: alexa-BuildArtifact
          Configuration:
            ProjectName:
              Ref: CodeBuildWebsite
          RunOrder: 1
      - Name: Deploy
        Actions:
        - InputArtifacts:
          - Name: alexa-BuildArtifact
          Name: GenerateChangeSet
          ActionTypeId:
            Category: Deploy
            Owner: AWS
            Version: '1'
            Provider: CloudFormation
          OutputArtifacts: []
          Configuration:
            ActionMode: CHANGE_SET_REPLACE
            ChangeSetName: pipeline-changeset
            RoleArn:
              Fn::GetAtt:
              - CloudFormationTrustRole
              - Arn
            Capabilities: CAPABILITY_IAM
            StackName:
              Fn::Join:
              - ''
              - - ""
                - Ref: AWS::StackName
                - "-"
                - Ref: AWS::Region
                - ""
            TemplatePath: alexa-BuildArtifact::template-export.json
          RunOrder: 1
        - ActionTypeId:
            Category: Deploy
            Owner: AWS
            Provider: CloudFormation
            Version: 1
          Configuration:
            ActionMode: CHANGE_SET_EXECUTE
            ChangeSetName: pipeline-changeset
            StackName:
              Fn::Join:
              - ''
              - - ""
                - Ref: AWS::StackName
                - "-"
                - Ref: AWS::Region
                - ""
          InputArtifacts: []
          Name: ExecuteChangeSet
          OutputArtifacts: []
          RunOrder: 2
      ArtifactStore:
        Type: S3
        Location: !Ref ArtifactBucket

Serverless Application Model

With the AWS Serverless Application Model (SAM), you can simplify the process of packaging a serverless application and deploying it with CloudFormation. The sam-template.yml below uses SAM to define an Alexa skill function. Through the generate and execute change set actions defined in the CodePipeline provisioning above, this file is transformed into a CloudFormation template. Fn::ImportValue pulls the export value from the main CloudFormation template that provisions this solution.

AWSTemplateFormatVersion: 2010-09-09
Transform:
- AWS::Serverless-2016-10-31

Resources:
  AlexaSkillFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs4.3
      Role:
        Fn::ImportValue:
          !Join ['-', [!Ref 'AWS::StackName', 'LambdaTrustRole']]
      Events:
        AlexaSkillEvent:
          Type: AlexaSkill
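
The Fn::ImportValue above only works if the main template exports the role’s ARN under a matching name. A minimal sketch of what that Outputs entry could look like in the main template (assuming the LambdaTrustRole resource shown earlier) is:

Outputs:
  LambdaTrustRole:
    Description: ARN of the IAM role assumed by the Alexa skill's Lambda function
    Value: !GetAtt LambdaTrustRole.Arn
    Export:
      Name: !Join ['-', [!Ref 'AWS::StackName', 'LambdaTrustRole']]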

Costs

Since costs can vary as you use certain AWS services and other tools, you can see a cost breakdown and some sample scenarios to give you an idea of what your monthly spend might look like. Note that this will depend on your unique environment and deployment; the AWS Cost Calculator can assist in establishing cost projections.

  • CloudFormation – No additional cost.
  • CodeBuild – CodeBuild charges per minute used. It comes with 100 minutes per month at no charge. For a simple execution of this demo, you can stay within the limits of the AWS Free Tier – please read about the Free Tier here. For more information, see AWS CodeBuild pricing.
  • CodeCommit – If used on a small project of less than six users, there’s no additional cost. See AWS CodeCommit Pricing for more information.
  • CodePipeline – Customers can create new pipelines without incurring any charges on that pipeline for the first thirty calendar days. After that period, the new pipelines will be charged at the existing rate of $1 per active pipeline per month. For more information, see AWS CodePipeline pricing.
  • Lambda – Considering you likely won’t have over 1M requests for this particular solution, there’s no cost. The Lambda free tier includes 1M free requests per month and 400,000 GB-seconds of compute time per month. For more information, see AWS Lambda Pricing.
  • Alexa – There is no direct cost associated with using the Alexa service. If you’re using an Amazon Echo device, there is a one-time payment for the hardware and you’re charged every time your Lambda function is run (once it exceeds 1M free requests per month).
  • IAM – No additional cost.
  • SNS – Considering you likely won’t have over 1 million Amazon SNS requests for this particular solution, there’s no cost. For more information, see AWS SNS Pricing.

Deployment Steps

There are three main steps in launching this solution: preparing an AWS account, launching the stack, and testing the deployment. Each is described in more detail in this section. Please note that you are responsible for any charges incurred while creating and launching your solution.

Step 1. Prepare an AWS Account

  1. If you don’t already have an AWS account, create one at http://aws.amazon.com by following the on-screen instructions. Part of the sign-up process involves receiving a phone call and entering a PIN using the phone keypad. Be sure you’ve signed up for the CloudFormation service.
  2. Use the region selector in the navigation bar of the console to choose the Northern Virginia (us-east-1) region

Step 2. Launch the Stack

Click on the “Launch Stack” button below to launch the CloudFormation stack. Before you launch the stack, review the architecture, configuration, and other considerations discussed in this post. To download the template, click here.

Time to deploy: Approximately 5 minutes

The template includes default settings that you can customize by following the instructions in this post.

Step 3. Test the Deployment

To test the deployment, you will need to configure the Alexa skill using the Amazon Developer Portal. You can use the Amazon Alexa Developer portal, a tool called Echosim, or an actual Amazon Echo device to test your skill.

Upload Code Assets to CodeCommit

  1. Once the CloudFormation stack is complete, select the checkbox next to the stack and go to the Outputs tab
  2. Click on the PipelineUrl link to launch the CodePipeline pipeline. The Source action will be in a failed state.
  3. From the pipeline, click on the CodeCommit link and copy the command under “Clone your repository to your local computer and start working on code” to your clipboard.
  4. From your Terminal, run the copied command on a computer where you have a Git client configured.
  5. Copy all of the files from your locally-cloned Git repository (from https://github.com/stelligent/devops-essentials/tree/master/samples/serverless/alexa) to the CodeCommit repository you just cloned.
  6. From your Terminal, type
    git add .
  7. From your Terminal, type:
    git commit -am "add new files" && git push
  8. Go back to your pipeline in CodePipeline and see the changes successfully flow through the pipeline.

Configure and Test Alexa Skill

At this time, you can’t just click a “Launch Stack” button to deploy an Alexa skill. Separately, you need to configure the Alexa skill to define the intent schema, sample utterances and, most relevant, the Lambda function ARN that was deployed as part of the CodePipeline pipeline. To configure and test your Alexa skill, follow the steps defined below.

  1. Once your pipeline has successfully completed, go to https://developer.amazon.com/alexa and click the Sign In link
  2. Use your Amazon credentials to login to the Amazon Developer portal
  3. Select Alexa
  4. Under Alexa Skills Kit select Get Started
  5. Click Add a New Skill
  6. Enter a Name and Invocation Name and Choose Save
  7. Click Next
  8. In the Intent Schema text area, enter the contents from IntentSchema.json.
  9. In the Sample Utterances text area, enter the contents from SampleUtterances_en_US.txt.
  10. Click Next
  11. Choose the AWS Lambda ARN (Amazon Resource Name) radio button in the Service Endpoint Type section.
  12. Choose the North America checkbox
  13. Go to the Lambda console and choose the radio button next to the function that the CodePipeline pipeline generated. Then, choose the Actions button and select the Show ARN item and copy the contents that are displayed to your clipboard.
  14. Go back to the Amazon Developer Portal and paste your clipboard contents to the North America text box.
  15. Click Next
  16. In the Service Simulator section, enter “tell me a space fact” in the Enter Utterance text box and click Ask (the name of your skill). You should see a valid response in the Lambda Response text area. Go to SampleUtterances_en_US.txt for some other examples to simulate.

Alternatively, you can use the Echosim service or an actual Amazon Echo device to test your Alexa skill.

Deployment Pipeline

There are three stages and four actions that compose the pipeline that orchestrates the deployment of the Lambda function used by the Amazon Alexa service.

  • Source – In the single Source action, it uses the CodeCommit source action type to store all the code assets for the Alexa skill, infrastructure, and deployment pipeline
  • Build – In the single PackageExport action, it uses the CodeBuild build action type to package and store the Lambda function and associated files
  • Deploy
    • GenerateChangeSet – Uses the CloudFormation deploy action type to generate a change set for a CloudFormation template that defines the Lambda function
    • ExecuteChangeSet – Uses the CloudFormation deploy action type to execute the change set, deploying the Lambda function

Figure 4 annotates the stages and actions of this deployment pipeline.


Figure 4 – Annotated Deployment Pipeline for Solution

DevOps Essentials on AWS Complete Video Course

This and many more topics are covered in the DevOps Essentials on AWS Complete Video Course (release date: August 2017). In it, you’ll learn how to automate infrastructure and deployment pipelines using AWS services and tools. If you’re a software or DevOps-focused engineer or architect interested in learning how to use AWS Developer Tools to create a full-lifecycle software delivery solution, it’s the course for you. The focus of the course is on deployment pipeline architectures and their implementations.

Additional Resources

You can also provide voice-enabled applications using Amazon Lex, Amazon Polly, and other AWS services – only without the “wake word” functionality.

Here are some of the supporting resources discussed in this post:

Did you find this post interesting? Are you passionate about working with the latest AWS technologies? If so, Stelligent is hiring and we would love to hear from you!

DevOps on AWS Radio: mu – DevOps on AWS tool (Episode 10)

In this episode, Paul Duvall and Brian Jakovich cover recent DevOps on AWS news and speak with Casey Lee from Stelligent about the open-source, full-stack DevOps on AWS tool called mu.

Here are the show notes:

DevOps on AWS News

Episode Topics

  1. What is mu and what problem does it solve? What are its benefits?
  2. How does someone use mu (including prereqs)?
  3. What types of programming languages and platforms are supported?
  4. What types of AWS architectures does mu support (i.e. traditional EC2, ECS, Serverless, etc.)?
  5. Which AWS services are provisioned by mu?
  6. Does mu support non-AWS implementations?
  7. What does mu install on my AWS account?
  8. Describe mu’s support for configuration/secrets
  9. Extensibility?
  10. Price?
  11. What’s next on the mu roadmap?
  12. How can listeners learn more about mu?

Additional Resources

About DevOps on AWS Radio

On DevOps on AWS Radio, we cover topics around applying DevOps principles and practices such as Continuous Delivery on the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners into our roundtables and speak with engineers who’ve recently published on our blog and we’ll also be reaching out to the wider DevOps on AWS community to get their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, Containers, Serverless, etc.)
    • Storage (e.g. S3, EBS, etc.)
    • Database and Data (RDS, DynamoDB, etc.)
  • Organizational and Team Structures and Practices
  • Team and Organization Communication and Collaboration
  • Cultural Indicators
  • Version control systems and processes
  • Deployment Pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service Architectures – e.g. Microservices
  • Automation of Build and deployment processes
  • Automation of testing and other verification approaches, tools and systems
  • Automation of security practices and approaches
  • Continuous Feedback systems
  • Many other Topics…

Introduction to NixOS

NixOS, and declarative immutable systems in general, are a great fit for CI/CD pipelines. With the entire system in code, ensuring and auditing reproducible environments becomes easy. Applications can also be “nixified,” so both system and application are fully declarative and in version control. The NixOS system is mounted read-only, which makes for a good fit in immutable autoscaling groups. Instance userdata may contain a valid NixOS configuration, which is applied on boot to implement any necessary post-bake changes.

“NixOS. The Purely Functional Linux Distribution. NixOS is a Linux distribution with a unique approach to package and configuration management. Built on top of the Nix package manager, it is completely declarative, makes upgrading systems reliable, and has many other advantages.”
https://nixos.org/

There are four main parts to Nix and NixOS (Expressions, OS, Modules, and Tests). We will examine each one:

The Nix expression language

This is the “package manager” functionality of NixOS, which can be downloaded from here. Each open source project, including the Linux OS, has a “nix expression” that describes how it is to be built. This includes explicitly declaring each dependency. Each dependency is also declared with a nix expression, so the entire system is declarative. All Nix packages exist in the Nixpkgs GitHub. Here is the one for Electric Sheep, a distributed screen saver for evolving artificial organisms:

The underlying mechanism of dependency and package management is a system of symbolic links. Packages are built and deployed in an immutable “nix store”. This read-only location exists at /nix/store. In this location, each package name has a hash added, which is computed by evaluating all build input dependencies. Therefore, we can have the same package available many times, each with a unique hash and a unique version of its dependencies. Symbolic links from /run/current-system/sw/bin/ to the hashed package name in the nix store determine which package is called, as /run/current-system/sw/bin/ would be in the user’s $PATH.

Should any change to a packaging expression happen, all packages depending on it would rebuild. If a mistake is found, it is easy to revert the change to the dependency, and rebuild. Ensuring the reproducibility of system packages is a huge win, and this solution to “dependency hell” works very well in practice. The Nix package manager can be run on any Linux distribution, such as Fedora or Ubuntu, and also works on Darwin/OSX as well. Custom code can also be packaged with nix expressions, and so both the app and the OS are fully declarative and reproducible.

The NixOS Linux distribution

Nix packaging of open source software, including Linux kernel and boot processes, make up the NixOS Linux distribution. Official releases are available online. NixOS Channels are the method for specifying which version of NixOS is to be installed. NixOS is controlled by the /etc/nixos/configuration.nix file, which declaratively defines the NixOS environment, including defining which NixOS channel is to be used on the system. Whenever configuration.nix is updated, a nixos-rebuild switch can be executed, which switches to the new configuration immediately, as well as adding a new “generation” to Grub/EFI, so the new version can be booted. Here is an example configuration.nix:

As of NixOS 16.03, AWS EC2 Instance Metadata support is built in. However, instead of the usual Cloudinit directives, the NixOS instance expects the Userdata to be a valid configuration.nix. Upon boot, the system will “switch” to what is defined in the provided configuration.nix via EC2 Userdata. This allows for immutable declarative instances in AWS AutoScaling Groups.

NixOS Configuration Management Modules

NixOS modules define how services, via systemd, are to be configured and run. Modules are written so that parameters can be set which correspond to how the systemd process is to be run.

This is the Buildbot NixOS Module, which writes out buildbot configuration, based on module parameters, and then ensures the service is running:

NixOS Tests

NixOS tests are a mechanism to ensure NixOS expressions and modules are working as expected. Virtual machines are spun up to perform the declared tests. Currently the VMs spun up for testing use the QEMU hypervisor, although NixOS tests are moving to libvirt, ideally supporting autodetection of available system virtualization technologies. As an example, here is the buildbot continuous integration server test:

After downloading and building all dependencies, the test will perform a build that starts a QEMU/KVM virtual machine containing the nix system. It is also possible to bring up this test system interactively to facilitate debugging. The virtual machine mounts the Nix store of the host, which makes VM creation very fast, as no disk image needs to be created. These tests can then be implemented in a continuous integration environment such as Buildbot or Hydra.

In addition to declaratively expressing and testing system packages, applications can be nixified in the same way. Application tests can then be written in the same manner, so both system and application can go through a continuous integration pipeline. In a Docker microservices environment, where applications are defined as immutable containers, NixOS is the perfect host node OS, running the Docker daemon and Nomad, Kubernetes, etc.

NixOS is in fast active development. Many users are also NixOS contributors, so most nix packaging of open source projects stay up-to-date. Unstable and release channels are available. Installation is very well documented online. The ability to easily “switch” between configuration versions, or “generations,” which include their own grub/efi boot entry, makes for a great workstation distro.  The declarative reproducibility of a long term stable release, with cloudinit userdata support, makes for a great server distribution.

Thanks for reading,
@hackoflamb

WordPress, Mu, and You

Today, we’re going to give you the “Hello, world” of web stacks and explain how you could run WordPress with mu. This is the fifth post in Stelligent’s blog series on mu, where we’ll lay out a path for using mu to run WordPress and try to hit a target that’s often hard to find in that part of our world: managing a WordPress stack with infrastructure as code, utilizing a continuous delivery pipeline. To make this easier to follow, we’re going to keep the feature list short and simple. Look for future posts in the Stelligent blog where we’ll talk about adding more functionality.

Why would I want to run WordPress with mu?

mu will take care of a lot of nice things for you. It creates a CodePipeline stack that will manage your deployment workflow, and you’ll have test and production environments out of the box. You can inspect changes in your test instance before you let them continue through the pipeline.  It will manage your databases in RDS, so your backend data layer gets its own high availability and scaling that’s independent of your other resources; your test and production data are also kept separate. Each environment will run in Amazon’s EC2 Container Service, behind a load balancer, so the pipeline can roll out new containers without bringing the old ones down. When it’s all done, you can even export CloudFormation templates for your infrastructure and manage them on your own — you’re never locked into using mu!

We’re going to deploy a Docker container from the official WordPress image.  How do we get there? mu will manage a CodePipeline stack that orchestrates everything: it creates a custom image with your web data and the official image, then uses ECS to deploy a test instance for you to inspect, and then — if you approve it — rolls it out to production. Because we’re using mu, all of this will be done through code, and we’ll use GitHub to make that code available to CodePipeline.

Stelligent created mu to simplify using microservices on AWS. It gives you a simple, flexible front-end to some really sophisticated services that can seem bewildering until you’ve gained a lot of experience with them. All those advanced products from AWS are themselves hiding the complexity of running very robust services across data centers around the world. There is vast machinery hidden behind this relatively simple command-line interface. You benefit from all of this with a really simple process for deploying your WordPress site, and you get to do it with tools native to most developers: a code editor and a command line.

Getting Started

Let’s get down to the nitty gritty. To get started, fork our repository into your own GitHub account then clone it to your workstation. Click the “Fork” button in the top right corner of the page at stelligent/mu-wordpress:

Credit to GitHub.

Next, clone the new repo from your account to your local workstation:

git clone _your_fork_of_mu-wordpress_
cd mu-wordpress

In this directory, you’ll find a few files. Edit mu’s main config file, mu.yml, changing pipeline.source.repo to point to your own GitHub account instead of “stelligent”:

pipeline:
  source:
    provider: GitHub
    repo: _your_github_username_/mu-wordpress

Commit your changes and push them back up to your GitHub account:

git add mu.yml
git commit -m 'first config' && git push

You’re ready to start up your pipeline:

mu pipeline up

mu will ask you for a GitHub token. CodePipeline uses it to watch your repo for changes so that it can automatically deploy them. Create a new token in your own GitHub account and grant it the “repo” and “admin:repo_hook” scopes. Save it somewhere, like a nice password manager, and then provide it at mu’s prompt.

Now you can watch your pipeline get deployed:

mu pipeline logs -f

Give it a little time – it will probably take about 10 minutes for the pipeline to deploy all the resources. When it’s done, you’ll have your first environment, “test”:

mu env list

You should see a table like this, but it won’t say CREATE_COMPLETE under each environment until they’re done.

+-------------+-----------------------+---------------------+-----+
| ENVIRONMENT |         STACK         |       STATUS        | ... |
+-------------+-----------------------+---------------------+-----+
| test        | mu-cluster-test       | CREATE_COMPLETE     | ... |
+-------------+-----------------------+---------------------+-----+

On to WordPress!

Now that you have a test environment, you can initialize WordPress and make sure it’s working. Start by inspecting “test”:

mu env show test

You’ll see a block at the top that includes “Base URL”:

Environment:    test


Cluster Stack:  mu-cluster-test (UPDATE_IN_PROGRESS)
VPC Stack:      mu-vpc-test (UPDATE_COMPLETE)
Bastion Host:   1.2.3.4
Base URL:       http://mu-cl-some-long-uuid.us-east-1.elb.amazonaws.com

Append “/wp-admin” to that and load the URL in your browser:

http://mu-cl-some-long-uuid.us-east-1.elb.amazonaws.com/wp-admin

Follow the instructions there to set up a WordPress admin user and initialize the database.

First WordPress admin initialization page

Deploy to “production”

If it looks to you like it’s working, load up your CodePipeline console. You can find the URL for it by asking mu for information on your service:

mu service show | head

The first line of output will give you the URL for your pipeline stack:

Pipeline URL:   https://console.aws.amazon.com/codepipeline/home?region=...

Load that page in your web browser and find the “Production” stage. The first action there is “Approve”, and you’ll see a button labeled “Review”. Click it, enter a comment in the review dialog – any text will do – and click the “Approve” button:

Approve or reject a revision in CodePipeline

CodePipeline will deploy a new copy of the container it built into your “prod” environment. When it’s done, you can do the same thing you did with “test” to initialize WordPress and inspect your new web site. When you’re ready, try these commands to watch it go and get the site URL:

mu pipeline logs -f
mu env show prod | head

Behind the scenes

If you’ve made it this far, I’ll bet you’re wondering what mu is actually doing for you. Let’s take a look. mu brings together a handful of Amazon products. How do they all fit together?

  • GitHub stores your infrastructural code.
  • mu acts as your front-end to AWS by generating and applying CloudFormation templates, orchestrated by CodePipeline.
  • CodeBuild combines your content with the official WordPress Docker container, storing the new image in Amazon ECR, then deploying it through ECS.
  • An ECS cluster is run for each environment we define: in this case, “test” and “prod”.
  • An AWS ALB sits in front of each cluster.
  • Your WordPress database will be provided by an Amazon RDS cluster, one for each environment. Each runs Aurora, Amazon’s highly optimized clone of MySQL.

 

Architectural diagram of mu-wordpress

Let’s talk about how your information flows through those pieces. AWS CodePipeline orchestrates all the other services in AWS. It’s used to give you a continuous delivery pipeline:

AWS CodePipeline pipeline created by mu

  1. It watches your GitHub repo for changes and automatically applies them shortly after you push.
  2. AWS CodeBuild uses buildspec.yml to run any custom steps you add there.
  3. AWS CodeBuild generates your own Docker image by combining the results of the last step with the official WordPress image and storing it in Amazon ECR.
  4. Your container is deployed to your “test” environment.
  5. You manually inspect your container and approve or reject it.
  6. If you approve it, your container is deployed to your “prod” environment.


Wrapping it up

If you want to customize your WordPress site, add content to the html directory of your repo. Anything there will be installed with WordPress as the container is built, so you have a simple way to add content to your site’s wp-content directory by adding new plugins or themes to your own codebase. Take a look at your mu.yml file, too – it includes comments that explain the settings used and suggest some things you might want to change, like the instance size deployed to each environment.

We’ve shown a fast and simple way to get WordPress running at AWS. There’s a lot of tech running behind the scenes, but mu makes it easy to get started and navigate that complexity. There’s a lot more we could do: optimizing WordPress and making your stack robust are extensive topics that deserve their own articles, and much has been written about them elsewhere. Look for future posts on the Stelligent blog as we improve our mu-wordpress code, too.

Additional Resources

Are you passionate about working with the latest AWS technologies? Are you interested in helping build tools to help automate deployments? Do you want to engage in spirited debate about how to pronounce Greek letters? Stelligent is hiring!

Docker lifecycle automation and testing with Ruby in AWS

My friend and colleague Stephen Goncher and I recently got to spend some real time implementing a continuous integration and continuous delivery pipeline using only Ruby. We succeeded in developing a new module in our pipeline gem that handles many of the Docker Engine’s needs without skimping on testing and code quality. By using the swipely/docker-api gem, we were able to write well-tested, DRY pipeline code that future users of our environment can leverage with confidence.

Our environment included Amazon Web Services’ Elastic Container Registry (ECR), which proved to be more challenging to implement than we originally anticipated. The purpose of this post is to help others implement some basic Docker functionality in their pipelines more quickly than we did. In addition, we will showcase some of the techniques we used to test our Docker images.

Quick look at the SDK

It’s important to make the connection in your mind now that each interface in the docker gem has a corresponding API call in the Docker Engine. With that said, it would be wise to take a quick stroll through the documentation and API reference before writing any code. There are a few methods, such as Docker.authenticate!, that require some advanced configuration that is only vaguely documented, and you’ll need to combine all of the sources to piece them together.

For those of you who are example-driven learners, be sure to check out the example project on GitHub that we put together to demonstrate these concepts.

Authenticating with ECR

We’re going to save you the trouble of fumbling through the various documentation by providing an example of authenticating with an Amazon ECR repository. The example below assumes you have already created a repository in AWS. You’ll also need an instance role attached to the machine you’re executing this snippet from, or your API key and secret configured.

Snippet 1. Using ruby to authenticate with Amazon ECR

require 'aws-sdk-core'
require 'base64'
require 'docker'

# AWS SDK ECR Client
ecr_client = Aws::ECR::Client.new

# Your AWS Account ID
aws_account_id = '1234567890'

# Grab your authentication token from AWS ECR
token = ecr_client.get_authorization_token(
  registry_ids: [aws_account_id]
).authorization_data.first

# Remove the https:// to authenticate
ecr_repo_url = token.proxy_endpoint.gsub('https://', '')

# Authorization token is given as username:password, split it out
user_pass_token = Base64.decode64(token.authorization_token).split(':')

# Call the authenticate method with the options
Docker.authenticate!('username' => user_pass_token.first,
                     'password' => user_pass_token.last,
                     'email' => 'none',
                     'serveraddress' => ecr_repo_url)

Pro Tip #1: The docker-api gem stores the authentication credentials in memory at runtime (see Docker.creds). If you’re using something like a Jenkins CI server to execute your pipeline in separate stages, you’ll need to re-authenticate at each step. Here’s an example of how the sample project accomplishes this.
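As a rough, separate sketch of that idea – the helper method name below is ours, not part of the gem or the sample project – you could wrap the re-authentication in a small method and call it at the top of every stage:

require 'aws-sdk-core'
require 'base64'
require 'docker'

# Hypothetical helper: repeats the flow from Snippet 1 so each pipeline
# stage can re-authenticate before talking to ECR.
def authenticate_with_ecr!(aws_account_id)
  ecr = Aws::ECR::Client.new
  token = ecr.get_authorization_token(registry_ids: [aws_account_id])
             .authorization_data.first

  username, password = Base64.decode64(token.authorization_token).split(':')

  Docker.authenticate!('username' => username,
                       'password' => password,
                       'email' => 'none',
                       'serveraddress' => token.proxy_endpoint.gsub('https://', ''))
end

# Call this at the start of any stage that pulls from or pushes to ECR.
authenticate_with_ecr!('1234567890')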

Snippet 2. Using ruby to logout

Docker.creds = nil

Pro Tip #2: You’ll need to log out or deauthenticate from ECR in order to pull images from the public/default docker.io repository.
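For example, here’s a minimal sketch of pulling a public image after clearing the ECR credentials – the image name is just an example:

require 'docker'

# Drop the in-memory ECR credentials so the next pull hits docker.io.
Docker.creds = nil

# Pull an image from the public/default registry.
Docker::Image.create('fromImage' => 'alpine:latest')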

Build, tag and push

The basic functions of the docker-api gem are pretty straightforward to use with a vanilla configuration. When you tie in a remote repository such as Amazon ECR, there can be some gotchas. Here are some more examples of the various stages of a Docker image’s lifecycle that you’ll encounter in your pipeline. Now that you’re authenticated, let’s get to doing some real work!

The following snippets assume you’re authenticated already.

Snippet 3. The complete lifecycle of a basic Docker image

# Build our Docker image with a custom context
image = Docker::Image.build_from_dir(
  '/path/to/project',
  { 'dockerfile' => 'ubuntu/Dockerfile' }
)

# Tag our image with the complete endpoint and repo name
image.tag(repo: 'example.ecr.amazonaws.com/stelligent-example',
          tag: 'latest')

# Push only our tag to ECR
image.push(nil, tag: 'latest')

Integration Tests for your Docker Images

Here at Stelligent, we know that the key to software quality is writing tests. It’s part of our core DNA. So it’s no surprise we have a method for writing integration tests for our Docker images. The solution uses Serverspec to launch the intermediate container, execute the tests, and compile the results, while the docker-api gem we’ve been learning builds the image and provides the image id to the test context.

Snippet 5. Writing a serverspec test for a Docker Image

require 'serverspec'

describe 'Dockerfile' do
  before(:all) do
    set :os, family: :debian
    set :backend, :docker
    set :docker_image, '123456789' # image id
  end

  describe file('/usr/local/apache2/htdocs/index.html') do
    it { should exist }
    it { should be_file }
    it { should be_mode 644 }
    it { should contain('Automation for the People') }
  end

  describe port(80) do
    it { should be_listening }
  end
end
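The spec above hard-codes an image id for clarity. In practice, you can let the docker-api gem build the image and hand its id to Serverspec – here’s a minimal sketch of that glue (for example, in a spec_helper); the build path is a placeholder:

require 'docker'
require 'serverspec'

# Build the image under test with docker-api and pass its id to Serverspec
# instead of hard-coding an id as in Snippet 5.
image = Docker::Image.build_from_dir('/path/to/project')

set :os, family: :debian
set :backend, :docker
set :docker_image, image.id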

Snippet 6. Executing your test

$ rspec spec/integration/docker/stelligent-example_spec.rb

You’re Done!

Using a tool like the swipely/docker-api gem to drive your automation scripts is a huge step forward in providing fast, reliable feedback in your Docker pipelines compared to writing bash. By doing so, you’re able to write unit and integration tests for your pipeline code to ensure both your infrastructure and your application are well tested. Not only can you unit test your docker-api implementation, but you can also leverage the AWS SDK’s ability to stub responses and take your testing a step further when working with Amazon ECR.
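For instance, here’s a minimal sketch of stubbing the ECR client in a unit test so that no real AWS calls are made – the token and endpoint values are fake:

require 'aws-sdk-core'
require 'base64'

# A stubbed client returns canned data instead of calling AWS.
ecr_client = Aws::ECR::Client.new(stub_responses: true, region: 'us-east-1')

# The stub mirrors the shape of a real get_authorization_token response.
ecr_client.stub_responses(:get_authorization_token, {
  authorization_data: [{
    authorization_token: Base64.strict_encode64('AWS:fake-password'),
    proxy_endpoint: 'https://123456789012.dkr.ecr.us-east-1.amazonaws.com'
  }]
})

token = ecr_client.get_authorization_token(registry_ids: ['123456789012'])
                  .authorization_data.first
username, _password = Base64.decode64(token.authorization_token).split(':')
puts username # => "AWS"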

See it in Action

We’ve put together a short (approx. 5 minute) demo of using these tools. Check it out on GitHub and take a test drive through the lifecycle of Docker within AWS.


Working with cool tools like Docker and its open source SDKs is only part of the exciting work we do here at Stelligent. To take your pipeline a step further from here, you should check out mu — a microservices platform that will deploy your newly tested docker containers. You can take that epic experience a step further and become a Stelligentsia because we are hiring innovative and passionate engineers like you!

Using Parameter Store with AWS CodePipeline

Systems Manager Parameter Store is a managed service (part of AWS EC2 Systems Manager (SSM)) that provides a convenient way to efficiently and securely get and set commonly used configuration data across multiple resources in your software delivery lifecycle.


In this post, we will be focusing on the basic usage of Parameter Store and how to effectively use it as part of a continuous delivery pipeline using AWS CodePipeline. The following describes some of the capabilities of Parameter Store and the resources with which they can be used:

  • Managed Service: Parameter Store is managed by AWS. This means that you won’t have to put in the engineering work to set up something like Vault, Zookeeper, etc. just to store the configuration that your application/service needs.
  • Access Controls: Through the use of AWS Identity and Access Management (IAM), access to Parameter Store can be limited by enabling or restricting access to the service itself, or by enabling or restricting access to particular parameters.
  • Encryption: Parameter Store also gives you the ability to encrypt parameters using the AWS Key Management Service (KMS). When creating a parameter, you can specify that it be encrypted with a KMS key.
  • Audit: All calls to Parameter Store are tracked and recorded in AWS CloudTrail so they can be audited.

At the end of this post, you will be able to launch an example solution via AWS CloudFormation.

Working with Parameter Store

Prerequisites

In order to follow the examples below, you’ll need to have the AWS CLI set up on your local workstation. You can find a guide to install the AWS CLI here.

Creating a Parameter in the Parameter Store

To manually create a parameter in the Parameter Store, there are a few easy steps to follow:

  1. Sign into your AWS account and go to the EC2 console.
  2. Under the Systems Manager Shared Resources section, click on Parameter Store.
  3. Click Get Started Now or Create Parameter and input the following information:
    1. Name: The name that you want the parameter to be called
    2. Description (optional): A description of what the parameter does or contains
    3. Type: You can choose either a String, String List, or Secure String
  4. Click Create Parameter and it will bring you to the Parameter Store console where you can see your newly created parameter.

To create a parameter using the AWS CLI, here are examples of creating a String, SecureString, and String List:

String:

 aws ssm put-parameter --name "HostedZoneName" --type "String" --value "stelligent.com."

StringList:

aws ssm put-parameter --name "HostedZoneNames" --type "StringList" --value "stelligent.com.,google.com.,amazon.com."

SecureString:

 aws ssm put-parameter --name "Password" --type "SecureString" --value "Password123$"
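If you’d rather use the Ruby SDK (which the pipeline below uses to read these values back), a rough equivalent of the three commands above might look like this – the region is an assumption:

require 'aws-sdk'

ssm = Aws::SSM::Client.new(region: 'us-east-1')

# String
ssm.put_parameter(name: 'HostedZoneName', type: 'String',
                  value: 'stelligent.com.')

# StringList (comma-separated values)
ssm.put_parameter(name: 'HostedZoneNames', type: 'StringList',
                  value: 'stelligent.com.,google.com.,amazon.com.')

# SecureString, encrypted with your account's default SSM key;
# pass key_id: to use a specific KMS key instead.
ssm.put_parameter(name: 'Password', type: 'SecureString',
                  value: 'Password123$')

# Note: to update an existing parameter, add overwrite: true to the call.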

After creating these parameters, your Parameter Store console will list all three of them.

Getting Parameter Values using the AWS CLI

To get a String, StringList, or SecureString parameter from the Parameter Store using the AWS CLI, use the following syntax in your terminal:

String:

aws ssm get-parameters --names "HostedZoneName"

The output in your terminal would look like this:

{
    "InvalidParameters": [], 
    "Parameters": [
        {
            "Type": "String", 
            "Name": "HostedZoneName", 
            "Value": "stelligent.com."
        }
    ]
}

StringList:

 aws ssm get-parameters --names "HostedZoneNames"

The output in your terminal would look like this:

{
    "InvalidParameters": [], 
    "Parameters": [
        {
            "Type": "StringList", 
            "Name": "HostedZoneNames", 
            "Value": "stelligent.com.,google.com.,amazon.com."
        }
    ]
}

SecureString:

aws ssm get-parameters --names "Password"

The output in your terminal would look like this (the value of the parameter is encrypted):

{
    "InvalidParameters": [], 
    "Parameters": [
        {
            "Type": "SecureString", 
            "Name": "Password", 
            "Value": "AQECAHicQXIA+CERB7LyH8+YXXUK1vqiI87oM0Wq7kgMCmGqUQAAAGowaAYJKoZIhvcNAQcGoFswWQIBADBUBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDE0kvmQLY6Ertt5BGwIBEIAnlfTl1XxzRwUzkFCBYn8P0lJ6dOdjPNQNbYgjD1+KTk/SlNJznvrF"
        }
    ]
}


Deleting a Parameter from the Parameter Store

To delete a parameter from the Parameter Store manually, you must use the following steps:

  1. Sign into your AWS account and go to the EC2 console.
  2. Under the Systems Manager Shared Resources section click on the Parameter Store tab.
  3. Select the parameter that you wish to delete
  4. Click the Actions button and select Delete Parameter from the menu

To delete a parameter using the AWS CLI, use the following syntax in your terminal (this works for String, StringList, and SecureString):

aws ssm delete-parameter --name "HostedZoneName"
aws ssm delete-parameter --name "HostedZoneNames"
aws ssm delete-parameter --name "Password"
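The Ruby SDK offers the same operation; here’s a minimal sketch of deleting all three example parameters (the region is an assumption):

require 'aws-sdk'

ssm = Aws::SSM::Client.new(region: 'us-east-1')

# delete_parameter works the same way for String, StringList, and SecureString.
%w(HostedZoneName HostedZoneNames Password).each do |name|
  ssm.delete_parameter(name: name)
end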

Using Parameter Store in AWS CodePipeline

Parameter Store can be very useful when constructing and running a deployment pipeline. It can be used alongside a simple token-replacement script to dynamically generate configuration files without having to modify those files manually, which lets you pass frequently used configuration data through a continuous delivery process easily and efficiently. An illustration of the AWS infrastructure architecture is shown below.

AWS infrastructure architecture for this example pipeline

In this example, we have a deployment pipeline modeled via AWS CodePipeline that consists of two stages: a Source stage and a Build stage.

First, let’s take a look at the Source stage.

MyCodePipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      ArtifactStore:
        Location: !Ref S3Bucket
        Type: S3
      RoleArn: !GetAtt [CodePipelineRole, Arn]
      Stages:
        - Name: Source
          Actions:
          - Name: GitHubSource
            ActionTypeId:
              Category: Source
              Owner: ThirdParty
              Provider: GitHub
              Version: 1
            OutputArtifacts:
              - Name: OutputArtifact
            Configuration:
              Owner: stelligent
              Repo: parameter-store-example
              Branch: master
              OAuthToken: !Ref GitHubToken

As part of the Source stage, the pipeline gets its source assets from the GitHub repository, which contains the configuration file to be modified along with a Ruby script that fetches the parameters from the Parameter Store and replaces the variable tokens inside the configuration file.

After the Source stage completes, there’s a Build stage, where we’ll be doing all of the actual work to modify our configuration file.

The Build stage uses the CodeBuild project (defined as the ConfigFileBuild action) to run the Ruby script that will modify the configuration file and replace the variable tokens inside of it with the requested parameters from the Parameter Store.

  ConfigFileBuild:
    Type: AWS::CodeBuild::Project
    Properties:
      Name: !Ref AWS::StackName
      Description: Changes sample configuration file
      ServiceRole: !GetAtt CodeBuildRole.Arn
      Artifacts:
        Type: CODEPIPELINE
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_LARGE
        Image: aws/codebuild/eb-ruby-2.3-amazonlinux-64:2.1.6
      Source:
        Type: CODEPIPELINE
        BuildSpec: |
          version: 0.1
          phases:
            pre_build:
              commands:
                - gem install aws-sdk
            build:
              commands:
                - ruby sample_ruby_ssm.rb
          artifacts:
            files:
              - '**/*'

The CodeBuild project contains the buildspec that actually runs the Ruby script that makes the configuration changes (sample_ruby_ssm.rb).


Here is what the Ruby script looks like:

require 'aws-sdk'

client = Aws::SSM::Client.new(region: 'us-east-1')
resp = client.get_parameters({
  names: ["HostedZoneName", "Password"], # required
  with_decryption: true,
})

hostedzonename = resp.parameters[0].value
password = resp.parameters[1].value

file_names = ['sample_ssm_config.json']

file_names.each do |file_name|
  text = File.read(file_name)

  # Display text for usability
  puts text

  # Substitute Variables
  new_contents = text.gsub(/HOSTEDZONE/, hostedzonename)
  new_contents = new_contents.gsub(/PASSWORD/, password)

  # To write changes to the file, use:
  File.open(file_name, "w") {|file| file.puts new_contents.to_s }
end

Here is what the configuration file with the variable tokens (HOSTEDZONE, PASSWORD) looks like before it gets modified:

{
  "Parameters" : {
    "HostedZoneName" : "HOSTEDZONE",
    "Password" : "PASSWORD"
  }
}

Here is what the configuration file looks like after the Ruby script pulls the requested parameters from the Parameter Store and replaces the variable tokens (HOSTEDZONE, PASSWORD). The Password parameter is decrypted by the Ruby script in this process.

{
  "Parameters" : {
    "HostedZoneName" : "stelligent.com.",
    "Password" : "Password123$"
  }
}

IMPORTANT NOTE: In the example above, the “Password” parameter is returned in plain text (Password123$). This happens because the Ruby script requests the secure string parameter with its decrypted value (with_decryption: true). The example is shown this way purely to illustrate what returning multiple parameters into a configuration file looks like. In a real-world situation you would never want a password written out in plain text, because that presents security issues and is bad practice in general. To return the “Password” parameter in its encrypted form, simply change “with_decryption: true” to “with_decryption: false” on line 6 of the Ruby script. Here is what the modified configuration file would look like with the “Password” parameter returned in its encrypted form:

The configuration file with the Password parameter in its encrypted form
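For reference, here’s a minimal sketch of that one-line change – it’s the same get_parameters call from the script above with the flag flipped:

require 'aws-sdk'

client = Aws::SSM::Client.new(region: 'us-east-1')

# Identical to the call in sample_ruby_ssm.rb, except the SecureString
# value comes back in its encrypted form.
resp = client.get_parameters({
  names: ["HostedZoneName", "Password"],
  with_decryption: false,
})

puts resp.parameters[1].value # prints the KMS-encrypted ciphertext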

Launch the Solution via CloudFormation

To run this deployment pipeline and see Parameter Store in action, click the “Launch Stack” button below, which will take you directly to the CloudFormation console within your AWS account and load the CloudFormation template. Walk through the CloudFormation wizard to launch the stack.

In order to be able to execute this pipeline you must have the following:

  • The AWS CLI already installed on your local workstation. You can find a guide to install the AWS CLI here
  • A generated GitHub OAuth token for your GitHub user. Instructions on how to generate an OAuth token can be found here
  • In order for the Ruby script that is part of this pipeline process to run, you must create these two parameters in your Parameter Store:
    1. HostedZoneName
    2. Password
aws ssm put-parameter --name "HostedZoneName" --type "String" --value "stelligent.com."
aws ssm put-parameter --name "Password" --type "SecureString" --value "Password123$"

As you begin to launch the pipeline in CloudFormation (Launch Stack button is located below), you will be prompted to enter this one parameter:

  • GitHubToken (your generated GitHub OAuth token)

Once you have passed in this initial parameter, you can begin to launch the pipeline that will make use of the Parameter Store.

NOTE: You will be charged for your CodePipeline and CodeBuild usage.

Once the stack is CREATE_COMPLETE, click on the Outputs tab and then the value for the CodePipelineUrl output to view the pipeline in CodePipeline.


Summary

In this post, you learned how to use the EC2 Systems Manager Parameter Store and some of its features. You learned how to create, delete, get, and set parameters manually as well as through the use of the AWS CLI. You also learned how to use the Parameter Store in a practical situation by incorporating it in the process of setting configuration data that is used as part of a CodePipeline continuous delivery pipeline.

Sample Code

The code for the examples demonstrated in this post is located at https://github.com/stelligent/parameter-store-example. Let us know if you have any comments or questions @stelligent or @TreyMcElhattan.

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “one-button everything” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.