Next-Generation Managed Services are Self-Service

The traditional managed services provider (MSP) model is broken and needs disruption. The next-generation managed services model is about guiding customers in a self-service manner.

The key drivers causing customers to seek cloud providers like Amazon Web Services (AWS) are the agility and cost efficiencies they afford. This agility helps customers be more responsive to their users. At the same time, it can leave customers overwhelmed when determining best practices and patterns for deploying and operating on the cloud. Moreover, as more companies realize that software is a strategic asset for their business, they often don’t want to simply outsource all their IT needs to yet another provider. They want speed and agility, but not by being at the mercy of an IT provider. They want to leverage best practices while gaining the autonomy to obtain these capabilities in a self-service manner.

In this post, I contrast the traditional MSP with a next-generation MSP driven by DevOps and automation. By learning how MSP features can increase agility when delivered through this new model, you should be better able to choose providers that align with your business outcomes.

The Traditional MSP

To begin, let’s have a look at the typical capabilities of an MSP. They include:

  • Access Management – Create user accounts and permissions for infrastructure resources
  • Change Management – Ensure changes are applied in a controlled manner
  • Continuity Management – Prevent loss via disaster recovery techniques such as backups, high availability, and restoration
  • Incident Management – Get support to fix problems
  • Patch Management – Keep infrastructure up to date and compliant
  • Provisioning Management – Provision and configure infrastructure
  • Reporting – Get access to metrics, logs, and recommendations for improvement
  • Security Management – Ensure infrastructure is secure

With a traditional MSP, these types of services are typically provided through an opaque model in which customers are reliant on the MSP to perform remedial actions to fix most problems. This is because the MSP often has the credentials and knowledge to make changes to a largely manually-provisioned infrastructure or a hodgepodge of “automated” scripts that are not provided as a system to customers.

Next-Generation MSP on AWS

A next-gen MSP provides customers capabilities in a self-service manner, enabling them to get up and running quickly with a fully automated infrastructure while benefiting from the expert guidance the MSP provides. This means customers don’t need someone on the MSP’s support team to – say – restart a server or perform a backup, because these services are provided through self-service means. Instead, the reason a customer might engage a next-gen MSP is for its expertise in architecture and automation best practices, which can guide them more quickly to better solutions.

What Does Next-Gen Look Like?

What do each of the capabilities described in the first section look like in a next-generation MSP model? At their core, they’re self-service. Customers might have a team from the MSP get their infrastructure up and running, but there should be nothing preventing the customer from provisioning everything themselves either. Furthermore, there should be a way for customers to get their applications running on the infrastructure using repeatable frameworks as well.

Let’s have a look at the types of capabilities a next-gen MSP on AWS might offer:

  • Access Management – A customer interfaces with an API and/or console provided by the MSP that automates the provisioning of AWS Organizations, AWS Accounts, and IAM users and permissions (a minimal sketch appears after this list). Possible Tools: AWS Organizations, AWS IAM, AWS Service Catalog, and automation through AWS CloudFormation and other tools.
  • Change Management – Customers interface with the API/Console to manage how changes are deployed on their infrastructure. For example, they might want to modify RDS database configuration settings or the AMI the EC2 instances use. Customers can make a request to the MSP to apply or schedule these changes or they can apply the changes themselves using frameworks provided by the MSP. These changes flow through an approval process configured by the customer. Possible Tools: AWS Service Catalog, AWS CloudFormation, AWS CloudWatch Dashboards, Configuration Management Tools, and custom automation.
  • Continuity Management – Customers can schedule disaster recovery processes and scenarios through an API/Console. This includes scheduling data, storage, and source backups. It might also include the ability to schedule disaster recovery drills with experts from the MSP. Moreover, the automation in the DevOps frameworks provided by the MSP should support resilient, highly available solutions that maintain the necessary infrastructure so that users do not experience errors even when parts of the underlying infrastructure fail. Possible Tools: Amazon EC2 Systems Manager, AWS CodeCommit, AWS Shield, Custom Reports, Amazon Glacier, AWS Service Catalog, AWS Auto Scaling, AWS CloudFormation, and Configuration Management Tools might be used, along with tools for automating backups of EBS volumes, RDS database snapshots, etc.
  • Incident Management – Customers can contact MSP support experts at any time of day to help guide them to solutions through various mechanisms including real-time chat, chatbots, online systems, and the phone. However, the MSP should never be required to be present to fix an infrastructure error, because the MSP should provide the customer access to authorized individuals who are capable of making infrastructure changes in a governed manner – if they choose to do so. The MSP can also handle daily activities of investigating and resolving alarms or incidents. Possible Tools: Amazon Connect, AWS Step Functions, Amazon Polly, Amazon Lex, Amazon CloudWatch – Logs, Events, and Monitoring, AWS CloudTrail, New Relic (App & Performance Monitoring), AWS Config (and Config Rules), and AWS EC2 Systems Manager
  • Patch Management – The MSP can manage all customer OS patching activities to help keep infrastructure resources current and secure. This would include applying updates or patches that are released from OS vendors  in a timely and consistent manner to minimize the impact on the customers’ business. Critical security patches are applied as needed, while others are applied based on the patch schedule when customers make the request. The customer can also apply these changes through governance mechanisms provided by the MSP. Possible Tools: AWS EC2 Systems Manager, AWS Service Catalog
  • Provisioning Management – The MSP launches and manages infrastructure stacks via a framework that provisions these stacks as code that builds users, security infrastructure, networks, environments, services, and deployment pipelines. The MSP should provide these same capabilities to customers as well so that they are capable of making these changes with or without the MSP. Possible Tools: AWS CloudFormation, Configuration Management Tools, and custom automation.
  • Reporting – Customers get access to the data used to manage their infrastructure, including Amazon S3 logs, CloudTrail logs, instance logs, and real-time data from the AWS Managed Services APIs. Customers can also get real-time advice through automated systems provided by the MSP. The MSP should also walk customers through metrics, their impact, as well as recommendations to optimize platform usage. Possible Tools: Amazon CloudWatch Dashboards, AWS Trusted Advisor, custom automation, and web portals
  • Security Management – The next-gen MSP protects customers’ information assets and keeps the infrastructure secure by providing anti-malware protection, intrusion detection, and intrusion prevention systems. Possible Tools: Amazon VPC, AWS Parameter Store, AWS WAF, Amazon Inspector, AWS Shield, AWS Config and Config Rules, AWS CloudTrail, and Security Monitoring as a Service
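
To make the Access Management capability more concrete, here is a minimal CloudFormation sketch of the kind of IAM automation a next-gen MSP might expose through its self-service framework. The group, user name, and managed policy shown here are hypothetical; in practice an MSP would typically wrap templates like this in AWS Service Catalog or its own API so customers can provision them on demand.

OperatorsGroup:
  Type: AWS::IAM::Group
  Properties:
    ManagedPolicyArns:
    # AWS-managed read-only policy; a real MSP would scope permissions more tightly
    - arn:aws:iam::aws:policy/ReadOnlyAccess
OperatorUser:
  Type: AWS::IAM::User
  Properties:
    UserName: example-operator    # hypothetical user name
    Groups:
    - Ref: OperatorsGroup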

The overarching goal of the next-generation delivery model is to provide 100% self-service capabilities to customers as part of a shared responsibility model. Alternatively, the customer might choose to have the MSP manage everything for them. In this case, the customer should be able to take over the management of the infrastructure at any time if the MSP is not meeting their needs. Customer-centric MSPs will do this by creating fully automated, continuous, and autonomic services.

Scenario: Deployment Pipeline Management

Here’s an example scenario of how a next-generation MSP might provide a deployment pipeline monitoring and guidance service to customers.

The MSP uses an open-source framework that provisions all the necessary AWS environment, deployment pipeline, and application resources to run a highly-available, secure application on AWS. Each deployment pipeline is configured to send AWS CodePipeline statistics via AWS CloudWatch Events. These events are configured to submit notifications through Amazon SNS and AWS Lambda so that all necessary parties are informed via email and Slack. What’s more, the CodePipeline statistics are aggregated and made available through Amazon CloudWatch Dashboards. All of this is configured through configuration files that are versioned in the customer’s version-control repository and automated via the open-source framework.
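
As a rough sketch of how that event wiring might look, the CloudFormation snippet below creates a CloudWatch Events rule that matches CodePipeline state-change events and publishes them to an SNS topic. The resource names are illustrative, and the topic would also need a policy allowing events.amazonaws.com to publish to it; email, Slack, or Lambda subscribers could then be attached to the topic.

PipelineNotificationsTopic:
  Type: AWS::SNS::Topic
PipelineStateChangeRule:
  Type: AWS::Events::Rule
  Properties:
    Description: Forward CodePipeline state changes to SNS
    EventPattern:
      source:
      - aws.codepipeline
      detail-type:
      - CodePipeline Pipeline Execution State Change
    Targets:
    - Arn:
        Ref: PipelineNotificationsTopic    # Ref on an SNS topic returns its ARN
      Id: PipelineNotificationsTarget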

Once the MSP DevOps Engineers receive failure alerts through Slack, email, or the Dashboard, they help guide the customers’ engineers in resolving errors in AWS CodePipeline and/or its integrations with other tools like AWS CodeBuild, AWS CodeDeploy, AWS CloudFormation, static analysis, or tests. The expertise provided in this “stop the line” model helps quickly resolve issues as they arise, making them less costly to fix and increasing high-velocity feedback between the customer and its users. You might see some MSPs provide real-time expertise through automated conversational bots enabled through services like Amazon Lex. There’s a lot of space for innovation in providing these services to companies.

What’s Next?

Going forward, we expect customers to demand more self-service capabilities from their providers. Providers will enable these self-service capabilities through systematic automation and a focus on the user experience in how these features are provided to IT consumers.

Additional Resources

Application Auto Scaling with Amazon ECS

In this blog post, you’ll see an example of Application Auto Scaling for the Amazon ECS (EC2 Container Service). Automatic scaling of the container instances in your ECS cluster has been a feature for quite some time, but until recently you were not able to scale the tasks in your ECS service with built-in technology from AWS. In May of 2016, Automatic Scaling with Amazon ECS was announced which allowed us to configure elasticity into our deployed container services in Amazon’s cloud.

Developer Note: Skip to the “CloudFormation Examples” section to skip right to the code!

Why should you auto scale your container services?

Efficient and effective scaling of your microservices is the main reason to automatically scale your containers. If your primary goals include fault tolerance or elastic workloads, then a combination of cloud technology for auto scaling and infrastructure as code is the key to success. With AWS Application Auto Scaling, you can quickly configure elasticity into your architecture in a repeatable and testable way.

Introducing CloudFormation Support

For the first few months, this new feature was not available in AWS CloudFormation. Configuration was either a manual process in the AWS Console or a series of API calls made from the CLI or one of Amazon’s SDKs. Finally, as of August 2016, we can manage this configuration easily using CloudFormation.

The resource types you’re going to need to work with are:

  • AWS::ApplicationAutoScaling::ScalableTarget
  • AWS::ApplicationAutoScaling::ScalingPolicy
  • AWS::IAM::Role
  • AWS::CloudWatch::Alarm

The ScalableTarget and ScalingPolicy are the new resources that configure how your ECS Service behaves when an Alarm is triggered. In addition, you will need to create a new Role to give the Application Auto Scaling service access to describe your CloudWatch Alarms and to modify your ECS Service – such as increasing your Desired Count.

CloudFormation Examples

The below examples were written for AWS CloudFormation in the YAML format. You can plug these snippets directly into your existing templates with minimal adjustments necessary. Enjoy!

Step 1: Implement a Role

These permissions were gathered from the various sources in AWS documentation.

ApplicationAutoScalingRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Statement:
      - Effect: Allow
        Principal:
          Service:
          - application-autoscaling.amazonaws.com
        Action:
        - sts:AssumeRole
     Path: "/"
     Policies:
     - PolicyName: ECSBlogScalingRole
       PolicyDocument:
         Statement:
         - Effect: Allow
           Action:
           - ecs:UpdateService
           - ecs:DescribeServices
           - application-autoscaling:*
           - cloudwatch:DescribeAlarms
           - cloudwatch:GetMetricStatistics
           Resource: "*"

Step 2: Implement some alarms

The below alarm will initiate scaling based on container CPU Utilization.

AutoScalingCPUAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: Containers CPU Utilization High
    MetricName: CPUUtilization
    Namespace: AWS/ECS
    Statistic: Average
    Period: '300'
    EvaluationPeriods: '1'
    Threshold: '80'
    AlarmActions:
    - Ref: AutoScalingPolicy
    Dimensions:
    - Name: ServiceName
      Value:
        Fn::GetAtt:
        - YourECSServiceResource
        - Name
    - Name: ClusterName
      Value:
        Ref: YourECSClusterName
    ComparisonOperator: GreaterThanOrEqualToThreshold

Step 3: Implement the ScalableTarget

This resource configures your Application Scaling to your ECS Service and provides some limitations for its function. Other than your MinCapacity and MaxCapacity, these settings are quite fixed when used with ECS.

AutoScalingTarget:
  Type: AWS::ApplicationAutoScaling::ScalableTarget
  Properties:
    MaxCapacity: 20
    MinCapacity: 1
    ResourceId:
      Fn::Join:
      - "/"
      - - service
        - Ref: YourECSClusterName
        - Fn::GetAtt:
          - YourECSServiceResource
          - Name
    RoleARN:
      Fn::GetAtt:
      - ApplicationAutoScalingRole
      - Arn
    ScalableDimension: ecs:service:DesiredCount
    ServiceNamespace: ecs

Step 4: Implement the ScalingPolicy

This resource configures your exact scaling configuration — when to scale up or down and by how much. Pay close attention to the StepAdjustments in the StepScalingPolicyConfiguration as the documentation on this is very vague.

In the below example, we are scaling up by 2 containers when the alarm is greater than the Metric Threshold and scaling down by 1 container when below the Metric Threshold. Take special note of how MetricIntervalLowerBound and MetricIntervalUpperBound work together. When unspecified, they are effectively infinity for the upper bound and negative infinity for the lower bound. Finally, note that these thresholds are computed based on aggregated metrics — meaning the Average, Minimum or Maximum of your combined fleet of containers.

AutoScalingPolicy:
  Type: AWS::ApplicationAutoScaling::ScalingPolicy
  Properties:
    PolicyName: ECSScalingBlogPolicy
    PolicyType: StepScaling
    ScalingTargetId:
      Ref: AutoScalingTarget
    ScalableDimension: ecs:service:DesiredCount
    ServiceNamespace: ecs
    StepScalingPolicyConfiguration:
      AdjustmentType: ChangeInCapacity
      Cooldown: 60
      MetricAggregationType: Average
      StepAdjustments:
      - MetricIntervalLowerBound: 0
        ScalingAdjustment: 2
      - MetricIntervalUpperBound: 0
        ScalingAdjustment: -1

Wrapping It Up

Amazon Web Services continues to provide excellent resources for automation, elasticity and virtually unlimited scalability. As you can see, with a couple solid examples underfoot you can very quickly build in that on-demand elasticity and inherent fault tolerance. After you have your tasks auto scaled, I recommend you check out the documentation on how to scale your container instances also to provide the same benefits to your ECS cluster itself.
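
As a companion to the task-level scaling above, here is a rough sketch of what scaling the container instances themselves might look like, assuming an existing Auto Scaling group resource named ECSAutoScalingGroup (that name and the thresholds are illustrative, not part of the examples above):

ClusterScaleUpPolicy:
  Type: AWS::AutoScaling::ScalingPolicy
  Properties:
    AdjustmentType: ChangeInCapacity
    AutoScalingGroupName:
      Ref: ECSAutoScalingGroup    # hypothetical Auto Scaling group backing the ECS cluster
    Cooldown: '300'
    ScalingAdjustment: 1
ClusterCPUReservationAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: ECS cluster CPU reservation high
    Namespace: AWS/ECS
    MetricName: CPUReservation
    Dimensions:
    - Name: ClusterName
      Value:
        Ref: YourECSClusterName
    Statistic: Average
    Period: '300'
    EvaluationPeriods: '1'
    Threshold: '75'
    ComparisonOperator: GreaterThanOrEqualToThreshold
    AlarmActions:
    - Ref: ClusterScaleUpPolicy    # Ref on a scaling policy returns its ARN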

Deploying Microservices? Let mu help!

With support for ECS Application Auto Scaling coming soon, Stelligent mu offers the fastest and most comprehensive platform for deploying microservices as containers.

Want to learn more about mu from its creators? Check out the DevOps on AWS Radio podcast or find more posts on our blog.

Additional Resources

Here are some of the supporting resources discussed in this post.

We’re Hiring!

Like what you’ve read? Would you like to join a team on the cutting edge of DevOps and Amazon Web Services? We’re hiring talented engineers like you. Click here to visit our careers page.


Continuous Delivery to S3 via CodePipeline and CodeBuild

In this blog post, you see a demonstration of Continuous Delivery of a static website to Amazon S3 via AWS CodeBuild  and AWS CodePipeline. At the conclusion, you will be able to provision all of the AWS resources by clicking a “Launch Stack” button and going through the AWS CloudFormation steps to launch a solution stack.

Using S3 is useful when you want to host static files such as HTML and image files as a website for others to access. Fortunately, S3 provides us the capability to configure an S3 bucket for static website hosting. For more information on manually configuring this for a custom domain, see Example: Setting up a Static Website Using a Custom Domain.

However, once you go through this process manually a few times, and if you’re like me, you’ll quickly grow tired of manually uploading new files, deleting old files, and setting the permissions for the files in the S3 bucket.

In this example, all the source files are hosted in GitHub and can be made available to developers. All of the steps in the process are orchestrated via CodePipeline and the build and deployment actions are performed by CodeBuild. The provisioning of all of the AWS resources is defined in a CloudFormation template.

By automating the actions and stages into a deployment pipeline, you can release changes to users in production whenever you choose to do so without needing to repeatedly manually upload files to S3. Instead, you just commit the changes to the GitHub repository and the pipeline orchestrates the rest. While this is a simple example, you can follow the same model and tools for much larger and sophisticated applications.

Figure 1 shows this deployment pipeline in action.


Figure 1 – Deployment Pipeline in CodePipeline to deploy a static website to S3

The remainder of this post describes how to configure the solution in your AWS account.

Prerequisites

Here are the prerequisites for this solution:

  • AWS Account – Follow these instructions to create an AWS Account: Creating an AWS Account and grant IAM privileges to access at least CodeBuild, CodePipeline, CloudFormation, IAM, and S3.
  • Fork GitHub Repo – Fork and clone your own stelligent/devops-essentials GitHub repository
  • OAuth Token – Create an OAuth token in GitHub and provide access to the admin:repo_hook and repo scopes.

To see these steps in more detail, go to devopsessentialsaws.com and go to section 2.1 Configure course prerequisites.

Architecture and Implementation

In Figure 2, you see the architecture for provisioning an infrastructure that launches a deployment pipeline to orchestrate the build of the solution. You can click on the image to launch the template in CloudFormation Designer within your AWS account.

Figure 2 – CloudFormation Template for provisioning AWS resources

The components of this solution are described in more detail below:

  • AWS CloudFormation – All of the resource generation of this solution is described in CloudFormation which is a declarative code language that can be written in JSON or YAML (or generated by more expressive domain-specific languages)
  • AWS CodePipeline – The CodePipeline stages and actions are defined in a CloudFormation template. This includes CodePipeline’s integration with CodeCommit, CodeBuild, and CloudFormation (For more information, see Action Structure Requirements in AWS CodePipeline).
  • GitHub – CodePipeline connects with an existing GitHub repository using the GitHub Source provider action.
  • AWS CodeBuild – Creates a CodeBuild project using the AWS::CodeBuild::Project resource, which packages the site files and copies them to the S3 website bucket
  • AWS IAM – An Identity and Access Management (IAM) Role is provisioned using the AWS::IAM::Role resource which defines the resources that the pipeline, CloudFormation, and other resources can access.

CloudFormation Template

In this section, I’ll highlight a few code snippets from the CloudFormation template that automates the provisioning of the AWS Developer Tools stack along with other resources including S3 and IAM.

S3 Buckets

There are two S3 buckets provisioned in this CloudFormation template. The SiteBucket resource defines the S3 bucket that hosts all the files that are copied from the downloaded source files from Git. The PipelineBucket hosts the input artifacts for CodePipeline that are referenced across stages in the deployment pipeline.

  SiteBucket:
    Type: AWS::S3::Bucket
    Properties:
      AccessControl: PublicRead
      BucketName: !Ref SiteBucketName
      WebsiteConfiguration:
        IndexDocument: index.html
  PipelineBucket:
    Type: AWS::S3::Bucket

IAM Role

The IAM role for CodePipeline grants the pipeline the permissions it needs to access the resources required to deploy the static website.

  CodePipelineRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
        - Effect: Allow
          Principal:
            Service:
            - codepipeline.amazonaws.com
          Action:
          - sts:AssumeRole
      Path: "/"
      Policies:
      - PolicyName: codepipeline-service
        PolicyDocument:
          Statement:
          - Action:
            - codebuild:*
            Resource: "*"
            Effect: Allow
          - Action:
            - s3:GetObject
            - s3:GetObjectVersion
            - s3:GetBucketVersioning
            Resource: "*"
            Effect: Allow
          - Action:
            - s3:PutObject
            Resource:
            - arn:aws:s3:::codepipeline*
            Effect: Allow
          - Action:
            - s3:*
            - cloudformation:*
            - iam:PassRole
            Resource: "*"
            Effect: Allow
          Version: '2012-10-17'

CodePipeline

The CodePipeline pipeline CloudFormation snippet shown below defines the two stages and two actions that orchestrate the deployment of the static website. The Source action within the Source stage configures GitHub as the source provider. Then, it moves to the Deploy stage, which runs CodeBuild to copy all the HTML and other assets to an S3 bucket that’s configured to be hosted as a website.

  Pipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      RoleArn: !GetAtt CodePipelineRole.Arn
      Stages:
      - Name: Source
        Actions:
        - InputArtifacts: []
          Name: Source
          ActionTypeId:
            Category: Source
            Owner: ThirdParty
            Version: '1'
            Provider: GitHub
          OutputArtifacts:
          - Name: SourceOutput
          Configuration:
            Owner: !Ref GitHubUser
            Repo: !Ref GitHubRepo
            Branch: !Ref GitHubBranch
            OAuthToken: !Ref GitHubToken
          RunOrder: 1
      - Name: Deploy
        Actions:
        - Name: Artifact
          ActionTypeId:
            Category: Build
            Owner: AWS
            Version: '1'
            Provider: CodeBuild
          InputArtifacts:
          - Name: SourceOutput
          OutputArtifacts:
          - Name: DeployOutput
          Configuration:
            ProjectName: !Ref CodeBuildDeploySite
          RunOrder: 1
      ArtifactStore:
        Type: S3
        Location: !Ref PipelineBucket
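
The Deploy action above hands the SourceOutput artifact to the CodeBuildDeploySite project, whose buildspec copies the site files into the S3 website bucket. A minimal buildspec along these lines could do the job; the commands and the SITE_BUCKET environment variable shown here are assumptions rather than the verbatim contents of the referenced project:

version: 0.2
phases:
  build:
    commands:
    # Sync the static site files from the source artifact to the S3 website bucket
    - aws s3 sync html/ "s3://${SITE_BUCKET}" --acl public-read --delete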

Costs

Since costs can vary as you use certain AWS services and other tools, you can see a cost breakdown and some sample scenarios to give you an idea of what your monthly spend might look like. Note this will be dependent on your unique environment and deployment, and the AWS Cost Calculator can assist in establishing cost projections.

  • CloudFormation – No additional cost.
  • CodeBuild – CodeBuild charges per minute used. It comes with 100 minutes per month at no charge. For a simple execution of this demo, you can stay within the limits of the AWS Free Tier – please read about the Free Tier here. For more information, see AWS CodeBuild pricing.
  • CodePipeline – Customers can create new pipelines without incurring any charges on that pipeline for the first thirty calendar days. After that period, the new pipelines will be charged at the existing rate of $1 per active pipeline per month. For more information, see AWS CodePipeline pricing.
  • GitHub – No charge for public repositories
  • IAM – No additional cost.
  • S3 – If you launch the solution and delete the S3 bucket, it’ll be pennies (if that). See S3 Pricing.

The bottom line on pricing for this particular example is that you will be charged no more than a few pennies if you launch the solution, run through a few changes, and then terminate the CloudFormation stack and associated AWS resources.

Deployment Steps

There are three main steps in launching this solution: preparing an AWS account, launching the stack, and testing the deployment. Each is described in more detail in this section. Please note that you are responsible for any charges incurred while creating and launching your solution.

Step 1. Prepare an AWS Account

  1. If you don’t already have an AWS account, create one at http://aws.amazon.com by following the on-screen instructions. Part of the sign-up process involves receiving a phone call and entering a PIN using the phone keypad. Be sure you’ve signed up for the CloudFormation service.
  2. Use the region selector in the navigation bar of the console to choose the Northern Virginia (us-east-1) region

Step 2. Launch the Stack

Click on the “Launch Stack” button below to launch the CloudFormation stack. Before you launch the stack, review the architecture, configuration, and other considerations discussed in this post. To download the template, click here.

Time to deploy: Approximately 5 minutes

The template includes default settings that you can customize by following the instructions in this post.

Step 3. Test the Deployment

Here are the steps to test the deployment:

  1. Once the CloudFormation stack is complete, select the checkbox next to the stack and go to the Outputs tab.
  2. Click on the PipelineUrl link to launch the CodePipeline pipeline.
  3. Click on the SiteUrl link to launch the website that was configured and launched as part of the deployment pipeline
  4. From your Terminal, type (replacing YOURGITHUBUSERID with your GitHub userid):
    git clone https://github.com/YOURGITHUBUSERID/devops-essentials
  5. Make obvious visual changes to any of your local files (for example, change .bg-primary{color:#fff;background-color: in your forked repo version of devops-essentials/html/css/bootstrap.min.css) and type the following from your Terminal:
    git commit -am "add new files" && git push
  6. Go back to your pipeline in CodePipeline and see the changes successfully flow through the pipeline.

DevOps Essentials on AWS Video Course


This and many more topics are covered in the DevOps Essentials on AWS Complete Video Course (Udemy, InformIT, SafariBooksOnline). In it, you’ll learn how to automate the infrastructure and deployment pipelines using AWS services and tools, so if you’re a software or DevOps-focused engineer or architect interested in learning how to use AWS Developer and AWS Management Tools to create a full-lifecycle software delivery solution, it’s the course for you. The focus of the course is on deployment pipeline architectures and their implementations.

Additional Resources

Here are some of the supporting resources discussed in this post:

Did you find this post interesting? Are you passionate about working with the latest AWS technologies? If so, Stelligent is hiring and we would love to hear from you!

Acknowledgements

My colleague Casey Lee created the initial CodePipeline/CodeBuild/S3 CloudFormation template that’s the basis for this solution.

Enforcing Compliance with AWS Organizations

You have a large organization with several development teams that work on various software projects that support your business. A year ago, you brought in a consultant who told you to use multiple AWS accounts because there were benefits to be gained. For example, using multiple accounts, we can contain the damage from a possible security breach and isolate work by teams so that others don’t inadvertently disrupt that work. But there are also issues that we must deal with.

When a company has more than one AWS account and especially many AWS accounts, it becomes difficult to manage those accounts. How do we know that all teams are using good security policies? How do we take advantage of billing incentives for using more and more of an AWS resource? How do we manage the billing in general for all of those accounts? And if a company is in a business that requires them to comply with a set of standards such as PCI or HIPAA, how can we guarantee that teams are using only services that are certified compliant? And how can we automate the creation of new accounts in a way that they are properly configured to begin with?

What Are AWS Organizations?

AWS Organizations allows companies with multiple AWS accounts to manage those accounts from a billing and administrative perspective, all from a single root account. Why is this important? Before Organizations came along, having multiple accounts was like the Wild West. Each account was on its own and there was no way to manage all of them from one place. Users had no way to apply policies, manage permissions, or manage billing from a “company” perspective. AWS Organizations gives us the tools we need to bring these accounts together and control them all in a predictable way.

Service Control Policies (SCPs)

Service Control Policies allow us to define the services that an account can access. In our case, we know that we want to allow access to only the services that are HIPAA compliant. Any service that isn’t compliant should not be allowed to be used by the teams. Using the root account, we can push this policy out to all accounts that we have within our organization.

Organizational Units (OUs)

Most organizations have accounts that have different requirements. Using the example above, some accounts may have to be HIPAA compliant while others may be used for other purposes and do not have to follow any guidelines. AWS Organizations gives us the ability to group accounts into Organizational Units.

Organizational Units allow us to split our accounts into separate groups and apply different policies to those groups. Continuing with the example from above, we can have an OU for all accounts that must be HIPAA compliant and an OU for accounts that are general purpose. All accounts in the HIPAA OU will be restricted to only the services that are HIPAA certified while the accounts in the general purpose OU have access to all AWS services. The rules that are applied to an OU even overrule account administrators. If an admin accidentally logs into an account and specifically sets permissions in that account to allow access to a service that has been restricted at the OU level, the OU rule that was applied to the account will still block that access.

OUs can be up to 5 levels deep. You can have multiple OUs inside of an OU. This allows even more granular control over accounts. As an example, let’s assume that some of our HIPAA accounts also handle patient transactional data. This means that we are dealing with both PCI and HIPAA data in those accounts. We can create an OU inside of our HIPAA OU that restricts access to only services that are PCI compliant. The result is at the first level we have accounts that can only access HIPAA compliant services. In the PCI OU under the HIPAA OU we have accounts that can only access services which are HIPAA compliant AND PCI compliant.

One thing that must be remembered is that the root or “master” account cannot be restricted. Even if it is placed within an OU, none of the AWS services will be restricted to this account. Therefore, it is essential that the root account is not used by anyone other than the administrator of all accounts.

Account Creation Automation

It is often the case that a company will grow and will add teams as they are needed. These new teams will sometimes need their own set of accounts to work in to avoid disrupting the work of other teams. AWS Organizations provides the ability to automate this task. We can create an account, attach policies to this account, and add this account to the appropriate group all through the Organizations API. Not only is this useful for new teams, but it is also useful when developers need test accounts that need to be created quickly, then deleted when work within that account is finished.
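For example, creating a new member account and then polling for its status takes only a couple of CLI calls (we’ll cover CLI setup below; the email address, account name, and request ID here are placeholders):

aws organizations create-account --email new-team@example.com --account-name "New Team Dev"
aws organizations describe-create-account-status --create-account-request-id CREATE_ACCOUNT_REQUEST_ID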

How Does All of That Help Me?

Let’s take a look at an example and apply the tools above to solve the problems that companies with multiple accounts face. Let’s assume we have a health care company with a wide range of systems under their control. Some systems house identifiable patient data, which requires those systems to be HIPAA compliant, and some systems simply house generic data that can be used to generate high-level reports. The latter systems do not require any special treatment. One other platform the company has allows patients to log in and make payments. This platform allows users to store their credit card data for future transactions, which means the services they use must be PCI compliant.

Where Do We Start?

Before we begin we need to gather our requirements. We know that our company must be both HIPAA and PCI compliant so we can start by breaking the teams down into groups of standards they must follow.

  • HIPAA – 9 teams
  • PCI – 7 teams
  • HIPAA and PCI (these overlap with the previous groups) – 4 teams
  • None – 3 teams

Once we have our teams broken out into groups, we need to know how many accounts each team has. For this example, we are going to assume each team has 4 accounts: Dev, Test, QA, PROD. Note that we have a group of 4 accounts that overlap in service restriction requirements. Unfortunately, Organizations will not allow an account to belong to 2 Organizational Units that are at the same hierarchical level. We will discuss the details of how to achieve this later when we create our OUs and begin adding accounts to them.

Once we have our accounts grouped we are ready to start planning our organization. The resulting Organization will have this overall structure:

Figure – Overall structure of the AWS Organization (Cloudcraft diagram)

LIMITATION ALERT:

It’s worth noting at this point that AWS Organizations treats accounts differently depending on how they were originally created. The Organizations API provides the ability to remove an account from the Organization, but only if that account was invited to join the organization. If the account was created by the organization, that account cannot be removed from the organization without deleting the account entirely. The Organizations API also does not provide the ability to delete an account, no matter how it was created. To delete an account, you must log into that account and do it manually. These limitations may influence how companies want to handle bringing accounts into an organization.

One other important fact we need to know is that the account that owns the user we use to create the Organization will become the master account. Make sure never to create an Organization from an account that needs to have policies applied to it. A master account will always have “root” access, even if it is moved to an Organizational Unit that restricts services. The services of the master account cannot be restricted and the wide-open policies will always override anything that is more restrictive.

Once we have our account information, let’s move on to creating the organization.

Creating an Organization

Before we begin, we need to make sure we have the AWS Command Line tools installed on the OS of your choice. Organizations can also be managed using the AWS SDK for your language of choice, but we’re going to use the command line tools for this example. Again, make sure we are using a user from the account we want to be the master. Make sure that user is configured with your CLI tools. Once our configs have been verified, we can issue the following command:

Minimum permissions for your user:

  • organizations:CreateOrganization
aws organizations create-organization --feature-set ALL

Notice that we are passing in a parameter to the create-organization command called “feature-set”. This tells AWS what control the organization will have over our accounts. There are 2 options we can pass in here:  ALL, CONSOLIDATED_BILLING. The ALL parameter value enables consolidated billing and also allows the organization to put policies in place that can restrict the services the account can access. This is the default value if this parameter is omitted. A value of CONSOLIDATED_BILLING will allow the new organization to consolidate the billing of all accounts under the master account. The Organization will not be allowed to restrict the services each account has access to. For our company, we need ALL functionality so we retain the ability to control access for some accounts to only HIPAA and PCI compliant services.

After running this command, we get back a response from AWS

{ "Organization": { "AvailablePolicyTypes": [{ "Status": "ENABLED", "Type": "SERVICE_CONTROL_POLICY" }], "MasterAccountId": "111111111111", "MasterAccountArn": "arn:aws:organizations::111111111111:account/o-exampleorgid/111111111111", "MasterAccountEmail": "bill@example.com", "FeatureSet": "ALL", "Id": "o-exampleorgid", "Arn": "arn:aws:organizations::111111111111:organization/o-exampleorgid" } }

We need to capture the “Id” value and keep that for future use.

Let’s Add Some Accounts

Inviting Accounts

Now that we have a newly created Organization, we can start adding our accounts to our organization. As mentioned above, there are 2 ways to add an account to an Organization. The first method and the one we’ll be using primarily for our example is to send an invitation to our accounts that already exist.

I want to reiterate that it’s important to note here that any account we invite to our Organization can be removed at any time. If we want our accounts tied to this Organization without the option to be removed (as a way of ensuring our policies are always in place), we need to create that account from within the Organization. Any resources would have to be migrated from the existing account to the new account.

To send an invitation to an existing account, we can issue the following command:

Minimum permissions for your users:

  • organizations:DescribeOrganization
  • organizations:InviteAccountToOrganization
aws organizations invite-account-to-organization --target '{"Type": "ACCOUNT", "Id": "ACCOUNT_ID_NUMBER"}'

We are passing in a data structure to the target parameter of the command. In this example, we are passing in the account ID. The key Type can also have values of EMAIL or ORGANIZATION. In those cases, we would set the Id to the appropriate value.

Another optional parameter that we could have passed is “notes”. If we want to include additional information in the email that is auto-generated by Organizations, we can pass that information using the “notes” parameter.
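
For instance, an invitation keyed by email address with a custom note might look like this (the address and note text are placeholders):

aws organizations invite-account-to-organization --target '{"Type": "EMAIL", "Id": "team-lead@example.com"}' --notes "Please accept this invitation so we can manage compliance policies centrally"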

The response from this command should look like this:

{
  "Handshake": {
    "Action": "INVITE",
    "Arn": "arn:aws:organizations::111111111111:handshake/o-exampleorgid/invite/h-examplehandshakeid111",
    "ExpirationTimestamp": 1482952459.257,
    "Id": "h-examplehandshakeid111",
    "Parties": [{
      "Id": "o-exampleorgid",
      "Type": "ORGANIZATION"
    },
    {
      "Id": "juan@example.com",
      "Type": "EMAIL"
    }],
    "RequestedTimestamp": 1481656459.257,
    "Resources": [{
      "Type": "MASTER_EMAIL",
      "Value": "bill@amazon.com"
    },
    {
      "Type": "MASTER_NAME",
      "Value": "Org Master Account"
    },
    {
      "Type": "ORGANIZATION_FEATURE_SET",
      "Value": "FULL"
    },
    {
      "Type": "ORGANIZATION",
      "Value": "o-exampleorgid"
    },
    {
      "Type": "EMAIL",
      "Value": "juan@example.com"
    }],
    "State": "OPEN"
  }
}

Once again, we are interested in the “Id” value of the “Handshake” object. Each time we run the command to invite an account, we will receive this “Id” back in the response. We need to record that value for each account we invite so we can use it in the next step to accept the invitation.

Accepting Invitations

The process of inviting and adding an account to an organization is a “handshake” transaction. An invitation is sent to the account we want to add to our organization and the “owner” of that account must log in and accept that invitation. Fortunately for us, this can also be accomplished through the CLI. Again, we need to make sure our CLI is configured with a principal user that has the IAM permissions to accept that handshake. Once we have the CLI configured, we can issue the following command:

Minimum permissions for your user:

  • organizations:ListHandshakesForAccount
  • organizations:AcceptHandshake
  • organizations:DeclineHandshake
aws organizations accept-handshake --handshake-id HANDSHAKE_ID

The handshake ID that is being passed into this command was given to us in the response of the command to send the invitation.

Remember that we can also send and accept invitations through the console. For users with a few accounts, this may be acceptable. But if you are dealing with more than a few accounts you are definitely going to want to automate this process.
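
If you do script the acceptance side, the invited account can look up its own open handshakes instead of copying IDs around. Running the following with credentials from the invited account returns the handshake whose “Id” feeds the accept-handshake command shown above:

aws organizations list-handshakes-for-account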

LIMITATION ALERT:

AWS limits the number of invitations that can be sent to 20 per day. If you need to send more than that, contact customer support and they will raise your limit.

Using Organizational Units

Here’s where the real power of Organizations starts to show. Now that we have our accounts added to the Organization we need to group them into OUs and restrict the services that can be used within those accounts. Before we started creating the Organization, we took the time to group our accounts by the compliance standard they needed to adhere to. We can use that information to help us create our OUs to move our accounts into. Looking at our chart we can see that we have four different types of accounts. We have HIPAA compliant, PCI compliant, HIPAA and PCI compliant, and accounts that require no restrictions at all. We are going to create three top-level OUs and one OU that is within either the PCI or the HIPAA OU. Because we are simply overlapping 2 sets of compliance standards, it really doesn’t matter which OU we use as a parent.

We’ll start by creating the three top-level OUs. We can issue the following commands to create those:

Minimum permissions for your user:

  • organizations:CreateOrganizationalUnit
aws organizations create-organizational-unit --parent-id PARENT_ORG_ID --name HipaaOU
aws organizations create-organizational-unit --parent-id PARENT_ORG_ID --name PciOU
aws organizations create-organizational-unit --parent-id PARENT_ORG_ID --name GeneralOU

We now have three top-level Organizational Units that we can add accounts to. We have already invited all existing accounts to our Organization. They reside at the top-level of our Org. To place those accounts into the proper OU we need to issue the “move” command on each account.

Minimum permissions for your user:

  • organizations:MoveAccount
aws organizations move-account --account-id ACCOUNT_ID --source-parent-id PARENT_ORG_ID --destination-parent-id OU_ID

We will need to issue this command for each account we need to move to an OU. We need to make sure we are using the correct destination ID to place the account into the proper OU.

We need to repeat the last 2 steps to create the sub OU for our overlapping HIPAA and PCI accounts. This time around the PARENT_ORG_ID will be changed from the ID of the organization itself to the ID of the organizational unit we want to create this sub OU in. We will create this OU within the HipaaOU that we created in the previous step.

And we can move those accounts that require both HIPAA and PCI compliance into this new OU using the same command we used to move the other accounts.
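
Concretely, creating the sub OU and moving an overlapping account into it looks like this (the OU name and placeholder IDs are illustrative):

aws organizations create-organizational-unit --parent-id HIPAA_OU_ID --name HipaaPciOU
aws organizations move-account --account-id ACCOUNT_ID --source-parent-id PARENT_ORG_ID --destination-parent-id HIPAA_PCI_OU_ID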

Service Control Policies

Simply moving accounts into OUs accomplishes nothing on its own. In order to take advantage of the power of these new OUs, we need to apply policies that will restrict the services that the accounts within the OU can access. At the time of this writing, Service Control Policies are the only policies that can be applied to an OU.

In order to apply a Service Control Policy to our account, we need to create a policy file that we can pass into the create-policy command. We could place this text within the command itself, but with the number of services we need to include and the fact that we have to escape characters, that approach is error-prone and very messy. Here’s what our policy file will look like

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "ec2:*",
      "rds:*",
      "dynamodb:*"
    ],
    "Resource": "*"
  }]
}

In the above policy file, we are explicitly allowing a few services. There are many more HIPAA compliant services, but for the sake of this example, we are going to limit the policy to these three services.

TRAP FOR YOUNG PLAYERS:

It needs to be mentioned here that Service Control Policies which are applied to an OU will not grant any user any rights. We are not pushing this policy as a way to give each user in the accounts in the OU access to these services. This policy is in place as a way to restrict the permissions that can be applied to a user. And they will apply to all users, including administrators.

It’s also worth noting that the policies we are putting in place to restrict services assume that the “Allow *” policies have been removed from the root, OU, and individual accounts. If “Allow *” is still in place in any of these locations, the above policy will have no effect on the account(s) it is applied to.

We need to create two additional policy files, one for each additional OU type. Because we removed the “Allow *” policy from all accounts, OUs, and the root Organization, we will need to create a policy file for our GeneralOU that allows all services for that OU. We will reuse the PCI policy file for the sub OU that allows both HIPAA and PCI services.
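
A minimal allow-all policy file for the GeneralOU could look like the following, which is equivalent in effect to the default FullAWSAccess policy:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "*",
    "Resource": "*"
  }]
}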

Once we have our policy files in place, we can start creating those policies:

Minimum permissions for your user:

  • organizations:CreatePolicy
aws organizations create-policy --content file://allow_hipaa_policy.json --name AllowHipaaServices --type SERVICE_CONTROL_POLICY --description "This policy allows all HIPAA services"
aws organizations create-policy --content file://allow_pci_policy.json --name AllowPCIServices --type SERVICE_CONTROL_POLICY --description "This policy allows all PCI services"
aws organizations create-policy --content file://allow_all_policy.json --name AllowAllServices --type SERVICE_CONTROL_POLICY --description "This policy allows all services"

We have created three new policies that now need to be attached to our OUs.

Minimum permissions for your user:

  • organizations:AttachPolicy
aws organizations attach-policy --policy-id HIPAA_POLICY_ID --target-id HIPAA_OU_ID
aws organizations attach-policy --policy-id PCI_POLICY_ID --target-id PCI_OU_ID
aws organizations attach-policy --policy-id GENERAL_POLICY_ID --target-id GENERAL_OU_ID
aws organizations attach-policy --policy-id PCI_POLICY_ID --target-id HIPAA_PCI_OU_ID

Let’s take the time to examine what is happening here. We know that we have removed all permissions for all services for our root Organization, OUs, and accounts. We created policies that allow services that are compliant with HIPAA and PCI respectively. And we know that when we apply those policies to our OUs, the accounts within that OU will now have access to those services. In the case of the sub OU that allows both PCI and HIPAA services, the sub OU that has the overlapping accounts will inherit the services that are allowed by the HIPAA OU. Applying the AllowPCIServices policy to the sub OU will mean that in addition to the services it inherited, it will also be allowed to access the services which are PCI compliant.
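
To double-check what ended up attached where, you can list the policies on any OU or account (the target ID below is a placeholder):

Minimum permissions for your user:

  • organizations:ListPoliciesForTarget
aws organizations list-policies-for-target --target-id HIPAA_PCI_OU_ID --filter SERVICE_CONTROL_POLICY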

Conclusion

Success! We have created a new Organization, invited our accounts into that organization, and grouped those accounts into OUs so we could ensure each group of accounts is compliant with the required standards. When dealing with a few accounts, working from the command line is fine. For larger numbers of accounts, it is highly recommended to script this process.

AWS Organizations helps companies manage multiple accounts from a billing and policy standpoint. The use of Organizations helps reduce accidental security policies that violate compliance laws that companies may have to follow. It also reduces the time and effort required to create new accounts by providing an API that allows the auto-creation of new accounts with the correct policies already attached. Users can be restricted to the accounts they need access to and blocked from the accounts they don’t. All companies that have multiple accounts can benefit from the features provided by Organizations.

About Stelligent
Stelligent is an APN Advanced Consulting Partner and holds the AWS DevOps Competency. As a technology services company that provides DevOps Automation on the Amazon Web Services (AWS) Cloud, we aim for “one-click deployment.” Our reason for being is to help our customers gain the ability to continuously deploy their software, when they want to, and with confidence. We’ve been providing DevOps Automation solutions on AWS since 2009. Follow @Stelligent on Twitter. Learn more at http://www.stelligent.com

Stelligent is an APN Launch Partner for the AWS Management Tools Addition to the AWS Service Delivery Program

Stelligent, an AWS Partner Network (APN) Advanced Consulting Partner specializing exclusively in DevOps Automation on the Amazon Web Services (AWS) Cloud, announced that it is a launch partner for four additional services in the AWS Service Delivery Program: AWS CloudFormation, AWS CloudTrail, AWS Config, and Amazon EC2 Systems Manager. This means that Stelligent has demonstrated a successful track record of delivering specific AWS services and an ability to provide expertise in a particular service or skill area.


“The ability to deploy high-quality code in hours, not months, is something that we can help any company – including many in the Fortune 500 – achieve,” said Paul Duvall, Stelligent CTO and co-founder. “Using AWS Management Tools along with other AWS services, we can drastically reduce our customers’ development times while increasing the rate at which they can introduce new features.”

The AWS Service Delivery Program highlights APN Partners with a track record of delivering specific AWS services to customers. Attaining an AWS Service Delivery Distinction allows partners to differentiate themselves by showcasing to AWS customers areas of specialization.

The four AWS Management Tools added to the AWS Service Delivery Program are (Source: AWS):

  • AWS CloudFormation – Create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.
  • AWS CloudTrail – Track user activity and API usage
  • AWS Config – Record and evaluate configurations of your AWS resources
  • Amazon EC2 Systems Manager – Easily configure and manage Amazon EC2 and on-premises systems

Stelligent uses these AWS Management Tools in creating DevOps Automation solutions for customers so they can release new features to users, on demand, and reduce the costs of delivering software by reducing overall lead time. Resulting benefits include the following:

  • the ability to release software with every successful change
  • significant reduction of cycle time
  • increased confidence in what is deployed
  • increase in ability to experiment
  • reduction of overall costs

“We are proud to work with AWS to deliver DevOps Automation solutions to our customers, allowing them to release new features to users whenever they choose,” said Duvall. “Being a launch partner in the AWS Management Tools addition to the AWS Service Delivery Program means a lot to us — this is what we live and breathe, and we do so exclusively for our customers targeting AWS. We obsess over customers, and we obsess over applying what we believe are essential practices to achieve the aims of continuous delivery. This acknowledgement will help us reach still more customers who value that passion.”

About Stelligent
Stelligent is an APN Advanced Consulting Partner and holds the AWS DevOps Competency. As a technology services company that provides DevOps Automation on the Amazon Web Services (AWS) Cloud, we aim for “one-click deployment.” Our reason for being is to help our customers gain the ability to continuously deploy their software, when they want to, and with confidence. We’ve been providing DevOps Automation solutions on AWS since 2009. Follow @Stelligent on Twitter. Learn more at http://www.stelligent.com

DevOps on AWS Radio: AWS CodePipeline and Amazon Alexa (Episode 11)

In this episode, Paul Duvall and Brian Jakovich cover recent DevOps on AWS news and discuss how to use AWS CodePipeline to deploy Amazon Alexa skills.

Here are the show notes:

DevOps on AWS News

Episode Topics

  1. What was the “Use AWS CodePipeline to Deploy Amazon Alexa Skills” blog post?
  2. What is AWS CodePipeline and what are its benefits? What are alternatives to using CodePipeline?
  3. How do you create a pipeline in CodePipeline?
  4. Which AWS services does CodePipeline integrate with? How about non-AWS tools and services?
  5. How do you automate the provisioning of CodePipeline?
  6. Describe Amazon Alexa. What kinds of things can you do with Alexa? Which devices does it support?
  7. Describe Lambda.
  8. How did you orchestrate CodePipeline to deploy a Lambda function?
  9. How did you configure Alexa to run the Lambda function?
  10. How can listeners learn more about this solution?

Additional Resources

About DevOps on AWS Radio

On DevOps on AWS Radio, we cover topics around applying DevOps principles and practices such as Continuous Delivery on the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners into our roundtables and speak with engineers who’ve recently published on our blog and we’ll also be reaching out to the wider DevOps on AWS community to get their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, Containers, Serverless, etc.)
    • Storage (e.g. S3, EBS, etc.)
    • Database and Data (RDS, DynamoDB, etc.)
  • Organizational and Team Structures and Practices
  • Team and Organization Communication and Collaboration
  • Cultural Indicators
  • Version control systems and processes
  • Deployment Pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service Architectures – e.g. Microservices
  • Automation of Build and deployment processes
  • Automation of testing and other verification approaches, tools and systems
  • Automation of security practices and approaches
  • Continuous Feedback systems
  • Many other Topics…

Screencast: Full-Stack DevOps on AWS Tool

Amazon ECS (EC2 Container Service) provides an excellent platform for deploying microservices as containers. However, there is a significant learning curve for developers to get their microservices deployed. mu is a full-stack DevOps on AWS tool that simplifies and orchestrates your software delivery lifecycle (environments, services, and pipelines). It is open source and available at http://getmu.io/. You can click the YouTube link below (we’ve also provided a transcript of this screencast in this post).

Let’s demonstrate using mu to deploy a Spring Boot application to ECS. Here’s our microservice: we’ve already got our Dockerfile set up, we’ve got our Gradle file so that we can compile the code, and we see the various classes necessary for the service. We’re using Liquibase for managing our database, so that definition file is there, and we’ve got some unit tests defined. If we take a look at the Dockerfile, it’s pretty straightforward: it builds from the Java image, adds the jar, and for the entry point it just runs java -jar. Next, we run mu init, which creates two files for us: a mu.yml file, which we see here, and a buildspec.yml. We need to add some things to the files it generates – specifically, we specify Java 8 for the AWS CodeBuild image in mu.yml, and we edit the buildspec file to use gradle build for the build command. A buildspec is a standard CodeBuild file for defining your project. With our two new files, buildspec.yml and mu.yml, in place, we commit and push them to our source repository – in this case we’re using GitHub – and then we run mu pipeline up. That command creates a CloudFormation stack for managing our CodePipeline and CodeBuild projects. It prompts us for a GitHub token – the access token you’ve defined inside GitHub so that CodePipeline can access your repository – so we provide that token, and we see it creating various things like IAM roles for CodeBuild and the actual CodeBuild projects that will be used; there are quite a few different CodeBuild projects for building, testing, and deploying. Now we run mu service show, and it shows us that a pipeline has been created and has started on its first step.
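
For reference, a minimal buildspec.yml along the lines described above might look like the following – the phases and artifact paths here are assumptions for illustration, since mu init generates the actual starting point for your project (the mu.yml change that selects the Java 8 CodeBuild image is not shown because its exact key depends on your mu version):

version: 0.2
phases:
  build:
    commands:
      # Compile the Spring Boot jar with Gradle, as described in the screencast
      - gradle build
artifacts:
  files:
    # Illustrative path; adjust to where your build places the jar
    - build/libs/*.jar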

Let’s open up AWS CodePipeline in the console. Sure enough, the Source stage of our pipeline is running, and we see there’s a Build stage with Artifact and Image actions in it – that’s where we compile and build our Docker image. There’s an Acceptance stage and then a Production stage, both of which do a deployment and then testing. Jumping back over to the command line, we can run mu service show and see that the Source action is currently running. That takes a minute before we trigger the Artifact action of the Build stage, which is where we actually do the compiling. The command we can run here is mu pipeline logs -f – we add the -f so that we follow the logs. All of the output from CodeBuild gets sent to CloudWatch Logs, so the mu pipeline logs command lets us tail CloudWatch Logs and watch the activity in real time. We see our Maven artifacts being resolved for dependencies and then we see “build success”, so our artifact has been built and our unit tests have passed. It just takes a second for CodeBuild to upload the artifact and trigger the pipeline to move to the next action, the Image action. In the Image action, it runs docker build against our artifact to create a Docker image and then pushes that image up to ECR; it also creates the ECR repository, if it doesn’t exist yet, through a CloudFormation stack. We run mu pipeline logs again and can see the Image action running: we’re pulling down the Docker base image (that Java image), there’s our docker build, and now we’re pushing back up to ECR. It takes just a minute to upload the new Docker image with our Spring Boot application on it, and that completes successfully.

Now if we jump back over to mu service show and give it a second, we should see that we progress beyond the Build stage and into the Acceptance stage. In the Acceptance stage there are two actions. The first is a deploy action that takes the image that was created and creates a new ECS service for it, and that’s what we see going on here. What’s happening is that it first makes sure the environment is up to date – the ECS cluster, the Auto Scaling group for it, and all the instances for ECS – then it updates any databases that are defined, and finally it deploys the service. We see a CREATE_IN_PROGRESS status here: the deployment to the Dev environment is in progress, so there’s a CloudFormation stack being deployed. I go ahead and run mu service logs – just like there are logs for the pipeline, all the logs for your service are sent to CloudWatch Logs – so here we’re watching the logs for our service starting up; these are the Spring Boot output messages. If you’ve used Spring Boot before, it should look familiar, and being able to see its logs in real time is very helpful for troubleshooting an application.

The deployment is complete – based on the logs we saw that the service is up – so let’s go look at the environment. We do mu env list, see the Dev environment, and when we show it, we can see the EC2 instance associated with it as well as the base URL for the ELB. I’m going to run a curl command against that, adding the /bananas URI at the end of it, and pipe it to jq just to make it look pretty – and sure enough, we get a successful response. So our app has been deployed successfully, and we see that we are in the Approval stage and it’s waiting for approval, so we’ve completed the Acceptance stage.

Let’s take a look at CloudFormation to see what mu has created for us. We see a number of CloudFormation stacks over here. Remember, everything that mu does is managed through CloudFormation; there’s no other database or anything behind mu – it’s just native AWS resources. For example, if we look at the VPC stack for the dev environment, we see all the things you would expect: routes, network ACLs, subnets, a NAT gateway, and the VPC itself. If we go to the cluster stack, we see the Auto Scaling group for the ECS container instances, the application load balancer that’s defined for the environment, all the necessary security groups, and some scaling policies that scale that Auto Scaling group in or out based on how many tasks are currently running. Then there’s the service stack – the banana service has been deployed to the dev environment, and we see the IAM roles, task definition, and so on for the service.

One thing we didn’t do previously was any testing. What you can do is create a file called buildspec-test.yml; anything you define in this test YAML – it’s a standard CodeBuild buildspec file – will be run as a test action after the deployment is made. In this case we’re going to use a tool called Newman. Newman is a Node.js command-line tool for running Postman collections, and Postman is a tool for testing RESTful APIs, so we’re configuring this to run Newman against our Postman collections for our tests. We’ll also have to make a change to mu.yml – we have to configure the acceptance environment to use a Node.js CodeBuild image. With those two changes we can run mu pipeline up, which updates the CodeBuild project to use the Node.js image. Once our pipeline is up to date, we commit our change – that buildspec-test file – and once we push it up, the pipeline starts running again; this time the tests will actually run and we’ll get some assurance that the code is ready to go to production. So we make that change and push it, and if we look at the service, we’ll see that the Source action has triggered; we’ll just let this run for a while. The whole pipeline has to run, but things like the Artifact and Image actions won’t really cause any change because we didn’t actually change the source code – they go ahead and run anyway. We are now in the Image stage, taking the new jar file, building a Docker image from it, and pushing that up to ECR. We’ve now hit the Deploy stage, so the latest Docker image is being used for the ECS service.
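
A buildspec-test.yml along these lines might look roughly like the following – the Postman collection file name is an illustrative assumption, not the exact file from the screencast:

version: 0.2
phases:
  install:
    commands:
      # Install the Newman CLI used to run the Postman collection
      - npm install --global newman
  build:
    commands:
      # Run the Postman collection against the deployed service
      - newman run postman-collection.json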

Once that completes, we run mu pipeline logs again to watch the CodeBuild project doing the testing, and here we go: the testing is running. It runs npm install to install our dependencies, namely the Newman tool, and then we see some results. I see status code 200 – that looks good. Under the fail column, I see a bunch of zeros, which looks great, and then I see “build success”. So not only has our application been deployed to ECS, but we’ve also been able to test it, and those tests will now run as part of every execution of the pipeline, on every commit. The other thing to recognize here is that this application we built is managing our inventory of bananas, but it doesn’t have a real database behind it – we’re just using the H2 database that is available with Java. So let’s make a change and configure mu to provision a real database. With mu that’s as easy as defining a database: you give it a name, and you can specify other things like a type, but it will default to an Aurora RDS database. Then you’ll want to pass some environment variables so the database connection information reaches our Spring app. Since we’re using a Spring data source, it’s just a matter of defining these three environment variables. You’ll notice that the username, password, and endpoint are not actually in the mu.yml file – we don’t want those things in there. mu will create them for us and make them available as CloudFormation parameters that we can reference with the dollar-sign notation that CloudFormation offers. Now that we’ve got that change made, we add our new file, commit the change, and push it up, which triggers a new run of the pipeline. Again, we’ve got to go through all those earlier actions to ultimately get to the deploy action where the RDS database will be created. You can choose any RDS database type, but we’re using Aurora by default.

One question is: how does the password get defined? The way this works is that we use an AWS service called Parameter Store, which manages secrets. When mu starts up, it checks whether a password is defined, and if not, it generates a random 16-character string, adds it to Parameter Store, and later, when it deploys the service, it pulls the password out of Parameter Store and passes it in as an environment variable. Those parameters are encrypted with KMS (AWS Key Management Service), so they are secure.

Looking at the logs from the service now, these are our Spring Boot startup logs. What I’m expecting to see is that rather than H2 as the dialect… there you go, we see MySQL is the dialect for the connection. That tells me that Spring Boot detected our environment variables and recognized that we are in fact trying to talk to MySQL – let me highlight that here. So this tells us that our application is connecting to a MySQL database that is provided by RDS and wired up via mu. We can look at our service again, watch the pipeline run, and get confirmation that we didn’t break anything, because we have those tests as part of our pipeline now. So we’ll let this go – our tests are running – and once that completes we’ll have a good feeling that this change is ready to promote to production.

Well thanks for watching and check out https://getmu.io to learn more.

Use AWS CodePipeline to Deploy Amazon Alexa Skills

If you’ve done any experimentation with the Amazon Alexa voice service, you’ve probably learned that you can use AWS Lambda to write functions that can be executed from Alexa. As a developer, what’s exciting about this is that you can create your own custom Alexa skills to perform anything suited for voice-based computing.

You’ll probably also learn that there are numerous manual actions for integrating the various tools and code to deploy an Alexa skill. Once you create the Lambda function, you need to create a zip file with any packages that the function requires and upload it to Amazon S3. Moreover, you need to store code assets somewhere and then orchestrate the build and deployment of the function(s) that are run by your Alexa skill. Finally, you need to configure the Alexa skill itself using the Alexa Skills Kit (ASK).

In this post, you will learn how to orchestrate the deployment of an Alexa skill (written in AWS Lambda) using the AWS Developer Tools suite – including AWS CodeCommit, AWS CodeBuild, and AWS CodePipeline. The provisioning of all of the AWS resources is defined in an AWS CloudFormation template. By automating many of the actions and stages into a deployment pipeline, you can release changes to users in production whenever you choose to do so. You’ll see an example that walks you through the deployment process.

Figure 1 shows this deployment pipeline in action.


Figure 1 – Deployment Pipeline in CodePipeline to deploy a Lambda function

Prerequisites

Here are the prerequisites for this solution:

Architecture and Implementation

All code assets are stored in AWS CodeCommit. We define a deployment pipeline in AWS CodePipeline to orchestrate the solution by configuring a Source action for CodeCommit, a build action with CodeBuild, and deploy actions for a CloudFormation changeset. The provisioning of AWS resources is defined in CloudFormation.

In Figure 2, you see the architecture for provisioning an infrastructure that launches a deployment pipeline to orchestrate the build and deployment of a Lambda function. You can click on the image to launch the template in CloudFormation Designer.

Figure 2 – CloudFormation Template for provisioning AWS resources

The components of this solution are described in more detail below:

  • AWS CloudFormation – All of the resource generation of this solution is described in CloudFormation which is a declarative code language that can be written in JSON or YAML
  • AWS CodePipeline – The CodePipeline stages and actions are defined in a CloudFormation template. This includes CodePipeline’s integration with CodeCommit, CodeBuild, and CloudFormation (For more information, see Action Structure Requirements in AWS CodePipeline).
  • AWS CodeCommit – Creates a CodeCommit Git repository using the AWS::CodeCommit::Repository resource
  • AWS CodeBuild – Creates a CodeBuild project using the AWS::CodeBuild::Project resource to package and store the Lambda function
  • AWS IAM – An Identity and Access Management (IAM) Role is provisioned using the AWS::IAM::Role resource which defines the resources that the pipeline, CloudFormation, and other resources can access.
  • AWS SNS – Provisions a Simple Notification Service (SNS) Topic using the AWS::SNS::Topic resource. The SNS topic is used by the CodeCommit repository for notifications.
  • Serverless Application Model (SAM) – “The AWS Serverless Application Model (AWS SAM) extends AWS CloudFormation to provide a simplified way of defining the Amazon API Gateway APIs, AWS Lambda functions, and Amazon DynamoDB tables needed by your serverless application.” [Source]
  • Amazon Alexa – the voice service that powers Amazon Echo, provides capabilities, or skills, that enable users to interact with devices in a more intuitive way using voice.
  • AWS Lambda – The serverless function run by the Alexa skill.

The index.js file stored in CodeCommit is based on the alexa-skill-kit-sdk-factskill blueprint. As part of the deployment pipeline, the Node.js function gets packaged by CodeBuild and stored in S3. In the Deploy stage, the pipeline generates a CloudFormation template based on the Serverless Application Model and executes a change set on this template. The purpose of the generated template is to provision the Lambda function from the source in S3. Figure 3 illustrates how the Alexa skill interfaces with Lambda; a sketch of the packaging buildspec follows the figure.


Figure 3 – Alexa Skills Kit and Lambda 
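
The buildspec used by the PackageExport action is not reproduced in this post, but a minimal sketch of that packaging step might look like the following – the S3_BUCKET variable name and the npm install step are assumptions for illustration:

version: 0.2
phases:
  install:
    commands:
      # Install the Node.js dependencies for the Lambda function
      - npm install
  build:
    commands:
      # Package the function to S3 and emit the SAM template that the
      # GenerateChangeSet action reads as template-export.json
      - aws cloudformation package --template-file sam-template.yml --s3-bucket $S3_BUCKET --output-template-file template-export.json
artifacts:
  files:
    - template-export.json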

CloudFormation Template

In this section, I’ll highlight a few code snippets from the CloudFormation template that automates the provisioning of the AWS Developer Tools stack along with other resources including S3, IAM, and SNS.

IAM Role

There are several IAM roles that are provisioned in the CloudFormation template. The code shown in this section is for an IAM role that is used by the AWS Serverless Application Model for deploying the Lambda function run by the Alexa skill.

  LambdaTrustRole:
    Type: AWS::IAM::Role
    Description: Creating service role in IAM for AWS Lambda
    Properties:
      AssumeRolePolicyDocument:
        Statement:
        - Action: sts:AssumeRole
          Effect: Allow
          Principal:
            Service:
            - lambda.amazonaws.com
      ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      Path: "/"
      Policies:
      - PolicyDocument:
          Statement:
          - Action:
            - logs:CreateLogGroup
            - logs:CreateLogStream
            - logs:PutLogEvents
            Effect: Allow
            Resource: "*"
          Version: '2012-10-17'
        PolicyName: MyLambdaWorkerPolicy
      RoleName: !Ref AWS::StackName

CodePipeline

The CodePipeline CloudFormation snippet shown below defines the three stages and four actions that orchestrate the deployment of the Lambda function used by the Alexa skill. The pipeline provisions a CodeCommit source action called Source; this repository is provisioned as part of the CloudFormation template. The TemplatePath: alexa-BuildArtifact::template-export.json property in the GenerateChangeSet deploy action names the SAM file generated to provision the Lambda function that was packaged and stored by the PackageExport build action. SAM transforms this file into a CloudFormation template, which is then executed by the ExecuteChangeSet action.

  CodePipelineStack:
    Type: AWS::CodePipeline::Pipeline
    DependsOn:
    - CodeBuildWebsite
    - LambdaTrustRole
    Properties:
      RoleArn:
        Fn::Join:
        - ''
        - - 'arn:aws:iam::'
          - Ref: AWS::AccountId
          - ":role/"
          - Ref: CodePipelineRole
      Stages:
      - Name: Source
        Actions:
        - InputArtifacts: []
          Name: Source
          ActionTypeId:
            Category: Source
            Owner: AWS
            Version: '1'
            Provider: CodeCommit
          OutputArtifacts:
          - Name: MyApp
          Configuration:
            BranchName:
              Ref: RepositoryBranch
            RepositoryName:
              Ref: AWS::StackName
          RunOrder: 1
      - Name: Build
        Actions:
        - InputArtifacts:
          - Name: MyApp
          Name: PackageExport
          ActionTypeId:
            Category: Build
            Owner: AWS
            Version: '1'
            Provider: CodeBuild
          OutputArtifacts:
          - Name: alexa-BuildArtifact
          Configuration:
            ProjectName:
              Ref: CodeBuildWebsite
          RunOrder: 1
      - Name: Deploy
        Actions:
        - InputArtifacts:
          - Name: alexa-BuildArtifact
          Name: GenerateChangeSet
          ActionTypeId:
            Category: Deploy
            Owner: AWS
            Version: '1'
            Provider: CloudFormation
          OutputArtifacts: []
          Configuration:
            ActionMode: CHANGE_SET_REPLACE
            ChangeSetName: pipeline-changeset
            RoleArn:
              Fn::GetAtt:
              - CloudFormationTrustRole
              - Arn
            Capabilities: CAPABILITY_IAM
            StackName:
              Fn::Join:
              - ''
              - - ""
                - Ref: AWS::StackName
                - "-"
                - Ref: AWS::Region
                - ""
            TemplatePath: alexa-BuildArtifact::template-export.json
          RunOrder: 1
        - ActionTypeId:
            Category: Deploy
            Owner: AWS
            Provider: CloudFormation
            Version: 1
          Configuration:
            ActionMode: CHANGE_SET_EXECUTE
            ChangeSetName: pipeline-changeset
            StackName:
              Fn::Join:
              - ''
              - - ""
                - Ref: AWS::StackName
                - "-"
                - Ref: AWS::Region
                - ""
          InputArtifacts: []
          Name: ExecuteChangeSet
          OutputArtifacts: []
          RunOrder: 2
      ArtifactStore:
        Type: S3
        Location: !Ref ArtifactBucket

Serverless Application Model

With the AWS Serverless Application Model (SAM), you can simplify the process of packaging a serverless application and deploying it with CloudFormation. The sam-template.yml file below uses SAM to define the Alexa skill function. Through the generate and execute change set actions defined in the CodePipeline provisioning above, this file is transformed into a CloudFormation template. Fn::ImportValue pulls the export value from the main CloudFormation template that provisions this solution.

AWSTemplateFormatVersion: 2010-09-09
Transform:
- AWS::Serverless-2016-10-31

Resources:
  AlexaSkillFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs4.3
      Role:
        Fn::ImportValue:
          !Join ['-', [!Ref 'AWS::StackName', 'LambdaTrustRole']]
      Events:
        AlexaSkillEvent:
          Type: AlexaSkill

Costs

Since costs can vary as you use certain AWS services and other tools, below is a cost breakdown and some sample scenarios to give you an idea of what your monthly spend might look like. Note that this depends on your unique environment and deployment; the AWS Cost Calculator can assist in establishing cost projections.

  • CloudFormation – No additional cost.
  • CodeBuild – CodeBuild charges per minute used. It comes with 100 minutes per month at no charge. For a simple execution of this demo, you can stay within the limits of the AWS Free Tier – please read about the Free Tier here. For more information, see AWS CodeBuild pricing.
  • CodeCommit – If used on a small project of less than six users, there’s no additional cost. See AWS CodeCommit Pricing for more information.
  • CodePipeline – Customers can create new pipelines without incurring any charges on that pipeline for the first thirty calendar days. After that period, the new pipelines will be charged at the existing rate of $1 per active pipeline per month. For more information, see AWS CodePipeline pricing.
  • Lambda – Considering you likely won’t have over 1M requests for this particular solution, there’s no cost. The Lambda free tier includes 1M free requests per month and 400,000 GB-seconds of compute time per month. For more information, see AWS Lambda Pricing.
  • Alexa – There is no direct cost associated with using the Alexa service. If you’re using an Amazon Echo device, there is a one-time payment for the hardware, and you’re charged each time your Lambda function is run once usage exceeds the 1M free requests per month.
  • IAM – No additional cost.
  • SNS – Considering you likely won’t have over 1 million Amazon SNS requests for this particular solution, there’s no cost. For more information, see AWS SNS Pricing.

Deployment Steps

There are three main steps in launching this solution: preparing an AWS account, launching the stack, and testing the deployment. Each is described in more detail in this section. Please note that you are responsible for any charges incurred while creating and launching your solution.

Step 1. Prepare an AWS Account

  1. If you don’t already have an AWS account, create one at http://aws.amazon.com by following the on-screen instructions. Part of the sign-up process involves receiving a phone call and entering a PIN using the phone keypad. Be sure you’ve signed up for the CloudFormation service.
  2. Use the region selector in the navigation bar of the console to choose the Northern Virginia (us-east-1) region.

Step 2. Launch the Stack

Click on the “Launch Stack” button below to launch the CloudFormation stack. Before you launch the stack, review the architecture, configuration, and other considerations discussed in this post. To download the template, click here.

Time to deploy: Approximately 5 minutes

The template includes default settings that you can customize by following the instructions in this post.

Step 3. Test the Deployment

To test the deployment, you will need to configure the Alexa skill using the Amazon Developer Portal. You can use the Amazon Alexa Developer portal, a tool called Echosim, or an actual Amazon Echo device to test your skill.

Upload Code Assets to CodeCommit

  1. Once the CloudFormation stack is complete, select the checkbox next to the stack and go to the Outputs tab.
  2. Click on the PipelineUrl link to launch the CodePipeline pipeline. The Source action will be in a failed state.
  3. From the pipeline, click on the CodeCommit link and copy the command under “Clone your repository to your local computer and start working on code” to your clipboard.
  4. From your Terminal, paste and run the command on a computer for which you have configured a Git client.
  5. Copy all the files from your locally-cloned Git repository (for https://github.com/stelligent/devops-essentials/tree/master/samples/serverless/alexa) to the CodeCommit repository you just cloned.
  6. From your Terminal, type
    git add .
  7. From your Terminal, type:
    git commit -am "add new files" && git push
  8. Go back to your pipeline in CodePipeline and see the changes successfully flow through the pipeline.

Configure and Test Alexa Skill

At this time, you can’t just click a “Launch Stack” button to deploy an Alexa skill. Separately, you need to configure the Alexa skill to define the intent schema, sample utterances and, most relevant, the Lambda function ARN that was deployed as part of the CodePipeline pipeline. To configure and test your Alexa skill, follow the steps defined below.

  1. Once your pipeline has successfully completed, go to https://developer.amazon.com/alexa and click the Sign In link
  2. Use your Amazon credentials to log in to the Amazon Developer portal
  3. Select Alexa
  4. Under Alexa Skills Kit select Get Started
  5. Click Add a New Skill
  6. Enter a Name and Invocation Name, and choose Save
  7. Click Next
  8. In the Intent Schema text area, enter the contents from IntentSchema.json.
  9. In the Sample Utterances text area, enter the contents from SampleUtterances_en_US.txt.
  10. Click Next
  11. Choose the AWS Lambda ARN (Amazon Resource Name) radio button in the Service Endpoint Type section.
  12. Choose the North America checkbox
  13. Go to the Lambda console and choose the radio button next to the function that the CodePipeline pipeline generated. Then, choose the Actions button and select the Show ARN item and copy the contents that are displayed to your clipboard.
  14. Go back to the Amazon Developer Portal and paste your clipboard contents to the North America text box.
  15. Click Next
  16. In the Service Simulator section, enter “tell me a space fact” in the Enter Utterance text box and click Ask (the name of your skill). You should see a valid response in the Lambda Response text area. Go to SampleUtterances_en_US.txt for some other examples to simulate.

Alternatively, you can use the Echosim service or an actual Amazon Echo device to test your Alexa skill.

Deployment Pipeline

There are three stages and four actions that compose the pipeline that orchestrates the deployment of the Lambda function used by the Amazon Alexa service.

  • Source – In the single Source action, it uses the CodeCommit source action type to store all the code assets for the Alexa skill, infrastructure, and deployment pipeline
  • Build – In the single PackageExport action, it uses the CodeBuild build action type to package and store the Lambda function and associated files
  • Deploy
    • GenerateChangeSet – Uses the CloudFormation deploy action type to generate a change set for a CloudFormation template that defines the Lambda function
    • ExecuteChangeSet – Uses the CloudFormation deploy action type to execute the change set, deploying the Lambda function

Figure 4 annotates the stages and actions of this deployment pipeline.


Figure 4 – Annotated Deployment Pipeline for Solution

DevOps Essentials on AWS Complete Video Course

This and many more topics are covered in the DevOps Essentials on AWS Complete Video Course (release date: August 2017). In it, you’ll learn how to automate infrastructure and deployment pipelines using AWS services and tools. If you’re a software or DevOps-focused engineer or architect interested in learning how to use AWS Developer Tools to create a full-lifecycle software delivery solution, it’s the course for you. The focus of the course is on deployment pipeline architectures and their implementations.

Additional Resources

You can also provide voice-enabled applications using Amazon Lex, Amazon Polly, and other AWS services – only without the “wake word” functionality.

Here are some of the supporting resources discussed in this post:

Did you find this post interesting? Are you passionate about working with the latest AWS technologies? If so, Stelligent is hiring and we would love to hear from you!

Microservice databases with mu

mu is a tool that makes it simple and cost-efficient for developers to use AWS as the platform for running their microservices.  In this third post of the blog series focused on the mu tool, we will use mu to manage microservice databases in the pipeline we built in the first post.  

Why should my microservice manage the database?

As discussed in prior posts, adopting a microservice architecture can increase a team’s ability to deliver software faster through decoupling and team autonomy.  By decomposing an application into microservices and then giving teams complete ownership of their microservices, the teams can then make decisions and implement changes independent of other teams and their microservices.

Unless the same approach is taken to decompose the databases that support the microservices, the benefits of microservices will be limited by cross-team dependencies on shared databases. When your microservices share a database, you are in effect using the database as an API between the services.  This type of architecture causes tight coupling between services and will likely require regression testing, and even simultaneous deployment of multiple services.

Martin Fowler, in his post titled Microservices, says “Microservices prefer letting each service manage its own database.”  By decomposing all the way down into the database, you can realize the benefits of agility that microservices have to offer.

Source: https://martinfowler.com/articles/microservices.html

Let mu help!

The continuous delivery pipeline that mu creates for your microservice can manage the provisioning of a database.  Additionally, the details about the database can be injected into your service as environment variables.

Let’s demonstrate this by adding a database to the microservice pipeline we created in the first post for the banana service.

Define the database

Previously, the banana service was using an embedded H2 database.  This won’t work in a production environment so we need an RDS database instance that the microservice can use.  Adding a database for a service with mu is as simple as adding a couple lines to your mu.yml file:

service:
  name: banana-service
  port: 8080
  pathPatterns:
  - /bananas
  database:
    name: banana

By default, this will create an RDS database instance of size db.t2.small with the Aurora engine.  Next we need to reference the database from our microservice.  We can pass the database URL and credentials via environment variables:

service:
  name: banana-service
  port: 8080
  pathPatterns:
  - /bananas
  database:
    name: banana

  environment:
    SPRING_DATASOURCE_USERNAME: ${DatabaseMasterUsername}
    SPRING_DATASOURCE_PASSWORD: ${DatabaseMasterPassword}
    SPRING_DATASOURCE_URL: jdbc:mysql://${DatabaseEndpointAddress}:${DatabaseEndpointPort}/${DatabaseName}

This approach does have the disadvantage of passing database credentials as environment variables.  This presents a security issue, as any IAM user/role with access to the ECS task API would be able to discover the credentials.

AWS has recently announced IAM database authentication that can be utilized to obtain temporary database credentials from the microservice via an AWS API call.  Although we will save the details for a future blog post, for now it’s worth mentioning that mu can configure the database for IAM database authentication to work around this issue of passing credentials as environment variables.  This would be accomplished with a mu.yml like this:

service:
  name: banana-service
  port: 8080
  pathPatterns:
  - /bananas
  database:
    name: banana
    instanceClass: db.t2.medium
    iamAuthentication: true

  environment:
    SPRING_DATASOURCE_URL: jdbc:mysql://${DatabaseEndpointAddress}:${DatabaseEndpointPort}/${DatabaseName}

The configuration of the tables and the data in the database is managed with Liquibase. When the service is started, Liquibase creates or updates the database tables and data. This is accomplished by creating a file named db.changelog-master.yaml in src/main/resources/db/changelog/.
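
For a sense of what that file contains, here is a minimal sketch of a db.changelog-master.yaml – the table and column definitions are illustrative only, not the actual schema of the banana service:

databaseChangeLog:
  - changeSet:
      id: 1
      author: banana-service
      changes:
        - createTable:
            tableName: banana
            columns:
              - column:
                  name: id
                  type: bigint
                  autoIncrement: true
                  constraints:
                    primaryKey: true
              - column:
                  name: pick_date
                  type: date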

Now we can commit and push our changes to cause a new run of the pipeline to occur:

$ git add --all && git commit -m "add database" && git push

We see our pipeline is green, so we have confidence that the new database is working properly with the microservice.

Conclusion

Realizing the benefits of microservices requires decomposing not just the application, but also the databases that support it.  As demonstrated in this post, mu makes it simple to manage your databases and wire them up to your microservices.  The goal is that mu empowers you to implement microservice best practices in your application.

In the upcoming posts in this blog series, we will look into:

  • Service Discovery – use mu to enable service discovery via `Consul` to allow for inter-service communication
  • Additional Use Cases – deploy applications other than microservices via mu, like a wordpress stack

Until then, head over to stelligent/mu on GitHub and get started!

Additional Resources

Did you find this post interesting? Are you passionate about working with the latest AWS technologies? If so, Stelligent is hiring and we would love to hear from you!

Docker lifecycle automation and testing with Ruby in AWS

My friend and colleague Stephen Goncher and I got to spend some real time recently implementing a continuous integration and continuous delivery pipeline using only Ruby. We were successful in developing a new module in our pipeline gem that handles many of the Docker engine needs without skimping on testing and code quality. By using the swipely/docker-api gem, we were able to write well-tested, DRY pipeline code that can be leveraged by future users of our environment with confidence.

Our environment included the use of Amazon Web Services’ Elastic Container Registry (ECR), which proved to be more challenging to implement than we originally considered. The purpose of this post is to help others implement some basic Docker functionality in their pipelines more quickly than we did. In addition, we will showcase some of the techniques we used to test our Docker images.

Quick look at the SDK

It’s important to make the connection in your mind now that each interface in the docker gem has a corresponding API call in the Docker Engine. With that said, it would be wise to take a quick stroll through the documentation and API reference before writing any code. There are a few methods, such as Docker.authenticate!, that require some advanced configuration that is vaguely documented; you’ll need to combine all the sources to piece them together.

For those of you who are example-driven learners, be sure to check out the example project on GitHub that we put together to demonstrate these concepts.

Authenticating with ECR

We’re going to save you the trouble of fumbling through the various documentation by providing an example to authenticate with an Amazon ECR repository. The below example assumes you have already created a repository in AWS. You’ll also need to have an instance role attached to the machine you’re executing this snippet from or have your API key and secret configured.

Snippet 1. Using ruby to authenticate with Amazon ECR

require 'aws-sdk-core'
require 'base64'
require 'docker'

# AWS SDK ECR Client
ecr_client = Aws::ECR::Client.new

# Your AWS Account ID
aws_account_id = '1234567890'

# Grab your authentication token from AWS ECR
token = ecr_client.get_authorization_token(
 registry_ids: [aws_account_id]
).authorization_data.first

# Remove the https:// to authenticate
ecr_repo_url = token.proxy_endpoint.gsub('https://', '')

# Authorization token is given as username:password, split it out
user_pass_token = Base64.decode64(token.authorization_token).split(':')

# Call the authenticate method with the options
Docker.authenticate!('username' => user_pass_token.first,
                     'password' => user_pass_token.last,
                     'email' => 'none',
                     'serveraddress' => ecr_repo_url)

Pro Tip #1: The docker-api gem stores the authentication credentials in memory at runtime (see Docker.creds). If you’re using something like a Jenkins CI server to execute your pipeline in separate stages, you’ll need to re-authenticate at each step. Here’s an example of how the sample project accomplishes this.

Snippet 2. Using ruby to logout

Docker.creds = nil

Pro Tip #2: You’ll need to logout or deauthenticate from ECR in order to pull images from the public/default docker.io repository.

Build, tag and push

The basic functions of the docker-api gem are pretty straightforward to use with a vanilla configuration. When you tie in a remote repository such as Amazon ECR, there can be some gotchas. Here are some more examples of the various stages of a Docker image you’ll encounter in your pipeline. Now that you’re authenticated, let’s get to doing some real work!

The following snippets assume you’re authenticated already.

Snippet 3. The complete lifecycle of a basic Docker image

# Build our Docker image with a custom context
image = Docker::Image.build_from_dir(
 '/path/to/project',
 { 'dockerfile' => 'ubuntu/Dockerfile' }
)

# Tag our image with the complete endpoint and repo name
image.tag(repo: 'example.ecr.amazonaws.com/stelligent-example',
          tag: 'latest')

# Push only our tag to ECR
image.push(nil, tag: 'latest')

Integration Tests for your Docker Images

Here at Stelligent, we know that the key to software quality is writing tests. It’s part of our core DNA. So it’s no surprise we have a method for writing integration tests for our Docker images. The solution uses Serverspec to launch the intermediate container, execute the tests, and compile the results, while the docker-api gem we’ve been learning about builds the image and provides the image ID into the test context.

Snippet 5. Writing a serverspec test for a Docker Image

require 'serverspec'

describe 'Dockerfile' do
 before(:all) do
   set :os, family: :debian
   set :backend, :docker
   set :docker_image, '123456789' # image id
 end

 describe file('/usr/local/apache2/htdocs/index.html') do
   it { should exist }
   it { should be_file }
   it { should be_mode 644 }
   it { should contain('Automation for the People') }
 end

 describe port(80) do
   it { should be_listening }
 end
end

Snippet 6. Executing your test

$ rspec spec/integration/docker/stelligent-example_spec.rb

You’re Done!

Using a tool like swipely/docker-api to drive your automation scripts is a huge step forward in providing fast, reliable feedback in your Docker pipelines compared to writing bash. By doing so, you’re able to write unit and integration tests for your pipeline code to ensure both your infrastructure and your application are well-tested. Not only can you unit test your docker-api implementation, but you can also leverage the AWS SDK’s ability to stub responses and take your testing a step further when implementing with Amazon Elastic Container Registry.

See it in Action

We’ve put together a short (approximately 5-minute) demo of using these tools. Check it out on GitHub and take a test drive through the lifecycle of Docker within AWS.


Working with cool tools like Docker and its open source SDKs is only part of the exciting work we do here at Stelligent. To take your pipeline a step further from here, you should check out mu — a microservices platform that will deploy your newly tested docker containers. You can take that epic experience a step further and become a Stelligentsia because we are hiring innovative and passionate engineers like you!