Continuous Delivery to S3 via CodePipeline and CodeBuild

In this blog post, you’ll see a demonstration of Continuous Delivery of a static website to Amazon S3 via AWS CodeBuild and AWS CodePipeline. At the conclusion, you’ll be able to provision all of the AWS resources by clicking a “Launch Stack” button and going through the AWS CloudFormation steps to launch a solution stack.

S3 is useful when you want to host static files such as HTML and image files as a website for others to access. Fortunately, S3 provides the capability to configure a bucket for static website hosting. For more information on manually configuring this for a custom domain, see Example: Setting up a Static Website Using a Custom Domain.

However, once you go through this process manually a few times, and if you’re like me, you’ll quickly grow tired of manually uploading new files, deleting old files, and setting the permissions for the files in the S3 bucket.

In this example, all the source files are hosted in GitHub and can be made available to developers. All of the steps in the process are orchestrated via CodePipeline and the build and deployment actions are performed by CodeBuild. The provisioning of all of the AWS resources is defined in a CloudFormation template.

By automating the actions and stages into a deployment pipeline, you can release changes to users in production whenever you choose to do so without needing to repeatedly upload files to S3 by hand. Instead, you just commit the changes to the GitHub repository and the pipeline orchestrates the rest. While this is a simple example, you can follow the same model and tools for much larger and more sophisticated applications.

Figure 1 shows this deployment pipeline in action.


Figure 1 – Deployment Pipeline in CodePipeline to deploy a static website to S3

The remainder of this post describes how to configure the solution in your AWS account.

Prerequisites

Here are the prerequisites for this solution:

  • AWS Account – Follow these instructions to create an AWS account: Creating an AWS Account. Grant IAM privileges for at least CodeBuild, CodePipeline, CloudFormation, IAM, and S3.
  • Fork GitHub Repo – Fork the stelligent/devops-essentials GitHub repository and clone your fork
  • OAuth Token – Create an OAuth token in GitHub and provide access to the admin:repo_hook and repo scopes.

To see these steps in more detail, go to devopsessentialsaws.com, section 2.1 Configure course prerequisites.

Architecture and Implementation

In Figure 2, you see the architecture for provisioning an infrastructure that launches a deployment pipeline to orchestrate the build of the solution. You can click on the image to launch the template in CloudFormation Designer within your AWS account.

Figure 2 – CloudFormation Template for provisioning AWS resources

The components of this solution are described in more detail below:

  • AWS CloudFormation – All of the resource generation of this solution is described in CloudFormation which is a declarative code language that can be written in JSON or YAML (or generated by more expressive domain-specific languages)
  • AWS CodePipeline – The CodePipeline stages and actions are defined in a CloudFormation template. This includes CodePipeline’s integration with CodeCommit, CodeBuild, and CloudFormation (For more information, see Action Structure Requirements in AWS CodePipeline).
  • GitHub – CodePipeline connects with an existing GitHub repository using the GitHub Source provider action.
  • AWS CodeBuild – Creates a CodeBuild project using the AWS::CodeBuild::Project resource to copy the static website files to the S3 website bucket
  • AWS IAM – An Identity and Access Management (IAM) Role is provisioned using the AWS::IAM::Role resource which defines the resources that the pipeline, CloudFormation, and other resources can access.

CloudFormation Template

In this section, I’ll highlight a few code snippets from the CloudFormation template that automates the provisioning of the AWS Developer Tools stack along with other resources including S3 and IAM.

S3 Buckets

There are two S3 buckets provisioned in this CloudFormation template. The SiteBucket resource defines the S3 bucket that hosts all of the website files copied from the source downloaded from Git. The PipelineBucket hosts the input artifacts for CodePipeline that are referenced across stages in the deployment pipeline.

  SiteBucket:
    Type: AWS::S3::Bucket
    Properties:
      AccessControl: PublicRead
      BucketName: !Ref SiteBucketName
      WebsiteConfiguration:
        IndexDocument: index.html
  PipelineBucket:
    Type: AWS::S3::Bucket
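
Although not shown above, the template presumably also exposes the pipeline and website URLs as stack outputs – these back the PipelineUrl and SiteUrl links used in the deployment steps later in this post. A minimal sketch, assuming these logical names:

  Outputs:
    PipelineUrl:
      Description: URL of the pipeline in the CodePipeline console
      Value: !Sub https://console.aws.amazon.com/codepipeline/home?region=${AWS::Region}#/view/${Pipeline}
    SiteUrl:
      Description: URL of the S3-hosted website
      Value: !GetAtt SiteBucket.WebsiteURL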

IAM Role

The IAM role for CodePipeline grants the pipeline the permissions it needs to access the resources involved in deploying the static website.

  CodePipelineRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
        - Effect: Allow
          Principal:
            Service:
            - codepipeline.amazonaws.com
          Action:
          - sts:AssumeRole
      Path: "/"
      Policies:
      - PolicyName: codepipeline-service
        PolicyDocument:
          Statement:
          - Action:
            - codebuild:*
            Resource: "*"
            Effect: Allow
          - Action:
            - s3:GetObject
            - s3:GetObjectVersion
            - s3:GetBucketVersioning
            Resource: "*"
            Effect: Allow
          - Action:
            - s3:PutObject
            Resource:
            - arn:aws:s3:::codepipeline*
            Effect: Allow
          - Action:
            - s3:*
            - cloudformation:*
            - iam:PassRole
            Resource: "*"
            Effect: Allow
          Version: '2012-10-17'

CodePipeline

The CodePipeline pipeline CloudFormation snippet shown below defines the two stages and two actions that orchestrate the deployment of the static website. The Source action within the Source stage configures GitHub as the source provider. The pipeline then moves to the Deploy stage, which runs CodeBuild to copy all the HTML and other assets to an S3 bucket that’s configured to be hosted as a website.

  Pipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      RoleArn: !GetAtt CodePipelineRole.Arn
      Stages:
      - Name: Source
        Actions:
        - InputArtifacts: []
          Name: Source
          ActionTypeId:
            Category: Source
            Owner: ThirdParty
            Version: '1'
            Provider: GitHub
          OutputArtifacts:
          - Name: SourceOutput
          Configuration:
            Owner: !Ref GitHubUser
            Repo: !Ref GitHubRepo
            Branch: !Ref GitHubBranch
            OAuthToken: !Ref GitHubToken
          RunOrder: 1
      - Name: Deploy
        Actions:
        - Name: Artifact
          ActionTypeId:
            Category: Build
            Owner: AWS
            Version: '1'
            Provider: CodeBuild
          InputArtifacts:
          - Name: SourceOutput
          OutputArtifacts:
          - Name: DeployOutput
          Configuration:
            ProjectName: !Ref CodeBuildDeploySite
          RunOrder: 1
      ArtifactStore:
        Type: S3
        Location: !Ref PipelineBucket
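
One piece not shown above is the CodeBuildDeploySite project that the Deploy stage references. Here is a sketch of what it might look like – the CodeBuildRole service role and the html source folder are assumptions not shown in this post – with an inline buildspec that copies the site files to the website bucket:

  CodeBuildDeploySite:
    Type: AWS::CodeBuild::Project
    Properties:
      Name: !Sub ${AWS::StackName}-DeploySite
      # assumed IAM service role for CodeBuild; not shown in this post
      ServiceRole: !GetAtt CodeBuildRole.Arn
      Artifacts:
        Type: CODEPIPELINE
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/ubuntu-base:14.04
      Source:
        Type: CODEPIPELINE
        BuildSpec: !Sub |
          version: 0.1
          phases:
            post_build:
              commands:
                # copy the static site (assumed html/ folder) to the website bucket
                - aws s3 cp --recursive --acl public-read html s3://${SiteBucketName}/
      TimeoutInMinutes: 10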

Costs

Since costs can vary as you use certain AWS services and other tools, this section provides a cost breakdown and some sample scenarios to give you an idea of what your monthly spend might look like. Note that this depends on your unique environment and deployment; the AWS Cost Calculator can assist in establishing cost projections.

  • CloudFormation – No additional cost.
  • CodeBuild – CodeBuild charges per minute used. It comes with 100 minutes per month at no charge. For a simple execution of this demo, you can stay within the limits of the AWS Free Tier – please read about the Free Tier here. For more information, see AWS CodeBuild pricing.
  • CodePipeline – Customers can create new pipelines without incurring any charges on that pipeline for the first thirty calendar days. After that period, the new pipelines will be charged at the existing rate of $1 per active pipeline per month. For more information, see AWS CodePipeline pricing.
  • GitHub – No charge for public repositories
  • IAM – No additional cost.
  • S3 – If you launch the solution and delete the S3 bucket, it’ll be pennies (if that). See S3 Pricing.

The bottom line on pricing for this particular example is that you will be charged no more than a few pennies if you launch the solution, run through a few changes, and then terminate the CloudFormation stack and associated AWS resources.

Deployment Steps

There are three main steps in launching this solution: preparing an AWS account, launching the stack, and testing the deployment. Each is described in more detail in this section. Please note that you are responsible for any charges incurred while creating and launching your solution.

Step 1. Prepare an AWS Account

  1. If you don’t already have an AWS account, create one at http://aws.amazon.com by following the on-screen instructions. Part of the sign-up process involves receiving a phone call and entering a PIN using the phone keypad. Be sure you’ve signed up for the CloudFormation service.
  2. Use the region selector in the navigation bar of the console to choose the Northern Virginia (us-east-1) region.

Step 2. Launch the Stack

Click on the “Launch Stack” button below to launch the CloudFormation stack. Before you launch the stack, review the architecture, configuration, and other considerations discussed in this post. To download the template, click here.

Time to deploy: Approximately 5 minutes

The template includes default settings that you can customize by following the instructions in this post.

Step 3. Test the Deployment

Here are the steps to test the deployment:

  1. Once the CloudFormation stack is complete, select the checkbox next to the stack and go to the Outputs tab.
  2. Click on the PipelineUrl link to launch the CodePipeline pipeline.
  3. Click on the SiteUrl link to launch the website that was configured and launched as part of the deployment pipeline.
  4. From your Terminal, type (replacing YOURGITHUBUSERID with your GitHub userid):
    git clone https://github.com/YOURGITHUBUSERID/devops-essentials
  5. Make obvious visual changes to any of your local files (for example, change .bg-primary{color:#fff;background-color: in your forked repo version of devops-essentials/html/css/bootstrap.min.css) and type the following from your Terminal:
    git commit -am "add new files" && git push
  6. Go back to your pipeline in CodePipeline and see the changes successfully flow through the pipeline.

DevOps Essentials on AWS Video Course


This and many more topics are covered in the DevOps Essentials on AWS Complete Video Course (Udemy, InformIT, SafariBooksOnline). In it, you’ll learn how to automate the infrastructure and deployment pipelines using AWS services and tools, so if you’re a software or DevOps-focused engineer or architect interested in learning how to use AWS Developer and AWS Management Tools to create a full-lifecycle software delivery solution, it’s the course for you. The focus of the course is on deployment pipeline architectures and their implementations.

Additional Resources

Here are some of the supporting resources discussed in this post:

Did you find this post interesting? Are you passionate about working with the latest AWS technologies? If so, Stelligent is hiring and we would love to hear from you!

Acknowledgements

My colleague Casey Lee created the initial CodePipeline/CodeBuild/S3 CloudFormation template that’s the basis for this solution.

DevOps on AWS Radio: AWS CodePipeline and Amazon Alexa (Episode 11)

In this episode, Paul Duvall and Brian Jakovich cover recent DevOps on AWS news and discuss how to use AWS CodePipeline to deploy an Amazon Alexa skill.

Here are the show notes:

DevOps on AWS News

Episode Topics

  1. What was the “Use AWS CodePipeline to Deploy Amazon Alexa Skills” blog post?
  2. What is AWS CodePipeline and what are its benefits? What are alternatives to using CodePipeline?
  3. How do you create a pipeline in CodePipeline?
  4. Which AWS services does CodePipeline integrate with? How about non-AWS tools and services?
  5. How do you automate the provisioning of CodePipeline?
  6. Describe Amazon Alexa. What kinds of things can you do with Alexa? Which devices does it support?
  7. Describe Lambda.
  8. How did you orchestrate CodePipeline to deploy a Lambda function?
  9. How did you configure Alexa to run the Lambda function?
  10. How can listeners learn more about this solution?

Additional Resources

About DevOps on AWS Radio

On DevOps on AWS Radio, we cover topics around applying DevOps principles and practices such as Continuous Delivery on the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners into our roundtables and speak with engineers who’ve recently published on our blog and we’ll also be reaching out to the wider DevOps on AWS community to get their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, Containers, Serverless, etc.)
    • Storage (e.g. S3, EBS, etc.)
    • Database and Data (RDS, DynamoDB, etc.)
  • Organizational and Team Structures and Practices
  • Team and Organization Communication and Collaboration
  • Cultural Indicators
  • Version control systems and processes
  • Deployment Pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service Architectures – e.g. Microservices
  • Automation of Build and deployment processes
  • Automation of testing and other verification approaches, tools and systems
  • Automation of security practices and approaches
  • Continuous Feedback systems
  • Many other Topics…

Screencast: Full-Stack DevOps on AWS Tool

Amazon ECS (EC2 Container Service) provides an excellent platform for deploying microservices as containers. However, there is a significant learning curve for developers to get their microservices deployed. mu is a full-stack DevOps on AWS tool that simplifies and orchestrates your software delivery lifecycle (environments, services, and pipelines). It is open source and available at http://getmu.io/. You can click the YouTube link below (we’ve also provided a transcript of this screencast in this post).

Let’s demonstrate using mu to deploy a Spring Boot application to ECS. So, here’s our microservice, and we’ve already got our Dockerfile set up. We’ve got our Gradle file so that we can compile the code, and we see the various classes necessary for the service. We’re using Liquibase for managing our database, so that definition file is there, and we’ve got some unit tests defined. Taking a look at the Dockerfile, we see that it’s pretty straightforward: it builds from the Java image, adds the jar, and for the entry point it just runs java -jar. So, we run mu init, and that’s going to create two files for us. It creates a mu.yml file, which we see here, and we need to add some things to the file it generates – specifically, we want to specify Java 8 for the AWS CodeBuild image. Then we edit the buildspec file and tell it to use gradle build for the build command. A buildspec is a standard CodeBuild file for defining your project. So we have our two new files, buildspec.yml and mu.yml; we go ahead and commit those and push them up to our source repository – in this case we’re using GitHub. Then we run the command mu pipeline up, which creates a CloudFormation stack for managing our CodePipeline and CodeBuild projects. It prompts us for the GitHub token – this is the access token that you’ve defined inside GitHub so that CodePipeline can access your repository. We provide that token, and we see that it’s creating various things, like IAM roles for CodeBuild to do its business and the actual CodeBuild projects that will be used – there are quite a few different CodeBuild projects for building, testing, and deploying. Now we run the command mu service show, and it shows us that a pipeline has been created and has started on its first step.
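
For reference, a minimal buildspec.yml along the lines described in this step might look like the following sketch (the artifact path assumes Gradle’s default jar output location):

  version: 0.1
  phases:
    build:
      commands:
        # compile and package the Spring Boot jar
        - gradle build
  artifacts:
    files:
      # Gradle's default jar output location (assumed)
      - build/libs/*.jar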

Let’s go ahead and open up AWS CodePipeline in the console. We see that, sure enough, the Source stage of our pipeline is running, and there’s a Build stage with the Artifact and Image actions in it – that’s where we compile and build our Docker image. There’s an Acceptance stage and then a Production stage, both of which do a deployment and then testing. Jumping back over to the command line, we can run mu service show and see that the Source action is currently running. That just takes a minute before it triggers the Artifact action of the Build stage, which is where the actual compiling happens. The command we can run here is mu pipeline logs -f – we add the -f so that we follow the logs. All of the output from CodeBuild gets sent to CloudWatch Logs, and the mu pipeline logs command allows us to tail CloudWatch Logs and watch the activity in real time. We see that our Maven artifacts are being resolved for dependencies, and then we see “build success” – our artifact has been built and our unit tests have passed. It takes just a second for CodeBuild to upload the artifact and trigger the pipeline to move to the next action, the Image action. In the Image action, it runs docker build against our artifact to create a Docker image, then pushes that image up to ECR. It also creates the ECR repository, if it doesn’t exist yet, through a CloudFormation stack. We run mu pipeline logs again and can see the Image action running: we’re pulling down the Docker base image (that Java image), then there’s our docker build, and now we’re pushing back up to ECR. It takes just a minute to upload the new Docker image with our Spring Boot application on it, and that completes successfully.

Now if we jump back over to mu service show, after a moment we should see that we’ve progressed beyond the Build stage and into the Acceptance stage. In the Acceptance stage there are two actions. The first is a deploy action that takes the image that was created and creates a new ECS service for it, which is what we see going on here. First it makes sure the environment is up to date – the ECS cluster, its Auto Scaling group, and all the instances for ECS. It also updates any databases that are defined, and then finally it deploys the service. We see here there’s a CREATE_IN_PROGRESS – the status of the deployment to the Dev environment is in progress, so there’s a CloudFormation stack being deployed. I’ll go ahead and run the command mu service logs – just as there are logs for the pipeline, all the logs for your service are sent to CloudWatch Logs. Here we’re watching the logs for our service starting up; these are the Spring Boot output messages. If you’ve used Spring Boot before, this should look familiar, and being able to see the logs in real time is very helpful for troubleshooting an application.

So the deployment is complete – based on the logs, we saw that it is up – so let’s go look at the environment. We do mu env list, see the Dev environment, and when we show it, we can see the EC2 instance associated with it as well as the base URL for the ELB. I’m going to run a curl command against that URL, adding the bananas URI at the end of it, and pipe the output to jq just to make it look pretty. Sure enough, we get a successful response. So, our app has been deployed successfully, and we see that we are in the Approval stage waiting for approvals – we’ve completed the Acceptance stage.

Let’s take a look at CloudFormation to see what mu has created for us. We see there are several CloudFormation stacks over here. Remember, everything that mu does is managed through CloudFormation – there’s no other database or anything behind mu, just native AWS resources. For example, if we look at the VPC for the Dev environment, we see all the things you’d expect: routes, network ACLs, subnets, a NAT gateway, and the VPC itself. If we go to the cluster, we see the Auto Scaling group for the ECS container instances, the Application Load Balancer that’s defined for the environment, all the necessary security groups, and some scaling policies to scale that Auto Scaling group in or out based on how many tasks are currently running. And here’s the service – the banana service has been deployed to the Dev environment, and we see the IAM roles, Task Definition, and whatnot for the service.

Now, one thing we didn’t do previously was any testing. What you can do is create a file called buildspec-test.yml, and anything that you define in this test YAML will be run as a test action after the deployment is made – it’s a standard CodeBuild buildspec file. In this case we’re going to use a tool called Newman, a Node.js command-line tool for running Postman collections. Postman is a tool for testing RESTful APIs, so we’re configuring this to run Newman against our Postman collections. We also have to make a change to mu.yml – we configure the acceptance environment to use a Node.js CodeBuild image. With those two changes, we can run mu pipeline up, which updates the CodeBuild project to use the Node.js image. Once our pipeline is up to date, we commit our change – the buildspec-test file – and push it up, and the pipeline starts running again. This time the tests will actually run, and we’ll get some assurance that the code is ready to go to production. So we make that change and push it, and if we look at the service we’ll see that the Source action has triggered; we’ll just let this run for a while. The whole pipeline has to run, but actions like Artifact and Image won’t really cause any change because we didn’t actually change the source code – they go ahead and run anyway. We’re now in the Image stage, taking the new jar file, building a Docker image from it, and pushing that up to ECR. We’ve now hit the Deploy stage, so the latest Docker image is being used for the ECS service.

Once that completes, we run mu pipeline logs again to watch the CodeBuild project doing the testing. Here we go – the testing is running: it runs npm install to install our dependencies (namely the Newman tool), and then we see some results. I see status code 200 – that looks good. Under the fail column, I see a bunch of zeros, which looks great, and then I see “build success”. So not only has our application been deployed to ECS, but we’ve also been able to test it, and those tests will now run as a part of every execution of the pipeline, with every commit. The other thing we’ll recognize here is that this application we built manages our inventory of bananas, but it doesn’t have a real database behind it – we’re just using the H2 database that is available with Java. So let’s make a change and configure mu to use a real database. With mu, that’s as easy as defining a database: you give it a name, and you can specify other things like a type, but we’ll take the default of Aurora RDS. Then you’ll want to pass some environment variables with the database connection information to our Spring app. Since we’re using a Spring data source, it’s just a matter of defining three environment variables. You’ll notice that the username, password, and endpoint are not actually in the mu.yml file – we don’t want those things in there. What will happen is that mu creates them for us and makes them available as CloudFormation parameters that we can reference with the dollar-sign notation that CloudFormation offers. OK, now that we’ve got that change made, we add our new file, commit the change, and push it up, which triggers a new run of the pipeline. Again, we’ve got to go through all those earlier actions to ultimately get to the deploy action where the RDS database will be created – and again, you can choose any RDS database type, but we’re using Aurora by default.
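
As a sketch, the mu.yml database change described here might look roughly like this – the exact schema keys, the service name, and the parameter names are assumptions based on the narration rather than a verbatim copy of the file:

  service:
    name: banana-service            # assumed service name
    database:
      name: bananas                 # mu defaults to an Aurora RDS database
    environment:
      # mu generates the credentials and endpoint and exposes them as
      # CloudFormation parameters referenced with dollar-sign notation
      SPRING_DATASOURCE_USERNAME: ${DatabaseMasterUsername}
      SPRING_DATASOURCE_PASSWORD: ${DatabaseMasterPassword}
      SPRING_DATASOURCE_URL: jdbc:mysql://${DatabaseEndpointAddress}:${DatabaseEndpointPort}/bananas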

Now, one question is: how does the password get defined? The way this works is that we use a service AWS provides called Parameter Store, which manages secrets. When mu starts up, it checks whether there’s a password defined, and if not, it generates a random 16-character string and adds it to Parameter Store. Later on, when it deploys the service, it pulls the password out of Parameter Store and passes it in as an environment variable. Those parameters are encrypted with KMS – the AWS Key Management Service – so they are secure.

OK, so looking at the logs from the service now, these are our Spring Boot startup logs. What I’m expecting to see is that rather than H2 as the dialect…there you go, we see MySQL is the dialect for the connection. That tells me that Spring Boot detected our environment variables and recognized that we are in fact trying to talk to MySQL – let me highlight that here. So, this tells us that our application is in fact connecting to a MySQL database, which is provided by RDS and wired up via mu. We can look at our service again and watch the pipeline run, and we can get some confirmation that we didn’t break anything, because we have those tests as a part of our pipeline now. So we’ll let this go and – our tests are running. Once that completes, we’ll have a good feeling that this change is ready to promote to production.

Well thanks for watching and check out https://getmu.io to learn more.

Use AWS CodePipeline to Deploy Amazon Alexa Skills

If you’ve done any experimentation with the Amazon Alexa voice service, you’ve probably learned that you can use AWS Lambda to write functions that can be executed from Alexa. As a developer, what’s exciting about this is that you can create your own custom Alexa skills to perform anything suited for voice-based computing.

You’ll probably also learn that there are numerous manual actions for integrating the various tools and code to deploy an Alexa skill. Once you create the Lambda function, you need to create a zip file with any packages that the function requires and upload it to Amazon S3. Moreover, you need to store code assets somewhere and then orchestrate the build and deployment of the function(s) that are run by your Alexa skill. Finally, you need to configure the Alexa skill itself using the Alexa Skills Kit (ASK).

In this post, you will learn how to orchestrate the deployment of an Alexa skill (written in AWS Lambda) using the AWS Developer Tools suite – including AWS CodeCommit, AWS CodeBuild, and AWS CodePipeline. The provisioning of all of the AWS resources is defined in an AWS CloudFormation template. By automating many of the actions and stages into a deployment pipeline, you can release changes to users in production whenever you choose to do so. You’ll see an example that walks you through the deployment process.

Figure 1 shows this deployment pipeline in action.


Figure 1 – Deployment Pipeline in CodePipeline to deploy a Lambda function

Prerequisites

Here are the prerequisites for this solution:

Architecture and Implementation

All code assets are stored in AWS CodeCommit. We define a deployment pipeline in AWS CodePipeline to orchestrate the solution by configuring a Source action for CodeCommit, a build action with CodeBuild, and deploy actions for a CloudFormation changeset. The provisioning of AWS resources is defined in CloudFormation.

In Figure 2, you see the architecture for provisioning an infrastructure that launches a deployment pipeline to orchestrate the build and deployment of a Lambda function. You can click on the image to launch the template in CloudFormation Designer.

Figure 2 – CloudFormation Template for provisioning AWS resources

The components of this solution are described in more detail below:

  • AWS CloudFormation – All of the resource generation of this solution is described in CloudFormation which is a declarative code language that can be written in JSON or YAML
  • AWS CodePipeline – The CodePipeline stages and actions are defined in a CloudFormation template. This includes CodePipeline’s integration with CodeCommit, CodeBuild, and CloudFormation (For more information, see Action Structure Requirements in AWS CodePipeline).
  • AWS CodeCommit – Creates a CodeCommit Git repository using the AWS::CodeCommit::Repository resource
  • AWS CodeBuild – Creates a CodeBuild project using the AWS::CodeBuild::Project resource to package and store the Lambda function
  • AWS IAM – An Identity and Access Management (IAM) Role is provisioned using the AWS::IAM::Role resource which defines the resources that the pipeline, CloudFormation, and other resources can access.
  • AWS SNS – Provisions a Simple Notification Service (SNS) Topic using the AWS::SNS::Topic resource. The SNS topic is used by the CodeCommit repository for notifications.
  • Serverless Application Model (SAM) – “The AWS Serverless Application Model (AWS SAM) extends AWS CloudFormation to provide a simplified way of defining the Amazon API Gateway APIs, AWS Lambda functions, and Amazon DynamoDB tables needed by your serverless application.” [Source]
  • Amazon Alexa – the voice service that powers Amazon Echo, provides capabilities, or skills, that enable users to interact with devices in a more intuitive way using voice.
  • AWS Lambda – The serverless function run by the Alexa skill.

The index.js file stored in CodeCommit is based on the alexa-skill-kit-sdk-factskill blueprint. As part of the deployment pipeline, the Node.js function gets packaged by CodeBuild and stored in S3. In the Deploy stage, the pipeline generates a CloudFormation template based on the Serverless Application Model and executes a change set on this template. The purpose of the generated template is to provision the Lambda function from the source in S3. Figure 3 illustrates how the Alexa skill interfaces with Lambda.
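
The buildspec used by the PackageExport build action isn’t shown in this post, but a minimal sketch might look like the following – the $S3_BUCKET variable is an assumption, while template-export.json matches the TemplatePath the pipeline references below:

  version: 0.1
  phases:
    build:
      commands:
        - npm install
        # package the function code to S3 and rewrite the SAM template to reference it
        - aws cloudformation package --template-file sam-template.yml --s3-bucket $S3_BUCKET --output-template-file template-export.json
  artifacts:
    files:
      - template-export.json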


Figure 3 – Alexa Skills Kit and Lambda 

CloudFormation Template

In this section, I’ll highlight a few code snippets from the CloudFormation template that automates the provisioning of the AWS Developer Tools stack along with other resources including S3, IAM, and SNS.

IAM Role

There are several IAM roles that are provisioned in the CloudFormation template. The code shown in this section is for an IAM role that is used by the AWS Serverless Application Model for deploying the Lambda function run by the Alexa skill.

  LambdaTrustRole:
    Type: AWS::IAM::Role
    Description: Creating service role in IAM for AWS Lambda
    Properties:
      AssumeRolePolicyDocument:
        Statement:
        - Action: sts:AssumeRole
          Effect: Allow
          Principal:
            Service:
            - lambda.amazonaws.com
      ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      Path: "/"
      Policies:
      - PolicyDocument:
          Statement:
          - Action:
            - logs:CreateLogGroup
            - logs:CreateLogStream
            - logs:PutLogEvents
            Effect: Allow
            Resource: "*"
          Version: '2012-10-17'
        PolicyName: MyLambdaWorkerPolicy
      RoleName: !Ref AWS::StackName

CodePipeline

The CodePipeline pipeline CloudFormation snippet shown below defines the three stages and four actions that orchestrate the deployment of the Lambda function used by the Alexa skill. The pipeline provisions a CodeCommit source action called Source; this repository is provisioned as part of the CloudFormation template. The TemplatePath: alexa-BuildArtifact::template-export.json property in the GenerateChangeSet deploy action configures the name of the SAM file generated from the Lambda function that was packaged and stored by the PackageExport build action. SAM transforms this file into a CloudFormation template that is executed by the ExecuteChangeSet action.

  CodePipelineStack:
    Type: AWS::CodePipeline::Pipeline
    DependsOn:
    - CodeBuildWebsite
    - LambdaTrustRole
    Properties:
      RoleArn:
        Fn::Join:
        - ''
        - - 'arn:aws:iam::'
          - Ref: AWS::AccountId
          - ":role/"
          - Ref: CodePipelineRole
      Stages:
      - Name: Source
        Actions:
        - InputArtifacts: []
          Name: Source
          ActionTypeId:
            Category: Source
            Owner: AWS
            Version: '1'
            Provider: CodeCommit
          OutputArtifacts:
          - Name: MyApp
          Configuration:
            BranchName:
              Ref: RepositoryBranch
            RepositoryName:
              Ref: AWS::StackName
          RunOrder: 1
      - Name: Build
        Actions:
        - InputArtifacts:
          - Name: MyApp
          Name: PackageExport
          ActionTypeId:
            Category: Build
            Owner: AWS
            Version: '1'
            Provider: CodeBuild
          OutputArtifacts:
          - Name: alexa-BuildArtifact
          Configuration:
            ProjectName:
              Ref: CodeBuildWebsite
          RunOrder: 1
      - Name: Deploy
        Actions:
        - InputArtifacts:
          - Name: alexa-BuildArtifact
          Name: GenerateChangeSet
          ActionTypeId:
            Category: Deploy
            Owner: AWS
            Version: '1'
            Provider: CloudFormation
          OutputArtifacts: []
          Configuration:
            ActionMode: CHANGE_SET_REPLACE
            ChangeSetName: pipeline-changeset
            RoleArn:
              Fn::GetAtt:
              - CloudFormationTrustRole
              - Arn
            Capabilities: CAPABILITY_IAM
            StackName:
              Fn::Join:
              - ''
              - - ""
                - Ref: AWS::StackName
                - "-"
                - Ref: AWS::Region
                - ""
            TemplatePath: alexa-BuildArtifact::template-export.json
          RunOrder: 1
        - ActionTypeId:
            Category: Deploy
            Owner: AWS
            Provider: CloudFormation
            Version: 1
          Configuration:
            ActionMode: CHANGE_SET_EXECUTE
            ChangeSetName: pipeline-changeset
            StackName:
              Fn::Join:
              - ''
              - - ""
                - Ref: AWS::StackName
                - "-"
                - Ref: AWS::Region
                - ""
          InputArtifacts: []
          Name: ExecuteChangeSet
          OutputArtifacts: []
          RunOrder: 2
      ArtifactStore:
        Type: S3
        Location: !Ref ArtifactBucket

Serverless Application Model

With the AWS Serverless Application Model (SAM), you can simplify the process of packaging a serverless application and deploying it with CloudFormation. The sam-template.yml file below uses SAM to define an Alexa skill function. Through the generate and execute change set actions defined in the CodePipeline provisioning above, this file is transformed into a CloudFormation template. Fn::ImportValue pulls the export value from the main CloudFormation template that provisions this solution.

AWSTemplateFormatVersion: 2010-09-09
Transform:
- AWS::Serverless-2016-10-31

Resources:
  AlexaSkillFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs4.3
      Role:
        Fn::ImportValue:
          !Join ['-', [!Ref 'AWS::StackName', 'LambdaTrustRole']]
      Events:
        AlexaSkillEvent:
          Type: AlexaSkill
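
For the Fn::ImportValue lookup above to resolve, the main CloudFormation template presumably exports the role ARN under a matching name. A sketch of such an export, assuming these logical names:

  Outputs:
    LambdaTrustRole:
      Description: IAM role ARN for the Alexa skill's Lambda function
      Value: !GetAtt LambdaTrustRole.Arn
      Export:
        Name: !Join ['-', [!Ref 'AWS::StackName', 'LambdaTrustRole']]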

Costs

Since costs can vary as you use certain AWS services and other tools, this section provides a cost breakdown and some sample scenarios to give you an idea of what your monthly spend might look like. Note that this depends on your unique environment and deployment; the AWS Cost Calculator can assist in establishing cost projections.

  • CloudFormation – No additional cost.
  • CodeBuild – CodeBuild charges per minute used. It comes with 100 minutes per month at no charge. For a simple execution of this demo, you can stay within the limits of the AWS Free Tier – please read about the Free Tier here. For more information, see AWS CodeBuild pricing.
  • CodeCommit – If used on a small project of less than six users, there’s no additional cost. See AWS CodeCommit Pricing for more information.
  • CodePipeline – Customers can create new pipelines without incurring any charges on that pipeline for the first thirty calendar days. After that period, the new pipelines will be charged at the existing rate of $1 per active pipeline per month. For more information, see AWS CodePipeline pricing.
  • Lambda – Considering you likely won’t have over 1M requests for this particular solution, there’s no cost. The Lambda free tier includes 1M free requests per month and 400,000 GB-seconds of compute time per month. For more information, see AWS Lambda Pricing.
  • Alexa – There is no direct cost associated with using the Alexa service. If you’re using an Amazon Echo device, there is a one-time payment for the hardware, and you’re charged every time your Lambda function is run (once it exceeds 1M free requests per month).
  • IAM – No additional cost.
  • SNS – Considering you likely won’t have over 1 million Amazon SNS requests for this particular solution, there’s no cost. For more information, see AWS SNS Pricing.

Deployment Steps

There are three main steps in launching this solution: preparing an AWS account, launching the stack, and testing the deployment. Each is described in more detail in this section. Please note that you are responsible for any charges incurred while creating and launching your solution.

Step 1. Prepare an AWS Account

  1. If you don’t already have an AWS account, create one at http://aws.amazon.com by following the on-screen instructions. Part of the sign-up process involves receiving a phone call and entering a PIN using the phone keypad. Be sure you’ve signed up for the CloudFormation service.
  2. Use the region selector in the navigation bar of the console to choose the Northern Virginia (us-east-1) region.

Step 2. Launch the Stack

Click on the “Launch Stack” button below to launch the CloudFormation stack. Before you launch the stack, review the architecture, configuration, and other considerations discussed in this post. To download the template, click here.

Time to deploy: Approximately 5 minutes

The template includes default settings that you can customize by following the instructions in this post.

Step 3. Test the Deployment

To test the deployment, you will need to configure the Alexa skill using the Amazon Developer Portal. You can use the Amazon Alexa Developer portal, a tool called Echosim, or an actual Amazon Echo device to test your skill.

Upload Code Assets to CodeCommit

  1. Once the CloudFormation stack is complete, select the checkbox next to the stack and go to the Outputs tab.
  2. Click on the PipelineUrl link to launch the CodePipeline pipeline. The Source action will be in a failed state.
  3. From the pipeline, click on the CodeCommit link and copy the command under “Clone your repository to your local computer and start working on code” to your clipboard.
  4. From your Terminal, paste and run the command on a computer on which you have configured a Git client.
  5. Copy all the files from your locally-cloned Git repository (for https://github.com/stelligent/devops-essentials/tree/master/samples/serverless/alexa) to the CodeCommit repository you just cloned.
  6. From your Terminal, type
    git add .
  7. From your Terminal, type:
    git commit -am "add new files" && git push
  8. Go back to your pipeline in CodePipeline and see the changes successfully flow through the pipeline.

Configure and Test Alexa Skill

At this time, you can’t just click a “Launch Stack” button to deploy an Alexa skill. Separately, you need to configure the Alexa skill to define the intent schema, sample utterances and, most relevant, the Lambda function ARN that was deployed as part of the CodePipeline pipeline. To configure and test your Alexa skill, follow the steps defined below.

  1. Once your pipeline has successfully completed, go to https://developer.amazon.com/alexa and click the Sign In link
  2. Use your Amazon credentials to login to the Amazon Developer portal
  3. Select Alexa
  4. Under Alexa Skills Kit select Get Started
  5. Click Add a New Skill
  6. Enter a Name and Invocation Name and Choose Save
  7. Click Next
  8. In the Intent Schema text area, enter the contents from IntentSchema.json.
  9. In the Sample Utterances text area, enter the contents from SampleUtterances_en_US.txt.
  10. Click Next
  11. Choose the AWS Lambda ARN (Amazon Resource Name) radio button in the Service Endpoint Type section.
  12. Choose the North America checkbox
  13. Go to the Lambda console and choose the radio button next to the function that the CodePipeline pipeline generated. Then, choose the Actions button and select the Show ARN item and copy the contents that are displayed to your clipboard.
  14. Go back to the Amazon Developer Portal and paste your clipboard contents to the North America text box.
  15. Click Next
  16. In the Service Simulator section, enter “tell me a space fact” in the Enter Utterance text box and click Ask (the name of your skill). You should see a valid response in the Lambda Response text area. Go to SampleUtterances_en_US.txt for some other examples to simulate.

Alternatively, you can use the Echosim service or an actual Amazon Echo device to test your Alexa skill.

Deployment Pipeline

There are three stages and four actions that compose the pipeline that orchestrates the deployment of the Lambda function used by the Amazon Alexa service.

  • Source – In the single Source action, it uses the CodeCommit source action type to store all the code assets for the Alexa skill, infrastructure, and deployment pipeline
  • Build – In the single PackageExport action, it uses the CodeBuild build action type to package and store the Lambda function and associated files
  • Deploy
    • GenerateChangeSet – Uses the CloudFormation deploy action type to generate a change set for a CloudFormation template that defines the Lambda function
    • ExecuteChangeSet – Uses the CloudFormation deploy action type to execute the change set, deploying the Lambda function

Figure 4 annotates the stages and actions of this deployment pipeline.


Figure 4 – Annotated Deployment Pipeline for Solution

DevOps Essentials on AWS Complete Video Course

This and many more topics are covered in the DevOps Essentials on AWS Complete Video Course (release date: August 2017). In it, you’ll learn how to automate the infrastructure and deployment pipelines using AWS services and tools, so if you’re a software or DevOps-focused engineer or architect interested in learning how to use AWS Developer Tools to create a full-lifecycle software delivery solution, it’s the course for you. The focus of the course is on deployment pipeline architectures and their implementations.

Additional Resources

You can also provide voice-enabled applications using Amazon Lex, Amazon Polly, and other AWS services – only without the “wake word” functionality.

Here are some of the supporting resources discussed in this post:

Did you find this post interesting? Are you passionate about working with the latest AWS technologies? If so, Stelligent is hiring and we would love to hear from you!

AWS CodeStar – Quickly develop, build, and deploy applications on AWS

AWS CodeStar is a new service that changes the way development teams deliver software in AWS. CodeStar makes the process of setting up software applications for continuous delivery easier to manage through integrated authorization and access management, centralized member collaboration, and automated environment provisioning.

(1) “Working with AWS CodeStar Teams.” Working with AWS CodeStar Teams – AWS CodeStar. Amazon Web Services, 2017. Web. 01 May 2017. – http://docs.aws.amazon.com/codestar/latest/userguide/working-with-teams.html

Through the use of CodeStar you can now automatically create entire environments for your application and all of its associated AWS resources. Furthermore, CodeStar is great for groups who are starting brand-new applications and projects. Because of the simplicity of CodeStar, development teams can create efficient software workflows that will be able to build, test, and release software on AWS much faster than before. Some of the benefits of CodeStar include:

  • Automatic Provisioning of Resources: When you create a project through CodeStar, AWS will automatically provision a handful of the underlying resources that will be part of your software’s environment through the use of AWS CloudFormation. Some of these resources could include AWS Elastic Beanstalk, AWS EC2 instances, AWS S3 Buckets, and an AWS CodeCommit repository. One of the most significant resources that CodeStar creates is a continuous delivery pipeline. This pipeline is built using AWS CodePipeline and initially contains two stages: a Source (Commit) stage and an Application (Deploy) stage. If you need additional stages, you can modify your CodePipeline pipeline accordingly.
  • Pre-built Code Templates: When you begin the process of creating a project with CodeStar, you are given the option to choose from many pre-built code templates used to build applications that will run on AWS Elastic Beanstalk, AWS EC2, or AWS Lambda. These pre-built templates come with sample code applications that are already set up and ready to be modified, and you can choose among five programming languages to build your software in: Ruby, Python, PHP, Java, and JavaScript. After you choose your programming language, you then have the option to choose from three ways of editing your project code: Visual Studio, Eclipse, or Command Line Tools.

For the remainder of this blog I will demonstrate how to set up and build a CodeStar project using a Ruby on Rails template and then deploy the sample application on an AWS EC2 instance.

CodeStar Project with Ruby on Rails

Creating your CodeStar Project

  1. The first thing you will need to do to create your CodeStar project is to log into your AWS console, go to the CodeStar console, and select “Create New Project”.
  2. You will be directed to a page that displays the wide variety of project templates for you to choose from. The types of applications this service supports include templates ready to deploy on:
    1. AWS Elastic Beanstalk (Automated management of capacity and load balancing), Amazon EC2 with AWS CodeDeploy (Flexible deployment onto any type of instance), and AWS Lambda (Lambda is serverless technology and uses AWS CodeBuild to build your artifacts automatically)
      1. Side note: As of now it is not possible to create a CodeStar project via a CloudFormation template. It is also not possible to start a CodeStar project with your already-built application or to use GitHub as your code repository. The only way to achieve this would be to modify the Source stage of the CodePipeline that gets created for you once it is complete.
    2. For my example I am going to choose the “Ruby on Rails Web Application” that will be running on an Amazon EC2 instance.


3. You will then be prompted to enter the name for your project (Project name) and will be able to edit the Project ID as well. You can also choose whether or not to allow AWS CodeStar to administer AWS resources on your behalf by checking/unchecking the box at the bottom of the page. If you chose a template that has a project running on EC2 (such as my example), you will be able to edit the EC2 configuration as well. This includes choosing:

  1. Your own VPC (you have the choice of being assigned a default VPC and Subnet or choosing an existing one. You cannot create a VPC here.)
    1. Side note: To create an AWS VPC and a subnet you must go into the Networking & Content Delivery Console: VPC section and create them.
  2. Your Subnet to deploy your instance into
  3. The instance type (I chose t2.micro) 


4. Select your AWS EC2 Keypair and select “Create Project”

5. You will then be able to choose how you want to edit your project code from the following three choices (Visual Studio, Eclipse, or Command line tools). For my example I chose Command line tools. At the bottom of the page will also be the code repository URL for your project and you can choose an access method between SSH and HTTPS.

6. The next page will be the Connect to your tools page which is where you’ll select your local machine’s operating system (macOS, Windows, Linux) and your connection method (HTTPS, SSH).

  1. For HTTPS connection: If you haven’t done so already, you will need to install a Git client on your local machine (there is a link to install it in Step 1). You will also need to generate your AWS IAM user Git credentials by clicking the “here” link in Step 2. Once you have completed the first two steps, you can clone your repository onto your local machine by copying the Git command in Step 3 and pasting it into whatever directory you would like in your terminal. Once you have cloned the repository, you will be prompted for your user name and password, which are the Git credentials that you generated for your IAM user. Hit the “Skip” button below to continue on to your management dashboard.
  2. For SSH connection: If you haven’t done so already, you will need to install a Git client on your local machine (there is a link to install it in Step 1). You will then need to register your SSH public key (for help on how to do this, see the link in the instructions in Step 2). Once you have registered your SSH key, go into your ~/.ssh directory in your terminal and create a file named “config”. Add the following lines to this file:
Host git-codecommit.*.amazonaws.com
User Your-IAM-SSH-Key-ID-Here
IdentityFile ~/.ssh/Your-Private-Key-File-Name-Here

Once you have saved the file, you will need to ensure it has the right permissions by running the following command in your ~/.ssh directory:

chmod 600 config

After you have followed these steps you can clone the project repository onto your local machine by copying and pasting the command located in Step 4.

As mentioned earlier in this article, when you go through the process of creating your CodeStar project, if you selected the box that “allows AWS to administer resources on your behalf”, CodeStar creates a CloudFormation stack that automatically deploys the environment and resources for your application. Here is what the CloudFormation stack and its resources look like if you chose to create the Ruby on Rails application on an EC2 instance:


Pre-configured Management Dashboard

After you have created your CodeStar project, you will be given a pre-configured centralized management dashboard from which you will be able to view a variety of events that are going on with your application project. Things that are viewable in the default dashboard include your:

  • Application’s resource activity metrics via AWS CloudWatch
  • Code commits history
  • Your application’s endpoint (Outlined in red: my example contains a public EC2 DNS endpoint)
  • A visual of your AWS CodePipeline in which you can see real time progress of your software’s continuous delivery cycle.
  • You also have the option to add the Atlassian Jira Software extension to your dashboard so that you can directly track your application project’s issues and its collaborator’s tasks


From the dashboard you can Configure issue tracking, which enables you to integrate the Jira extension into your project for easy tracking. You are also able to set up the team members who will be given access to work on your project and determine which role they will have on it. You just have to pick their IAM user name, choose whether remote access is allowed, and select one of the following roles for them:

  • Viewer
  • Contributor
  • Owner

Start Modifying Your Rails Application

For this example I will be opening up my sample Rails application by going to the application endpoint link on the CodeStar dashboard. The first modification that I will be making will be to the opening “hello page” of the application. Here is what the opening page of the sample application looks like when I go to the application endpoint:


Assuming that you have cloned the Git repository for your project onto your local machine, you can now start to modify your Rails application and make changes using your own text editor. For this example I am just going to remove the links on the home page (/app/views/hello_page/hello.html.erb) and change some of the wording. After making my slight changes to the “hello page” and saving it, I can go into my Git repository in my local machine’s terminal and type the following commands to push my most recent changes:

git status
  • This will show you what changes have been made to your project


git add app/views/hello_page/hello.html.erb
  • This will stage your changes to the hello page
git commit -m "[your message about the changes that have been made]"
git push
  • This will push your newly modified project into your code pipeline and automatically trigger the continuous deployment cycle.

Here is what will happen to your CodePipeline on your dashboard when you “git push” your changes:


Once the pipeline has succeeded through the Application stage, refresh the browser page with your application’s endpoint and see the new changes that have been made to your Rails application:


From here on out you have a full Ruby on Rails application framework running on an Amazon EC2 instance where you can start to build/modify your own custom application. For more information about what you can do with your new Rails application, please refer to the README that can be accessed by clicking on the “Code” box on the left side of your CodeStar dashboard.

Additional Resources

Here are some additional resources you might find useful:

Summary

In this post we talked about how to use the newly added AWS CodeStar service and discovered the benefits that it can offer to a variety of users. You learned about the different types of projects that CodeStar can create and how to easily interact with those projects upon their creation.

Let us know if you have any comments or questions @stelligent or @TreyMcElhattan

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “one-button everything” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

Microservice testing with mu: injecting quality into the pipeline

mu is a tool that makes it simple and cost-efficient for developers to use AWS as the platform for running their microservices.  In this second post of the blog series focused on the mu tool, we will use mu to incorporate automated testing in the microservice pipeline we built in the first post.  

Why should I care about testing?

Most people, when asked why they want to adopt continuous delivery, will reply that they want to “go faster”.  Although continuous delivery will enable teams to get to production quicker, people often overlook the fact that it will also improve the quality of the software…at the same time.

Martin Fowler, in his post titled ContinuousDelivery, says you’re doing continuous delivery when:

  • Your software is deployable throughout its lifecycle
  • Your team prioritizes keeping the software deployable over working on new features
  • Anybody can get fast, automated feedback on the production readiness of their systems any time somebody makes a change to them
  • You can perform push-button deployments of any version of the software to any environment on demand

It’s important to recognize that the first three points are all about quality.  Only when a team focuses on injecting quality throughout the delivery pipeline can they safely “go faster”.  Fowler’s list of continuous delivery characteristics is helpful in assessing when a team is doing it right.  In contrast, here is a list of indicators that show when a team is doing it wrong:

  • Testing is done late in a sprint or after multiple sprints
  • Developers don’t care about quality…that is left to the QA team
  • A limited number of people are able to execute tests and assess production readiness
  • Majority of tests require manual execution

This problem is only compounded with microservices.  By increasing the number of deployable artifacts by a factor of 10x or 100x, you are increasing the complexity of the system and therefore the volume of testing required.  In short, if you are trying to do microservices and continuous delivery without considering test automation, you are doing it wrong.

Let mu help!

The continuous delivery pipeline that mu creates for your microservice will run automated tests that you define on every execution of the pipeline.  This provides quick feedback to all team members as to the production readiness of your microservice.

mu accomplishes this by adding a step to the pipeline that runs a CodeBuild project to execute your tests.  Any tool that you can run from within CodeBuild can be used to test your microservice.

Let’s demonstrate this by adding automated tests to the microservice pipeline we created in the first post for the banana service.

Define tests with Postman

First, we’ll use Postman to define a test collection for our microservice.  Details on how to use Postman are beyond the scope of this post, but there are plenty of good videos available online to learn more.

I started by creating a test collection named “Bananas”.  Then I created requests in the collection for the various REST endpoints I have in my microservice.  The requests use a Postman variable named “BASE_URL” in the URL to allow these tests to be run in other environments.  Finally, I defined tests in the JavaScript DSL that is provided by Postman to validate the results match my expectations.

Below, you will find an example of one of the requests in my collection:

[Screenshot: a request from the Bananas collection in Postman, with its test script]
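
For readers without the screenshot, here is roughly what the test script for the “All Bananas” request might look like in Postman’s JavaScript test DSL; the request URL is {{BASE_URL}}/bananas, and the assertion names deliberately match the ones that show up in the pipeline output later in this post:

// Runs after GET {{BASE_URL}}/bananas completes
tests["Status code is 200"] = responseCode.code === 200;

// Parse the JSON body and assert at least one banana came back
var bananas = JSON.parse(responseBody);
tests["Has bananas"] = bananas.length > 0;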

Once we have our collection created and we confirm that our tests pass locally, we can export the collection as a JSON file and save it in our microservice’s repository.  For this example, I’ve exported the collection to “src/test/postman/collection.json”.

[Screenshot: the exported collection.json committed under src/test/postman/]

Run tests with CodeBuild

Now that we have our end to end tests defined in a Postman collection, we can use Newman to run these tests from CodeBuild.  The pipeline that mu creates will check for the existence of a file named buildspec-test.yml and if it exists, will use that for running the tests.  

There are three important aspects of the buildspec:

  • Install the Newman tool via NPM
  • Run our test collection with Newman
  • Keep the results as a pipeline artifact

Here’s the buildspec-test.yml file that was created:

version: 0.1

## Use newman to run a postman collection.  
## The env.json file is created by the pipeline with BASE_URL defined

phases:
  install:
    commands:
      - npm install newman --global
  build:
    commands:
      - newman run -e env.json -r html,json,junit,cli src/test/postman/collection.json

artifacts:
  files:
    - newman/*
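
To reproduce locally what CodeBuild will do in the pipeline, you can write your own env.json and invoke Newman with the same arguments as the buildspec. A sketch, assuming the service is running at localhost:8080 (the file below follows Postman’s environment export format):

$ npm install newman --global

# Create a local environment file that defines BASE_URL
$ cat > env.json <<'EOF'
{
  "name": "local",
  "values": [
    { "key": "BASE_URL", "value": "http://localhost:8080", "enabled": true }
  ]
}
EOF

# Run the same command the buildspec runs
$ newman run -e env.json -r html,json,junit,cli src/test/postman/collection.json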

The final change that we need to make for mu to run our tests in the pipeline is to specify the image for CodeBuild to use for running our tests.  Since the tool we use for testing requires Node.js, we will choose the appropriate image to have the necessary dependencies available to us.  So our updated mu.yml file now looks like:

environments:
- name: acceptance
- name: production
service:
  name: banana-service
  port: 8080
  pathPatterns:
  - /bananas
  pipeline:
    source:
      provider: GitHub
      repo: myuser/banana-service
    build:
      image: aws/codebuild/java:openjdk-8
    acceptance:
      image: aws/codebuild/eb-nodejs-4.4.6-amazonlinux-64:2.1.3

Apply these updates to our pipeline by running mu:

$ mu pipeline up
Upserting Bucket for CodePipeline
Upserting Pipeline for service 'banana-service' …

Commit and push our changes to cause a new run of the pipeline to occur:

$ git add --all && git commit -m "add test automation" && git push

We can see the results by monitoring the build logs:

$ mu pipeline logs -f
2017/04/19 16:39:33 Running command newman run -e env.json -r html,json,junit,cli src/test/postman/collection.json
2017/04/19 16:39:35 newman
2017/04/19 16:39:35
2017/04/19 16:39:35 Bananas
2017/04/19 16:39:35
2017/04/19 16:39:35  New Banana
2017/04/19 16:39:35   POST http://mu-cl-EcsEl-1K74542METR82-1781937931.us-west-2.elb.amazonaws.com/bananas [200 OK, 354B, 210ms]
2017/04/19 16:39:35     Has picked date
2017/04/19 16:39:35     Not peeled
2017/04/19 16:39:35
2017/04/19 16:39:35  All Bananas
2017/04/19 16:39:35   GET http://mu-cl-EcsEl-1K74542METR82-1781937931.us-west-2.elb.amazonaws.com/bananas [200 OK, 361B, 104ms]
2017/04/19 16:39:35     Status code is 200
2017/04/19 16:39:35     Has bananas
2017/04/19 16:39:35
2017/04/19 16:39:35
2017/04/19 16:39:35                           executed    failed
2017/04/19 16:39:35
2017/04/19 16:39:35               iterations         1         0
2017/04/19 16:39:35
2017/04/19 16:39:35                 requests         2         0
2017/04/19 16:39:35
2017/04/19 16:39:35             test-scripts         2         0
2017/04/19 16:39:35
2017/04/19 16:39:35       prerequest-scripts         0         0
2017/04/19 16:39:35
2017/04/19 16:39:35               assertions         5         0
2017/04/19 16:39:35
2017/04/19 16:39:35  total run duration: 441ms
2017/04/19 16:39:35
2017/04/19 16:39:35  total data received: 331B (approx)
2017/04/19 16:39:35
2017/04/19 16:39:35  average response time: 157ms
2017/04/19 16:39:35

Conclusion

Adopting continuous delivery for microservices demands the injection of test automation into the pipeline.  As demonstrated in this post, mu gives you the freedom to choose whatever test framework you desire and executes those tests for you on every pipeline execution.  Only once your pipeline is doing the work of assessing the microservice’s readiness for production can you achieve the goal of delivering faster while also increasing quality.

In the upcoming posts in this blog series, we will look into:

  • Custom Resources –  create custom resources like DynamoDB with mu during our microservice deployment
  • Service Discovery – use mu to enable service discovery via Consul to allow for inter-service communication
  • Additional Use Cases – deploy applications other than microservices via mu, like a wordpress stack

Until then, head over to stelligent/mu on GitHub and get started!

Additional Resources

Did you find this post interesting? Are you passionate about working with the latest AWS technologies? If so, Stelligent is hiring and we would love to hear from you!

Introducing mu: a tool for managing your microservices in AWS

mu is a tool that Stelligent has created to make it simple and cost-efficient for developers to use AWS as the platform for running their microservices.  In this first post of the blog series focused on the mu tool, we will be introducing the motivation for the tool and demonstrating the deployment of a microservice with it.  

Why microservices?

The architectural pattern of decomposing an application into microservices has proven extremely effective at increasing an organization’s ability to deliver software faster.  This is due to the fact that microservices are independently deployable components that are decoupled from other components and highly cohesive around a single business capability.  Those attributes of a microservice yield smaller team sizes that are able to operate with a high level of autonomy to deliver what the business wants at the pace the market demands.

What’s the catch?

When teams begin their journey with microservices, they usually face cost duplication on two fronts: infrastructure and re-engineering. The first duplication cost is found in the “infrastructure overhead” used to support the microservice deployment.  For example, if you are deploying your microservices on AWS EC2 instances, then for each microservice you need a cluster of EC2 instances to ensure adequate capacity and tolerance to failures.  If a single microservice requires 12 t2.small instances to meet capacity requirements and we want to be able to survive an outage in 1 out of 4 availability zones, then we need to run 4 instances in each of the 4 zones, 16 in total, so that the 12 instances in the 3 surviving zones still meet capacity.  This leaves an overhead cost of 4 t2.small instances.  Multiply this cost by the number of microservices in a given application and it is easy to see that the overhead cost of microservices deployed in this manner can add up quickly.

Containers to the rescue!

An approach to addressing this challenge of overhead costs is to use containers for deploying microservices.  Each microservice would be deployed as a series of containers to a cluster of hosts that is shared by all microservices.  This allows for greater density of microservices on EC2 instances and allows the overhead to be shared by all microservices.  Amazon ECS (EC2 Container Service) provides an excellent platform for deploying microservices as containers.  ECS leverages many AWS services to provide a robust container management solution.  Additionally, a developer can use tools like CodeBuild and CodePipeline to create continuous delivery pipelines for their microservices.

That sounds complicated…

This approach leads to the second duplication cost of microservices: the cost of “re-engineering”.  There is a significant learning curve for developers to learn how to use all these different AWS resources to deploy their microservices in an efficient manner.  If each team uses its autonomy to engineer its own platform on AWS for its microservices, then a significant amount of engineering effort is being duplicated.  This duplication not only adds engineering cost, but also impedes a team’s ability to deliver the differentiating business capabilities that it was commissioned to deliver in the first place.

Let mu help!

To address these challenges, mu was created to simplify the declaration and administration of the AWS resources necessary to support microservices.  mu is a tool that a developer uses from their workstation to deploy their microservices to AWS quickly and efficiently as containers.  It codifies best practices for microservices, containers and continuous delivery pipelines into the AWS resources it creates on your behalf.  It does this from a simple CLI application that can be installed on the developer’s workstation in seconds.  Similar to how the Serverless Framework improved the developer experience of Lambda and API Gateway, this tool makes it easier for developers to use ECS as a microservices platform.

Additionally, mu does not require any servers, databases or other AWS resources to support itself.  All state information is managed via CloudFormation stacks.  It will only create resources (via CloudFormation) necessary to run your microservices.  This means at any point you can stop using mu and continue to manage the AWS resources that it created via AWS tools such as the CLI or the console.
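
Because everything mu provisions lives in CloudFormation, you can audit its footprint with nothing but the AWS CLI. For example, something like the following lists the active stacks that carry mu’s naming prefix (the same mu- prefixed stacks you will see later in this post, such as mu-cluster-dev and mu-vpc-dev):

# List active CloudFormation stacks whose names start with 'mu-'
$ aws cloudformation list-stacks \
    --stack-status-filter CREATE_COMPLETE UPDATE_COMPLETE \
    --query "StackSummaries[?starts_with(StackName, 'mu-')].[StackName,StackStatus]" \
    --output table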

Core components

The mu tool consists of three main components:

  • Environments – an environment includes a shared network (VPC) and a cluster of hosts (ECS and EC2 instances) necessary to run microservices as containers.  Each environment can automatically scale out or scale in based on the resource requirements of all the microservices deployed to it.  Many environments can exist (e.g. development, staging, production)
  • Services – a microservice that will be deployed to a given environment (or environments) as a set of containers.
  • Pipeline – a continuous delivery pipeline that will manage the building, testing, and deploying of a microservice in the various environments.

[Diagram: mu architecture – environments, services, and pipelines]

Installing and configuring mu

First let’s install mu:

$ curl -s http://getmu.io/install.sh | sh

If you’re appalled at the idea of curl | bash installers, then you can always just download the latest version directly.

mu will use the same mechanism as aws-cli to authenticate with the AWS services.  If you haven’t configured your AWS credentials yet, the easiest way to configure them is to install the aws-cli and then follow the aws configure instructions:

$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json

Set up your microservice

In order for mu to set up a continuous delivery pipeline for your microservice, you’ll need to run mu from within a git repo.  For this demo, we’ll be using the stelligent/banana-service repo for our microservice.  If you want to follow along and try this on your own, you’ll want to fork the repo and clone your fork.

Let’s begin with cloning the microservice repo:

$ git clone git@github.com:myuser/banana-service.git
$ cd banana-service

Next, we will initialize mu configuration for our microservice:

$ mu init --env
Writing config to '/Users/casey.lee/Dev/mu/banana-service/mu.yml'
Writing buildspec to '/Users/casey.lee/Dev/mu/banana-service/buildspec.yml'

We need to update the mu.yml that was generated with the URL paths that we want to route to this microservice and the CodeBuild image to use:

environments:
- name: acceptance
- name: production
service:
  name: banana-service
  port: 8080
  pathPatterns:
  - /bananas
  pipeline:
    source:
      provider: GitHub
      repo: myuser/banana-service
    build:
      image: aws/codebuild/java:openjdk-8

Next, we need to update the generated buildspec.yml to include the gradle build command:

version: 0.1
phases:
  build:
    commands:
      - gradle build
artifacts:
  files:
    - '**/*'

Finally, commit and push our changes:

$ git add --all && git commit -m "mu init" && git push

Create the pipeline

Make sure you have a GitHub token with the repo and admin:repo_hook scopes to provide to the pipeline so that it can integrate with your GitHub repo.  Then you can create the pipeline:

$ mu pipeline up
Upserting Bucket for CodePipeline
Upserting Pipeline for service 'banana-service' ...
  GitHub token: XXXXXXXXXXXXXXX

Now that the pipeline is created, it will build and deploy for every commit to your git repo.  You can monitor the status of the pipeline as it builds and deploys the microservice:

$ mu svc show

Pipeline URL:   https://console.aws.amazon.com/codepipeline/home?region=us-west-2#/view/mu-pipeline-banana-service-Pipeline-1B3A94CZR6WH
+------------+----------+------------------------------------------+-------------+---------------------+
|   STAGE    |  ACTION  |                 REVISION                 |   STATUS    |     LAST UPDATE     |
+------------+----------+------------------------------------------+-------------+---------------------+
| Source     | Source   | 1f1b09f0bbc3f42170b8d32c68baf683f1e3f801 | Succeeded   | 2017-04-07 15:12:35 |
| Build      | Artifact |                                        - | Succeeded   | 2017-04-07 15:14:49 |
| Build      | Image    |                                        - | Succeeded   | 2017-04-07 15:19:02 |
| Acceptance | Deploy   |                                        - | InProgress  | 2017-04-07 15:19:07 |
| Acceptance | Test     |                                        - | -           |                   - |
| Production | Approve  |                                        - | -           |                   - |
| Production | Deploy   |                                        - | -           |                   - |
| Production | Test     |                                        - | -           |                   - |
+------------+----------+------------------------------------------+-------------+---------------------+

Deployments:
+-------------+-------+-------+--------+-------------+------------+
| ENVIRONMENT | STACK | IMAGE | STATUS | LAST UPDATE | MU VERSION |
+-------------+-------+-------+--------+-------------+------------+
+-------------+-------+-------+--------+-------------+------------+

You can also monitor the build logs:

$ mu pipeline logs -f
[Container] 2017/04/07 22:25:43 Running command mu -c mu.yml svc deploy acceptance 
[Container] 2017/04/07 22:25:43 Upsert repo for service 'banana-service' 
[Container] 2017/04/07 22:25:43   No changes for stack 'mu-repo-banana-service' 
[Container] 2017/04/07 22:25:43 Deploying service 'banana-service' to 'dev' from '324320755747.dkr.ecr.us-west-2.amazonaws.com/banana-service:1f1b09f' 

Once the pipeline has completed deployment of the service, you can view the logs from the service:

$ mu service logs -f acceptance                                                                                                                                                                         
  .   ____          _          __ _ _
 /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| | ) ) ) )
  ' | ____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/

:: Spring Boot ::        (v1.4.0.RELEASE) 
2017-04-07 22:30:08.788  INFO 5 --- [           main] com.stelligent.BananaApplication         : Starting BananaApplication on 6a4d5544d9de with PID 5 (/app.jar started by root in /) 
2017-04-07 22:30:08.824  INFO 5 --- [           main] com.stelligent.BananaApplication         : No active profile set, falling back to default profiles: default 
2017-04-07 22:30:09.342  INFO 5 --- [           main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@108c4c35: startup date [Fri Apr 07 22:30:09 UTC 2017]; root of context hierarchy 
2017-04-07 22:30:09.768  INFO 5 --- [           main] com.stelligent.BananaApplication         : Starting BananaApplication on 7818361f6f45 with PID 5 (/app.jar started by root in /) 

Testing the service

Finally, we can get the information about the ELB endpoint in the acceptance environment to test the service:

$ mu env show acceptance                                                                                                                                                                        

Environment:    acceptance
Cluster Stack:  mu-cluster-dev (UPDATE_COMPLETE)
VPC Stack:      mu-vpc-dev (UPDATE_COMPLETE)
Bastion Host:
Base URL:       http://mu-cl-EcsEl-1K74542METR82-1781937931.us-west-2.elb.amazonaws.com
Container Instances:
+---------------------+----------+--------------+------------+-----------+--------+---------+-----------+-----------+
|    EC2 INSTANCE     |   TYPE   |     AMI      |     AZ     | CONNECTED | STATUS | # TASKS | CPU AVAIL | MEM AVAIL |
+---------------------+----------+--------------+------------+-----------+--------+---------+-----------+-----------+
| i-093b788b4f39dd14b | t2.micro | ami-62d35c02 | us-west-2a | true      | ACTIVE |       3 |       604 |       139 |
+---------------------+----------+--------------+------------+-----------+--------+---------+-----------+-----------+

Services:
+----------------+---------------------------------------------------------------------+------------------+---------------------+
|    SERVICE     |                                IMAGE                                |      STATUS      |     LAST UPDATE     |
+----------------+---------------------------------------------------------------------+------------------+---------------------+
| banana-service | 324320755747.dkr.ecr.us-west-2.amazonaws.com/banana-service:1f1b09f | CREATE_COMPLETE  | 2017-04-07 15:25:43 |
+----------------+---------------------------------------------------------------------+------------------+---------------------+


$ curl -s http://mu-cl-EcsEl-1K74542METR82-1781937931.us-west-2.elb.amazonaws.com/bananas | jq

[
  {
    "pickedAt": "2017-04-10T10:34:27.911",
    "peeled": false,
    "links": [
      {
        "rel": "self",
        "href": "http://mu-cl-EcsEl-1K74542METR82-1781937931.us-west-2.elb.amazonaws.com/bananas/1"
      }
    ]
  }
]
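
To create a banana of your own, you can POST to the same endpoint. A sketch, with the request body shape inferred from the GET response above, so treat the field names as illustrative rather than the service’s documented contract:

$ curl -s -X POST \
    -H "Content-Type: application/json" \
    -d '{"pickedAt": "2017-04-10T10:34:27.911", "peeled": false}' \
    http://mu-cl-EcsEl-1K74542METR82-1781937931.us-west-2.elb.amazonaws.com/bananas | jq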

Cleanup

To clean up the resources that mu created, run the following commands:

$ mu pipeline term
$ mu env term acceptance
$ mu env term production

Conclusion

As you can see, mu addresses infrastructure and engineering overhead costs associated with microservices.  It makes deployment of microservices via containers simple and cost-efficient.  Additionally, it ensures the deployments are repeatable and non-dramatic by utilizing a continuous delivery pipeline for orchestrating the flow of software changes into production.

In the upcoming posts in this blog series, we will look into:

  • Test Automation –  add test automation to the continuous delivery pipeline with mu
  • Custom Resources –  create custom resources like DynamoDB with mu during our microservice deployment
  • Service Discovery – use mu to enable service discovery via Consul to allow for inter-service communication
  • Additional Use Cases – deploy applications other than microservices via mu, like a wordpress stack

Until then, head over to stelligent/mu on GitHub and get started.  Keep in touch with us in our Gitter room and share your feedback!

Additional Resources

Did you find this post interesting? Are you passionate about working with the latest AWS technologies? If so, Stelligent is hiring and we would love to hear from you!