Next-Generation Managed Services are Self-Service

The traditional managed services provider (MSP) model is broken and needs disruption. The next-generation managed services model is about guiding customers in a self-service manner.

The key drivers leading customers to cloud providers like Amazon Web Services (AWS) are the agility and cost efficiencies they afford. That agility helps customers be more responsive to their users. At the same time, it can leave customers overwhelmed when determining best practices and patterns for deploying and operating on the cloud. Moreover, as more companies realize that software is a strategic asset for their business, they often don’t want to simply outsource all their IT needs to yet another provider. They want speed and agility, but not at the mercy of an IT provider. They want to leverage best practices while gaining the autonomy to obtain these capabilities in a self-service manner.

In this post, I contrast the traditional MSP with a next-generation MSP driven by DevOps and automation. By learning how MSP features can increase agility when delivered through this new model, you’ll be better equipped to choose providers that align with your business outcomes.

The Traditional MSP

To begin, let’s have a look at the typical capabilities of an MSP. They include:

  • Access Management – Create user accounts and permissions for infrastructure resources
  • Change Management – Ensure changes are applied in a controlled manner
  • Continuity Management – Prevent loss via disaster recovery techniques such as backups, high availability, and restoration
  • Incident Management – Get support to fix problems
  • Patch Management – Keep infrastructure up to date and compliant
  • Provisioning Management – Provision and configure infrastructure
  • Reporting – Get access to metrics, logs, and recommendations for improvement
  • Security Management – Ensure infrastructure is secure

With a traditional MSP, these types of services are typically provided through an opaque model in which customers are reliant on the MSP to perform remedial actions to fix most problems. This is because the MSP often has the credentials and knowledge to make changes to a largely manually-provisioned infrastructure or a hodgepodge of “automated” scripts that are not provided as a system to customers.

Next-Generation MSP on AWS

A next-gen MSP provides customers capabilities in a self-service manner, enabling them to get up and running quickly with a fully-automated infrastructure while benefiting from the expert guidance provided by the MSP. This means customers don’t need someone on the MSP’s support team to – say – restart a server or perform a backup, because these services are provided to customers through self-service means. Instead, the reason a customer might need a next-gen MSP is for its expertise in architecture and automation best practices, which can more quickly guide them to better solutions.

What Does Next-Gen Look Like?

What do the capabilities described in the first section look like in a next-generation MSP model? At their core, they’re self-service. Customers might have a team from the MSP get their infrastructure up and running, but nothing should prevent the customer from provisioning everything themselves. Furthermore, there should be a way for customers to get their applications running on the infrastructure using repeatable frameworks as well.

Let’s have a look at the types of capabilities a next-gen MSP on AWS might offer:

  • Access Management – A customer interfaces with an API and/or console provided by the MSP that automates the provisioning of AWS Organizations, AWS Accounts, and IAM users and permissions. Possible Tools: AWS Organizations, AWS IAM, AWS Service Catalog, and automation through AWS CloudFormation and other tools.
  • Change Management – Customers interface with the API/Console to manage how changes are deployed on their infrastructure. For example, they might want to modify RDS database configuration settings or the AMI the EC2 instances use. Customers can make a request to the MSP to apply or schedule these changes, or they can apply the changes themselves using frameworks provided by the MSP. These changes flow through an approval process configured by the customer. Possible Tools: AWS Service Catalog, AWS CloudFormation, Amazon CloudWatch Dashboards, Configuration Management Tools, and custom automation.
  • Continuity Management – Customers can schedule disaster recovery processes and scenarios through an API/Console. This includes scheduling data, storage, and source backups. It might also include the ability to schedule disaster recovery drills with experts from the MSP. Moreover, the automation in the DevOps frameworks provided by the MSP should support resilient, highly available solutions that maintain the necessary infrastructure even when parts of it fail, so that users do not experience errors. Possible Tools: Amazon EC2 Systems Manager, AWS CodeCommit, AWS Shield, Custom Reports, Amazon Glacier, AWS Service Catalog, AWS Auto Scaling, AWS CloudFormation, and Configuration Management Tools, along with tools for automating backups of EBS volumes, RDS database snapshots, and so on.
  • Incident Management – Customers can contact MSP support experts at any time of day to help guide them to solutions, through mechanisms including real-time chat, chatbots, online systems, and the phone. However, the MSP should never be required to be present to fix an infrastructure error, because the MSP should give the customer’s authorized individuals the ability to make infrastructure changes in a governed manner – if they choose to do so. The MSP can also handle the daily activities of investigating and resolving alarms or incidents. Possible Tools: Amazon Connect, AWS Step Functions, Amazon Polly, Amazon Lex, Amazon CloudWatch – Logs, Events, and Monitoring, AWS CloudTrail, New Relic (App & Performance Monitoring), AWS Config (and Config Rules), and Amazon EC2 Systems Manager.
  • Patch Management – The MSP can manage all customer OS patching activities to help keep infrastructure resources current and secure. This includes applying updates or patches released by OS vendors in a timely and consistent manner to minimize the impact on the customer’s business. Critical security patches are applied as needed, while others are applied based on the patch schedule when customers make the request. Customers can also apply these changes themselves through governance mechanisms provided by the MSP. Possible Tools: Amazon EC2 Systems Manager, AWS Service Catalog.
  • Provisioning Management – The MSP launches and manages infrastructure stacks via a framework that provisions these stacks as code, building users, security infrastructure, networks, environments, services, and deployment pipelines. The MSP should provide these same capabilities to customers as well, so that they are capable of making these changes with or without the MSP (see the sketch following this list). Possible Tools: AWS CloudFormation, Configuration Management Tools, and custom automation.
  • Reporting – Customers get access to the data used to manage their infrastructure, including Amazon S3 logs, CloudTrail logs, instance logs, and real-time data from the AWS Managed Services APIs. Customers can also get real-time advice through automated systems provided by the MSP. The MSP should also walk customers through metrics, their impact, and recommendations to optimize platform usage. Possible Tools: Amazon CloudWatch Dashboards, AWS Trusted Advisor, custom automation, and web portals.
  • Security Management – The next-gen MSP protects customer information assets and keeps the infrastructure secure by providing anti-malware protection, intrusion detection, and intrusion prevention systems. Possible Tools: Amazon VPC, AWS Parameter Store, AWS WAF, Amazon Inspector, AWS Shield, AWS Config and Config Rules, AWS CloudTrail, and Security Monitoring as a Service.
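
To make the provisioning theme concrete, here is a minimal sketch of how an MSP might publish one of its CloudFormation stacks as an AWS Service Catalog product that customers can launch on their own. The product name, owner, and template URL are hypothetical, not taken from any particular MSP offering:

  MspNetworkBaselineProduct:
    Type: AWS::ServiceCatalog::CloudFormationProduct
    Properties:
      Name: msp-network-baseline         # hypothetical product name
      Owner: example-msp                 # hypothetical MSP name
      ProvisioningArtifactParameters:
      - Name: v1.0
        Info:
          # Assumed S3 location of the MSP's versioned CloudFormation template
          LoadTemplateFromURL: https://s3.amazonaws.com/example-msp-templates/network-baseline.yml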

The overarching goal of the next-generation delivery model is to provide 100% self-service capability for customers as part of a shared responsibility model. Alternatively, the customer might choose to have the MSP manage everything for them. In this case, the customer should be able to take over the management of the infrastructure at any time if the MSP is not meeting their needs. Customer-centric MSPs will make this possible by creating fully automated, continuous, and autonomic services.

Scenario: Deployment Pipeline Management

Here’s an example scenario of how a next-generation MSP might provide a deployment pipeline monitoring and guidance service to customers.

The MSP uses an open-source framework that provisions all the necessary AWS environment, deployment pipeline, and application resources to run a highly-available, secure application on AWS. Each deployment pipeline is configured to send AWS CodePipeline statistics via AWS CloudWatch Events. These events are configured to submit notifications through Amazon SNS and AWS Lambda so that all necessary parties are informed via email and Slack. What’s more, the CodePipeline statistics are aggregated and made available through Amazon CloudWatch Dashboards. All of this is configured through configuration files that are versioned in the customer’s version-control repository and automated via the open-source framework.
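
As a sketch of how that event wiring might look in CloudFormation (the resource names are illustrative, and an SNS topic policy granting events.amazonaws.com publish rights would also be needed):

  PipelineEventsTopic:
    Type: AWS::SNS::Topic
  PipelineStateChangeRule:
    Type: AWS::Events::Rule
    Properties:
      # Fire on any CodePipeline execution state change in this account
      EventPattern:
        source:
        - aws.codepipeline
        detail-type:
        - CodePipeline Pipeline Execution State Change
      Targets:
      - Arn: !Ref PipelineEventsTopic
        Id: pipeline-events-topic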

Once the MSP DevOps Engineers receive failure alerts through Slack, email, or the dashboard, they help guide the customer’s engineers in resolving errors in AWS CodePipeline and/or its integrations with other tools like AWS CodeBuild, AWS CodeDeploy, AWS CloudFormation, static analysis, or tests. The expertise provided in this “stop the line” model helps quickly resolve issues as they arise, making them less costly to fix and increasing high-velocity feedback between customers and their users. You might see some MSPs provide real-time expertise through automated conversational bots enabled through services like Amazon Lex. There’s a lot of space for innovation in providing these services to companies.

What’s Next?

Going forward, we expect customers to demand more self-service capabilities from their providers. Providers will enable these self-service capabilities through systematic automation and a focus on the user experience in how these features are provided to IT consumers.

Additional Resources

Continuous Delivery to S3 via CodePipeline and CodeBuild

In this blog post, you see a demonstration of Continuous Delivery of a static website to Amazon S3 via AWS CodeBuild and AWS CodePipeline. At the conclusion, you will be able to provision all of the AWS resources by clicking a “Launch Stack” button and going through the AWS CloudFormation steps to launch a solution stack.

S3 is useful when you want to host static files such as HTML and image files as a website for others to access. Fortunately, S3 provides the capability to configure a bucket for static website hosting. For more information on manually configuring this for a custom domain, see Example: Setting up a Static Website Using a Custom Domain.

However, once you go through this process manually a few times, and if you’re like me, you’ll quickly grow tired of manually uploading new files, deleting old files, and setting the permissions for the files in the S3 bucket.

In this example, all the source files are hosted in GitHub and can be made available to developers. All of the steps in the process are orchestrated via CodePipeline and the build and deployment actions are performed by CodeBuild. The provisioning of all of the AWS resources is defined in a CloudFormation template.

By automating the actions and stages into a deployment pipeline, you can release changes to users in production whenever you choose to do so without needing to repeatedly manually upload files to S3. Instead, you just commit the changes to the GitHub repository and the pipeline orchestrates the rest. While this is a simple example, you can follow the same model and tools for much larger and sophisticated applications.

Figure 1 shows this deployment pipeline in action.

Figure 1 – Deployment Pipeline in CodePipeline to deploy a static website to S3

The remainder of this post describes how to configure the solution in your AWS account.

Prerequisites

Here are the prerequisites for this solution:

  • AWS Account – Follow these instructions to create an AWS Account: Creating an AWS Account. Then grant IAM privileges to access at least CodeBuild, CodePipeline, CloudFormation, IAM, and S3.
  • Fork GitHub Repo – Fork the stelligent/devops-essentials GitHub repository and clone your fork
  • OAuth Token – Create an OAuth token in GitHub and provide access to the admin:repo_hook and repo scopes.

To see these steps in more detail, go to devopsessentialsaws.com and see section 2.1 Configure course prerequisites.

Architecture and Implementation

In Figure 2, you see the architecture for provisioning an infrastructure that launches a deployment pipeline to orchestrate the build of the solution. You can click on the image to launch the template in CloudFormation Designer within your AWS account.

Figure 2 – CloudFormation Template for provisioning AWS resources

The components of this solution are described in more detail below:

  • AWS CloudFormation – All of the resource generation of this solution is described in CloudFormation, a declarative code language that can be written in JSON or YAML (or generated by more expressive domain-specific languages).
  • AWS CodePipeline – The CodePipeline stages and actions are defined in a CloudFormation template. This includes CodePipeline’s integration with CodeCommit, CodeBuild, and CloudFormation (For more information, see Action Structure Requirements in AWS CodePipeline).
  • GitHub – CodePipeline connects with an existing GitHub repository using the GitHub Source provider action.
  • AWS CodeBuild – Creates a CodeBuild project using the AWS::CodeBuild::Project resource to build and deploy the static website to S3
  • AWS IAM – An Identity and Access Management (IAM) Role is provisioned using the AWS::IAM::Role resource which defines the resources that the pipeline, CloudFormation, and other resources can access.

CloudFormation Template

In this section, I’ll highlight a few code snippets from the CloudFormation template that automates the provisioning of the AWS Developer Tools stack along with other resources including S3 and IAM.

S3 Buckets

There are two S3 buckets provisioned in this CloudFormation template. The SiteBucket resource defines the S3 bucket that hosts all the files copied from the source files downloaded from Git. The PipelineBucket hosts the input artifacts for CodePipeline that are referenced across stages in the deployment pipeline.

  SiteBucket:
    Type: AWS::S3::Bucket
    Properties:
      AccessControl: PublicRead
      BucketName: !Ref SiteBucketName
      WebsiteConfiguration:
        IndexDocument: index.html
  PipelineBucket:
    Type: AWS::S3::Bucket

IAM Role

The IAM role for CodePipeline grants CodePipeline the permissions it needs to access the resources required to deploy the static website.

  CodePipelineRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
        - Effect: Allow
          Principal:
            Service:
            - codepipeline.amazonaws.com
          Action:
          - sts:AssumeRole
      Path: "/"
      Policies:
      - PolicyName: codepipeline-service
        PolicyDocument:
          Statement:
          - Action:
            - codebuild:*
            Resource: "*"
            Effect: Allow
          - Action:
            - s3:GetObject
            - s3:GetObjectVersion
            - s3:GetBucketVersioning
            Resource: "*"
            Effect: Allow
          - Action:
            - s3:PutObject
            Resource:
            - arn:aws:s3:::codepipeline*
            Effect: Allow
          - Action:
            - s3:*
            - cloudformation:*
            - iam:PassRole
            Resource: "*"
            Effect: Allow
          Version: '2012-10-17'

CodePipeline

The CodePipeline CloudFormation snippet shown below defines the two stages and two actions that orchestrate the deployment of the static website. The Source action within the Source stage configures GitHub as the source provider. Then, the pipeline moves to the Deploy stage, which runs CodeBuild to copy all the HTML and other assets to an S3 bucket that’s configured to be hosted as a website (a sketch of that CodeBuild project follows the pipeline snippet below).

  Pipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      RoleArn: !GetAtt CodePipelineRole.Arn
      Stages:
      - Name: Source
        Actions:
        - InputArtifacts: []
          Name: Source
          ActionTypeId:
            Category: Source
            Owner: ThirdParty
            Version: '1'
            Provider: GitHub
          OutputArtifacts:
          - Name: SourceOutput
          Configuration:
            Owner: !Ref GitHubUser
            Repo: !Ref GitHubRepo
            Branch: !Ref GitHubBranch
            OAuthToken: !Ref GitHubToken
          RunOrder: 1
      - Name: Deploy
        Actions:
        - Name: Artifact
          ActionTypeId:
            Category: Build
            Owner: AWS
            Version: '1'
            Provider: CodeBuild
          InputArtifacts:
          - Name: SourceOutput
          OutputArtifacts:
          - Name: DeployOutput
          Configuration:
            ProjectName: !Ref CodeBuildDeploySite
          RunOrder: 1
      ArtifactStore:
        Type: S3
        Location: !Ref PipelineBucket
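
The Deploy stage above references a CodeBuildDeploySite project that isn’t shown in this post. Here is a minimal sketch of what such a project might look like; the build image, role name, and buildspec contents are assumptions rather than excerpts from the actual template:

  CodeBuildDeploySite:
    Type: AWS::CodeBuild::Project
    Properties:
      Name: !Sub ${AWS::StackName}-DeploySite
      ServiceRole: !GetAtt CodeBuildRole.Arn  # assumes a CodeBuild service role is defined elsewhere
      Artifacts:
        Type: CODEPIPELINE
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/ubuntu-base:14.04  # assumed build image
      Source:
        Type: CODEPIPELINE
        BuildSpec: !Sub |
          version: 0.2
          phases:
            post_build:
              commands:
              # Copy the static site to the website bucket with public-read access
              - aws s3 cp --recursive --acl public-read ./html s3://${SiteBucketName}/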

Costs

Costs can vary depending on how you use certain AWS services and other tools, so here is a cost breakdown with some sample scenarios to give you an idea of what your monthly spend might look like. Note that this will depend on your unique environment and deployment; the AWS Cost Calculator can assist in establishing cost projections.

  • CloudFormation – No additional cost.
  • CodeBuild – CodeBuild charges per minute used. It comes with 100 minutes per month at no charge. For a simple execution of this demo, you can stay within the limits of the AWS Free Tier – please read about the Free Tier here. For more information, see AWS CodeBuild pricing.
  • CodePipeline – Customers can create new pipelines without incurring any charges on that pipeline for the first thirty calendar days. After that period, the new pipelines will be charged at the existing rate of $1 per active pipeline per month. For more information, see AWS CodePipeline pricing.
  • GitHub – No charge for public repositories
  • IAM – No additional cost.
  • S3 – If you launch the solution and delete the S3 bucket, it’ll be pennies (if that). See S3 Pricing.

The bottom line on pricing for this particular example is that you will be charged no more than a few pennies if you launch the solution, run through a few changes, and then terminate the CloudFormation stack and associated AWS resources.

Deployment Steps

There are three main steps in launching this solution: preparing an AWS account, launching the stack, and testing the deployment. Each is described in more detail in this section. Please note that you are responsible for any charges incurred while creating and launching your solution.

Step 1. Prepare an AWS Account

  1. If you don’t already have an AWS account, create one at http://aws.amazon.com by following the on-screen instructions. Part of the sign-up process involves receiving a phone call and entering a PIN using the phone keypad. Be sure you’ve signed up for the CloudFormation service.
  2. Use the region selector in the navigation bar of the console to choose the Northern Virginia (us-east-1) region.

Step 2. Launch the Stack

Click on the “Launch Stack” button below to launch the CloudFormation stack. Before you launch the stack, review the architecture, configuration, and other considerations discussed in this post. To download the template, click here.

Time to deploy: Approximately 5 minutes

The template includes default settings that you can customize by following the instructions in this post.

Step 3. Test the Deployment

Here are the steps to test the deployment:

  1. Once the CloudFormation stack is complete, select the checkbox next to the stack and go to the Outputs tab.
  2. Click on the PipelineUrl link to launch the CodePipeline pipeline.
  3. Click on the SiteUrl link to launch the website that was configured and launched as part of the deployment pipeline.
  4. From your Terminal, type (replacing YOURGITHUBUSERID with your GitHub userid):
    git clone https://github.com/YOURGITHUBUSERID/devops-essentials
  5. Make obvious visual changes to any of your local files (for example, change .bg-primary{color:#fff;background-color: in your forked repo version of devops-essentials/html/css/bootstrap.min.css) and type the following from your Terminal:
    git commit -am "add new files" && git push
  6. Go back to your pipeline in CodePipeline and see the changes successfully flow through the pipeline.

DevOps Essentials on AWS Video Course

devops_essentials_aws_cover_large

This and many more topics are covered in the DevOps Essentials on AWS Complete Video Course (Udemy, InformIT, SafariBooksOnline). In it, you’ll learn how to automate infrastructure and deployment pipelines using AWS services and tools, so if you’re a software or DevOps-focused engineer or architect interested in learning how to use AWS Developer and AWS Management Tools to create a full-lifecycle software delivery solution, it’s the course for you. The focus of the course is on deployment pipeline architectures and their implementations.

Additional Resources

Here are some of the supporting resources discussed in this post:

Did you find this post interesting? Are you passionate about working with the latest AWS technologies? If so, Stelligent is hiring and we would love to hear from you!

Acknowledgements

My colleague Casey Lee created the initial CodePipeline/CodeBuild/S3 CloudFormation template that’s the basis for this solution.

Stelligent is an APN Launch Partner for the AWS Management Tools Addition to the AWS Service Delivery Program

Stelligent, an AWS Partner Network (APN) Advanced Consulting Partner specializing exclusively in DevOps Automation on the Amazon Web Services (AWS) Cloud, announced that it is a launch partner for four additional services in the AWS Service Delivery Program: AWS CloudFormation, AWS CloudTrail, AWS Config, and Amazon EC2 Systems Manager. This means that Stelligent has demonstrated a successful track record of delivering specific AWS services and an ability to provide expertise in a particular service or skill area.

“The ability to deploy high-quality code in hours, not months, is something that we can help any company – including many in the Fortune 500 – achieve,” said Paul Duvall, Stelligent CTO and co-founder. “Using AWS Management Tools along with other AWS services, we can drastically reduce our customers’ development times, while increasing the rate at which they can introduce new features.”

The AWS Service Delivery Program highlights APN Partners with a track record of delivering specific AWS services to customers. Attaining an AWS Service Delivery Distinction allows partners to differentiate themselves by showcasing to AWS customers areas of specialization.

The four AWS Management Tools included in the AWS Service Delivery Program are (Source: AWS):

  • AWS CloudFormation – Create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.
  • AWS CloudTrail – Track user activity and API usage
  • AWS Config – Record and evaluate configurations of your AWS resources
  • Amazon EC2 Systems Manager – Easily configure and manage Amazon EC2 and on-premises systems

Stelligent uses these AWS Management Tools in creating DevOps Automation solutions for customers so they can release new features to users, on demand, and reduce the costs of delivering software by reducing overall lead time. Resulting benefits include the following:

  • The ability to release software with every successful change
  • Significant reduction of cycle time
  • Increased confidence in what is deployed
  • Increased ability to experiment
  • Reduction of overall costs

“We are proud to work with AWS to deliver DevOps Automation solutions to our customers, allowing them to release new features to users whenever they choose,” said Duvall. “Being a launch partner in the AWS Management Tools addition to the AWS Service Delivery Program means a lot to us — this is what we live and breathe, and we do so exclusively for our customers targeting AWS. We obsess over customers, and we obsess over applying what we believe are essential practices to achieve the aims of continuous delivery. This acknowledgement will help us reach still more customers who value that passion.”

About Stelligent
Stelligent is an APN Advanced Consulting Partner and holds the AWS DevOps Competency. As a technology services company that provides DevOps Automation on the Amazon Web Services (AWS) Cloud, we aim for “one-click deployment.” Our reason for being is to help our customers gain the ability to continuously deploy their software, when they want to, and with confidence. We’ve been providing DevOps Automation solutions on AWS since 2009. Follow @Stelligent on Twitter. Learn more at http://www.stelligent.com.

DevOps on AWS Radio: AWS CodePipeline and Amazon Alexa (Episode 11)

In this episode, Paul Duvall and Brian Jakovich cover recent DevOps on AWS news and discuss how to use AWS CodePipeline to deploy an Amazon Alexa skill.

Here are the show notes:

DevOps on AWS News

Episode Topics

  1. What was the “Use AWS CodePipeline to Deploy Amazon Alexa Skills” blog post about?
  2. What is AWS CodePipeline and what are its benefits? What are alternatives to using CodePipeline?
  3. How do you create a pipeline in CodePipeline?
  4. Which AWS services does CodePipeline integrate with? How about non-AWS tools and services?
  5. How do you automate the provisioning of CodePipeline?
  6. Describe Amazon Alexa. What kinds of things can you do with Alexa? Which devices does it support?
  7. Describe Lambda.
  8. How did you orchestrate CodePipeline to deploy a Lambda function?
  9. How did you configure Alexa to run the Lambda function?
  10. How can listeners learn more about this solution?

Additional Resources

About DevOps on AWS Radio

On DevOps on AWS Radio, we cover topics around applying DevOps principles and practices such as Continuous Delivery on the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners into our roundtables and speak with engineers who’ve recently published on our blog and we’ll also be reaching out to the wider DevOps on AWS community to get their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, Containers, Serverless, etc.)
    • Storage (e.g. S3, EBS, etc.)
    • Database and Data (RDS, DynamoDB, etc.)
  • Organizational and Team Structures and Practices
  • Team and Organization Communication and Collaboration
  • Cultural Indicators
  • Version control systems and processes
  • Deployment Pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service Architectures – e.g. Microservices
  • Automation of Build and deployment processes
  • Automation of testing and other verification approaches, tools and systems
  • Automation of security practices and approaches
  • Continuous Feedback systems
  • Many other Topics…

Screencast: Full-Stack DevOps on AWS Tool

Amazon ECS (EC2 Container Service) provides an excellent platform for deploying microservices as containers. However, there is a significant learning curve for developers to get their microservices deployed. mu is a full-stack DevOps on AWS tool that simplifies and orchestrates your software delivery lifecycle (environments, services, and pipelines). It is open source and available at http://getmu.io/. You can click the YouTube link below (we’ve also provided a transcript of this screencast in this post).

Let’s demonstrate using mu to deploy a Spring Boot application to ECS. So, we see here’s our microservice, and we’ve already got our Dockerfile set up. We see that we’ve got our Gradle file so that we can compile the code, and then we see the various classes necessary for the service. We’re using Liquibase for managing our database, so that definition file is there, and we’ve got some unit tests defined. Now we’ll go ahead and take a look at the Dockerfile, and we see that it’s pretty straightforward: it builds from the Java image; all it does is take the jar and add it, and then for the entry point it just runs java -jar. So, we run mu init, and that’s going to create two files for us: it creates a mu.yml file, which we see here, and we need to add some stuff to the file it generates – specifically, we want to specify Java 8 for the (AWS) CodeBuild image. Then we edit the buildspec file and tell it to use gradle build for the build command. Buildspec is a standard CodeBuild file for defining your project. So we have our two new files, buildspec.yml and mu.yml; we go ahead and commit those and push those up to our source repository – in this case we’re using GitHub – and then we run the command mu pipeline up. What that does is create a CloudFormation stack for managing our CodePipeline and our CodeBuild projects. It’s going to prompt us for the GitHub token – this is the access token that you’ve defined inside GitHub so that CodePipeline can access your repository – so we provide that token, and then we see that it’s creating various things like IAM roles for CodeBuild to do its business and the actual CodeBuild projects that are going to be used; there are quite a few different CodeBuild projects for building, testing, and deploying. So now we run the command mu service show, and what that shows us is that there is a pipeline now created, and we see it has started on the first step.
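
For reference, here is a minimal sketch of what the edited buildspec.yml might look like (the exact file mu generates may differ):

  version: 0.2
  phases:
    build:
      commands:
      # Compile and package the Spring Boot application
      - gradle build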

Let’s go ahead and open up AWS CodePipeline in the console, and we see that, sure enough, the Source stage of our pipeline is running. Then we see there’s a Build stage with the Artifact and Image actions in it – that’s where we compile and build our Docker image. There’s an Acceptance stage and then a Production stage, both of which do a deployment and then testing. Jumping back over to the command line, we can run mu service show, and we see that we are in the Source action, currently running. That’s just going to take a minute before we trigger the Artifact action of the Build stage, which is where we’re actually doing the compiling. The command we can run here is mu pipeline logs -f – we add the -f so that we follow the logs. What happens is all of the output from CodeBuild gets sent to CloudWatch Logs, so the mu pipeline logs command allows us to tail CloudWatch Logs and watch the activity in real time. We see that our Maven artifacts are being resolved for dependencies, and then we see “build success”, so our artifact has been built and our unit tests have passed. It’s just going to take a second here for CodeBuild to upload the artifact and then trigger the pipeline to move to the next stage, which is the Image action. In the Image action, what’s going to happen is it’s going to run docker build against our artifact and create a Docker image; it’s then going to push that image up to ECR. It’s also going to create the ECR repository if it doesn’t exist yet, through a CloudFormation stack. So we go ahead and run mu pipeline logs, and we can see the Image action running: we’re pulling down the Docker base image (that Java image), then there’s our docker build, and now we’re pushing back up to ECR. It’ll take just a minute to upload that new Docker image with our Spring Boot application on it, and that’s completed successfully.

So now if we jump back over to mu service show, give it a second, and we should see that we progress beyond the Build stage and into the Acceptance stage. In the Acceptance stage there will be two actions: first, a deploy action that’s going to use the image that was created and create a new ECS service for it, and that’s what we see going on here. What you’ll notice in just a second is that it first makes sure the environment is up to date – the ECS cluster, the auto scaling group for it, and all the instances for ECS; it then updates any databases that are defined, and finally it deploys the service. So we see here there’s a CREATE_IN_PROGRESS – the status of the deployment to the Dev environment is in progress, so there’s a CloudFormation stack being deployed. I’ll go ahead and run the command mu service logs; just like there are logs for the pipeline, all the logs for your service are sent to CloudWatch Logs, so here we’re watching the logs for our service starting up. These are the Spring Boot output messages – if you’ve used Spring Boot before, they should look familiar – and this is very helpful for troubleshooting an application, being able to see its logs in real time.

So the deployment is complete – based on the logs, we saw that it is up – so we’re going to go and look at the environment here. We do mu env list; we see the Dev environment, and when we show it, we can see the EC2 instance associated with it, and we also see the base URL for the ELB. I’m going to go ahead and run a curl command against that, adding the bananas URI at the end of it, and pipe that to jq just to make it look pretty – and sure enough, we get a successful response. So, our app has been deployed successfully, and we see that we are in the Approval stage and it’s waiting for approval, so we’ve completed the Acceptance stage.

Let’s take a look at CloudFormation to see what mu has created for us. So, we see there are a number of CloudFormation stacks over here. Remember, everything that mu does is managed through CloudFormation; there’s no other database or anything behind mu – it’s just native AWS resources. So, for example, if we look at the VPC for the dev environment, we see all the things you’d expect to see: routes, network ACLs, subnets, a NAT gateway, and the VPC itself. If we go to the cluster, we see the Auto Scaling Group for the ECS container instances, the application load balancer that’s defined for the environment, and all the necessary security groups, and then there are some scaling policies to scale that auto scaling group in or out based on how many tasks are currently running. And this is the service – the banana service has been deployed to the dev environment, and we see the IAM roles, Task Definition, and whatnot for the service.

Now, one thing we didn’t do previously was any testing, so what you can do is create a file called buildspec-test.yml; anything that you define in this test YAML will be run as a test action after the deployment is made – it’s a standard CodeBuild buildspec file. In this case we’re going to use a tool called Newman. Newman is a Node.js command-line tool for running Postman collections, and Postman is a tool for testing RESTful APIs, so we’re configuring this to run Newman against our Postman collections. We’ll also have to make a change to mu.yml – we have to configure the acceptance environment to use a Node.js CodeBuild image, so that’s what we’ve done there. With those two changes, we should be able to run mu pipeline up, which will update the CodeBuild project to use the Node.js image, and then once our pipeline is up to date, we’ll be able to commit our change – that buildspec-test file. Once we push that up, the pipeline will start running again; this time the tests will actually run, and we’ll get some assurance that the code is ready to go on to production. So we make that change, push it, and then if we look at the service we’ll see that the Source action has triggered, and we’ll just let this run for a while. The whole pipeline is going to have to run, though actions like Artifact and Image won’t really cause any change because we didn’t actually change the source code – but they go ahead and run anyway. So we are now in the Image stage: we’re taking the new jar file, building a Docker image from it, and pushing that up to ECR; we’ve now hit the Deploy stage, so the latest Docker image is being used for the ECS service.
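
A minimal sketch of what such a buildspec-test.yml might contain (the collection filename is hypothetical):

  version: 0.2
  phases:
    install:
      commands:
      # Install the Newman CLI for running Postman collections
      - npm install -g newman
    build:
      commands:
      # Run the API tests against the deployed service
      - newman run postman-collection.json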

Once that completes, we’ll run mu pipeline logs again to watch the CodeBuild project doing the testing – and here we go: we see the testing is running. It runs npm install to install our dependencies, namely the Newman tool, and then we see some results. I see status code 200 – that looks good; under the fail column I see a bunch of zeros, which looks great; and then I see “build success”. So not only has our application been deployed to ECS, but we’ve also been able to test it, and now those tests will run as part of every execution of the pipeline, with every commit. Now, the other thing we’ll recognize here is that this application we built is managing our inventory of bananas, but it doesn’t have a real database behind it – we’re just using the H2 database that’s available with Java. So let’s go ahead and make a change here and configure mu to have a real database. With mu, that’s as easy as defining a database: you give it a name, and you can specify other things like a type, but it will default to the Aurora RDS engine. Then you’ll want to pass some environment variables – we’ll pass the database connection information to our Spring app. Since we’re using a Spring data source, it’s just a matter of defining three environment variables, and you’ll notice that the username, password, and endpoint are not actually in the mu.yml file – we don’t want those things in there. What happens is mu will create those for us and then make them available as CloudFormation parameters that we can reference via the dollar-sign notation that CloudFormation offers. OK, so now that we’ve got that change made, we go and add our new file, commit the change, and push it up, which will trigger a new run of the pipeline. Again, we’ve got to go through all those earlier actions to ultimately get to the deploy action, where the RDS database will be created – and again, you can choose any RDS database type, but we’re using Aurora by default.

Now, one question is: how does the password get defined? The way this works is we use a service that AWS has called Parameter Store, which manages secrets. When mu starts up, it checks whether there’s a password defined, and if not, it generates a random 16-character string, adds it to Parameter Store, and then later on when it deploys the service, it pulls the password out of Parameter Store and passes it in as an environment variable. Those parameters are encrypted with KMS – a key management service – so they are secure.

OK, so looking at the logs now from the service – these are our Spring Boot startup logs. What I’m expecting to see is that rather than seeing H2 as the dialect… there you go, we see MySQL is the dialect for the connection. That tells me that Spring Boot detected our environment variables and recognized that we are in fact trying to talk to MySQL – let me go and highlight that here. So, this tells us that our application is in fact connecting to a MySQL database which is provided by RDS and wired up via mu. We can look at our service again, watch the pipeline run, and get some confirmation that we didn’t break anything, because we have those tests as part of our pipeline now. So we’ll let this go and – our tests are running. Once that completes, we’ll have a good feeling that this change is ready to promote to production.

Well thanks for watching and check out https://getmu.io to learn more.

DevOps on AWS Radio: mu – DevOps on AWS tool (Episode 10)

In this episode, Paul Duvall and Brian Jakovich cover recent DevOps on AWS news and speak with Casey Lee from Stelligent about the open-source, full-stack DevOps on AWS tool called mu.

Here are the show notes:

DevOps on AWS News

Episode Topics

  1. What is mu and what problem does it solve? What are its benefits?
  2. How does someone use mu (including prereqs)?
  3. What types of programming languages and platforms are supported?
  4. What types of AWS architectures does mu support (e.g., traditional EC2, ECS, Serverless)?
  5. Which AWS services are provisioned by mu?
  6. Does mu support non-AWS implementations?
  7. What does mu install on my AWS account?
  8. Describe mu’s support for configuration/secrets
  9. Extensibility?
  10. Price?
  11. What’s next on the mu roadmap?
  12. How can listeners learn more about mu?

Additional Resources

About DevOps on AWS Radio

On DevOps on AWS Radio, we cover topics around applying DevOps principles and practices such as Continuous Delivery on the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners into our roundtables and speak with engineers who’ve recently published on our blog and we’ll also be reaching out to the wider DevOps on AWS community to get their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, Containers, Serverless, etc.)
    • Storage (e.g. S3, EBS, etc.)
    • Database and Data (RDS, DynamoDB, etc.)
  • Organizational and Team Structures and Practices
  • Team and Organization Communication and Collaboration
  • Cultural Indicators
  • Version control systems and processes
  • Deployment Pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service Architectures – e.g. Microservices
  • Automation of Build and deployment processes
  • Automation of testing and other verification approaches, tools and systems
  • Automation of security practices and approaches
  • Continuous Feedback systems
  • Many other Topics…

DevOps in AWS Radio: Serverless (Episode 8)

In this episode, Paul Duvall and Brian Jakovich cover recent DevOps in AWS news and speak with Mike Roberts and John Chapin from Symphonia about Serverless architectures, DevOps, and AWS.

Here are the show notes:

DevOps in AWS News

Episode Topics

  1. Pros and Cons of Serverless architectures
  2. Symphonia’s Serverless speciality
  3. How do DevOps and Continuous Delivery fit into Serverless?
  4. Continuous Experimentation and Serverless architectures
  5. Which types of applications or services are most (or least) suitable for Serverless?
  6. O’Reilly report: “What is Serverless?”
  7. Serverless architectures resources and people in the space
  8. Vendor lock-in

Additional Resources

About DevOps in AWS Radio

On DevOps in AWS Radio, we cover topics around applying DevOps principles and practices such as Continuous Delivery in the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners into our roundtables and speak with engineers who’ve recently published on our blog and we’ll also be reaching out to the wider DevOps in AWS community to get their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, Containers, Serverless, etc.)
    • Storage (e.g. S3, EBS, etc.)
    • Database and Data (RDS, DynamoDB, etc.)
  • Organizational and Team Structures and Practices
  • Team and Organization Communication and Collaboration
  • Cultural Indicators
  • Version control systems and processes
  • Deployment Pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service Architectures – e.g. Microservices
  • Automation of Build and deployment processes
  • Automation of testing and other verification approaches, tools and systems
  • Automation of security practices and approaches
  • Continuous Feedback systems
  • Many other Topics…

Service discovery for microservices with mu

mu is a tool that makes it simple and cost-efficient for developers to use AWS as the platform for running their microservices.  In this fourth post of the blog series focused on the mu tool, we will use mu to set up Consul for service discovery between multiple microservices.

Why do I need service discovery?

One of the biggest benefits of a microservices architecture is that the services can be deployed independently of one another.  However, this presents a new challenge in that it becomes difficult for clients to know the list of containers to use when invoking the service.  Here are three different approaches to address this challenge:

  • Load balancer per microservice: Create a load balancer for every microservice and add/remove containers to the load balancer as deployments and scaling events occur.  The endpoint address of the load balancer is then shared with clients through some manual process.

There are three concerns with this approach.  First, the endpoint address of the load balancer must never change or else all the clients will be broken and require updates to take the new endpoint address.  This can be addressed via DNS CNAME records, but still requires that the name chosen for the record must not change.  Second, there is the additional cost of a load balancer for every microservice.  Finally, there is additional latency introduced with adding a load balancer between each microservice invocation.

  • Shared load balancer: Create a load balancer that is shared by all microservices in an environment.  The load balancer must have rules for each microservice to route requests by URI patterns.

The concern with this approach is that all traffic is now flowing through a single load balancer which can become a constraint in scaling the entire system.  Additionally, the load balancer becomes a shared resource amongst all the microservice teams, potentially impacting a team’s ability to operate independently of other teams.

  • Client load balancer: Load balancing from within the client is an approach in which the client has an awareness of all the containers in-service for a given microservice.  The client can then load balance between the containers when invoking the microservice.  This approach requires a system to provide service registration and service discovery.   

The benefit of this approach is that there are no longer load balancers between each microservice request, so the concerns with the prior approaches are addressed.  However, a new type of microservice, an edge service, will need to be deployed to allow clients outside the microservice environment (that do not have access to service discovery) to invoke the service.

The preferred approach is the third one, which uses service discovery and client-side load balancing within the microservice environment, and edge routing with traditional load balancing for clients outside the microservice environment.  This approach provides the lowest latency and most loosely coupled solution for microservice invocation.

Let mu help!

The environment that mu creates for your microservice can manage the provisioning of Consul for service discovery and registration of your microservices.  Consul is a sort of phonebook for microservices.  It provides APIs for services to register their endpoints and for clients to lookup the endpoints.

Let’s demonstrate this by adding an additional milkshake service that invokes the banana service from the first post.  Additionally, we will create a zuul-router service to provide an edge service via Netflix’s Zuul.  Zuul is a proxy service that serves as the front door for all requests from outside the microservice environment.  Zuul will use Consul for service discovery to determine where best to route the incoming request.  Additionally, Zuul provides an excellent location to enforce policies such as authentication, authorization, or logging on all incoming requests.

Enabling Consul and Edge Router

The first thing we will want to do is set up our edge router with Zuul.  This is just a matter of adding the @EnableZuulProxy and @EnableDiscoveryClient annotations to the Spring Boot application:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

@SpringBootApplication
@EnableDiscoveryClient
@EnableZuulProxy
public class ZuulRouterApplication {

   public static void main(String[] args) {
     SpringApplication.run(ZuulRouterApplication.class, args);
   }
}

Zuul is configured via the application.yml file in src/main/resources.  For each service that we want exposed via the edge router, we add URI path patterns:

spring:
  application:
    name: zuul-router
zuul:
  routes:
    milkshake-service:
      path: /milkshakes/**
      stripPrefix: false
    banana-service:
      path: /bananas/**
      stripPrefix: false

In order to enable Consul in your environment, you need to update the environment definition in the mu.yml file.  Additionally, you need to configure Spring Cloud Consul to connect to the Docker host IP address for service discovery.  We will also want to configure Spring Cloud to not register with Consul, since mu will already configure the Registrator agent on your ECS container instances:

 environments:
 - name: acceptance
   cluster:
     maxSize: 5
   discovery:
     provider: consul
 - name: production

service:
  name: zuul-router
  port: 8080
  pathPatterns:
  - /*
  environment:
    SPRING_CLOUD_CONSUL_HOST: 172.17.0.1
    SPRING_CLOUD_CONSUL_DISCOVERY_REGISTER: 'false'
  pipeline:
    source:
      provider: GitHub
      repo: cplee/zuul-router
    build:
      image: aws/codebuild/java:openjdk-8

Create Milkshake Service

Now we can create a new service to manage the creation of milkshakes.  The service looks very similar to the banana service, with the exception of declaring a Spring RestTemplate annotated with @LoadBalanced to enable client-side load balancing via Ribbon.

 

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.web.client.RestTemplate;

@SpringBootApplication
@EnableDiscoveryClient
public class MilkshakeApplication {
  public static void main(String[] args) {
    SpringApplication.run(MilkshakeApplication.class, args);
  }

  @LoadBalanced
  @Bean
  RestTemplate restTemplate(){
     return new RestTemplate();
  }
}

Now we can use the RestTemplate to make calls directly to the banana service.  Ribbon will do a lookup in Consul for a service named banana-service and replace it in the URL with the IP and port of one of the containers:

import java.util.List;
import java.util.Map;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.ParameterizedTypeReference;
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

@Component
public class BananaProvider implements FlavorProvider {

  @Autowired
  private RestTemplate restTemplate;

  private List<Map<String,Object>> getAll() {
    ParameterizedTypeReference<List<Map<String, Object>>> typeRef =
            new ParameterizedTypeReference<List<Map<String, Object>>>() {};

    // Ribbon resolves "banana-service" via Consul to a concrete host:port
    ResponseEntity<List<Map<String, Object>>> exchange =
            this.restTemplate.exchange("http://banana-service/bananas", HttpMethod.GET, null, typeRef);

    return exchange.getBody();
  }
}

Try it out!

After we have deployed all three services, we can use mu to confirm that all are running as expected.

~ ❯❯❯ mu env show acceptance                                                                                                                                                                                                       

Environment:    acceptance
Cluster Stack:  mu-cluster-dev (UPDATE_COMPLETE)
VPC Stack:      mu-vpc-dev (UPDATE_COMPLETE)
Bastion Host:   35.164.117.25
Base URL:       http://mu-cl-EcsEl-144KXQMIRY9WI-1411768500.us-west-2.elb.amazonaws.com

Container Instances:
+---------------------+----------+--------------+------------+-----------+--------+---------+-----------+-----------+
|    EC2 INSTANCE     |   TYPE   |     AMI      |     AZ     | CONNECTED | STATUS | # TASKS | CPU AVAIL | MEM AVAIL |
+---------------------+----------+--------------+------------+-----------+--------+---------+-----------+-----------+
| i-08e3edc8c644f0534 | t2.micro | ami-62d35c02 | us-west-2b | true      | ACTIVE |       3 |       604 |       139 |
| i-05bc14a67e53889e1 | t2.micro | ami-62d35c02 | us-west-2a | true      | ACTIVE |       3 |       604 |       139 |
| i-0b56a0d9572531e9e | t2.micro | ami-62d35c02 | us-west-2c | true      | ACTIVE |       3 |       604 |       139 |
| i-05b2188a5c575fbeb | t2.micro | ami-62d35c02 | us-west-2b | true      | ACTIVE |       1 |       624 |       739 |
+---------------------+----------+--------------+------------+-----------+--------+---------+-----------+-----------+

Services:
+-------------------+---------------------------+------------------+---------------------+
|      SERVICE      |         IMAGE             |      STATUS      |     LAST UPDATE     |
+-------------------+---------------------------+------------------+---------------------+
| milkshake-service | milkshake-service:9e4bcd9 | CREATE_COMPLETE  | 2017-05-12 11:33:05 |
| zuul-router       | zuul-router:3d4795c       | UPDATE_COMPLETE  | 2017-05-12 12:09:47 | 
| banana-service    | banana-service:3b62124    | UPDATE_COMPLETE  | 2017-05-12 11:32:55 |
+-------------------+---------------------------+------------------+---------------------+

We can then use curl to get a list of all the bananas available via the banana-service:

curl -s http://mu-cl-EcsEl-144KXQMIRY9WI-1411768500.us-west-2.elb.amazonaws.com/bananas | jq
[
  {
    "pickedAt": null,
    "peeled": null,
    "links": [
      {
        "rel": "self",
        "href": "http://mu-cl-ecsel-144kxqmiry9wi-1411768500.us-west-2.elb.amazonaws.com/bananas/9"
      }
    ]
  }
]

Next we try to create a milkshake using the milkshake-service:

~ ❯❯❯ curl -s -d "{}" -H "Content-Type: application/json" http://mu-cl-EcsEl-144KXQMIRY9WI-1411768500.us-west-2.elb.amazonaws.com/milkshakes\?flavor\=Banana | jq                                                                         
{
  "timestamp": "2017-05-15T19:12:56.640+0000",
  "status": 500,
  "error": "Internal Server Error",
  "exception": "org.springframework.web.client.HttpClientErrorException",
  "message": "429 Not enough bananas to make the shake.",
  "path": "/milkshakes"
}

Looks like there aren’t enough bananas to create a milkshake.  Let’s create another banana:

~ ❯❯❯ curl -s -d "{}" -H "Content-Type: application/json" http://mu-cl-EcsEl-144KXQMIRY9WI-1411768500.us-west-2.elb.amazonaws.com/bananas

~ ❯❯❯ curl -s http://mu-cl-EcsEl-144KXQMIRY9WI-1411768500.us-west-2.elb.amazonaws.com/bananas | jq                                                                                                                         
[
  {
    "pickedAt": null,
    "peeled": null,
    "links": [
      {
        "rel": "self",
        "href": "http://mu-cl-ecsel-144kxqmiry9wi-1411768500.us-west-2.elb.amazonaws.com/bananas/9"
      }
    ]
  },
  {
    "pickedAt": null,
    "peeled": null,
    "links": [
      {
        "rel": "self",
        "href": "http://mu-cl-ecsel-144kxqmiry9wi-1411768500.us-west-2.elb.amazonaws.com/bananas/10"
      }
    ]
  }
]

Now let’s try again creating a milkshake:

~ ❯❯❯ curl -s -d "{}" -H "Content-Type: application/json" http://mu-cl-EcsEl-144KXQMIRY9WI-1411768500.us-west-2.elb.amazonaws.com/milkshakes\?flavor\=Banana | jq
{
  "id": 3,
  "flavor": "Banana"
}

This time it worked, and if we query the list of bananas again, we see that both have been consumed to make the milkshake:

curl -s http://mu-cl-EcsEl-144KXQMIRY9WI-1411768500.us-west-2.elb.amazonaws.com/bananas | jq .
[]
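
Behind the scenes, the milkshake-service resolves the banana-service by name through Consul and spreads its calls across the available instances with Ribbon.  As a minimal sketch, the client-side wiring with Spring Cloud can look like the following (assuming the Consul discovery and Ribbon starters are on the classpath; these classes are illustrative, not the demo’s actual source):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Configuration
class ClientConfig {
  // @LoadBalanced makes Spring Cloud resolve logical service names
  // through Ribbon, which is fed healthy instances by Consul
  @Bean
  @LoadBalanced
  RestTemplate restTemplate() {
    return new RestTemplate();
  }
}

@Service
class BananaClient {
  @Autowired
  private RestTemplate restTemplate;

  // "banana-service" is the name registered in Consul, not a hostname;
  // Ribbon substitutes a real host:port for it at call time
  Object[] listBananas() {
    return restTemplate.getForObject("http://banana-service/bananas", Object[].class);
  }
}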

Conclusion

Decomposing a monolithic application into microservices presents an interesting challenge in enabling services to invoke one another while still keeping them loosely coupled.  Using a client-side load balancer like Ribbon along with a service discovery tool like Consul provides an excellent solution to this challenge.  As demonstrated in this post, mu makes it simple to enable service discovery in your microservice environment.  Head over to stelligent/mu on GitHub and get started!

Additional Resources

Did you find this post interesting? Are you passionate about working with the latest AWS technologies? If so, Stelligent is hiring and we would love to hear from you!

Microservice databases with mu

mu is a tool that makes it simple and cost-efficient for developers to use AWS as the platform for running their microservices.  In this third post of the blog series focused on the mu tool, we will use mu to manage microservice databases in the pipeline we built in the first post.  

Why should my microservice manage the database?

As discussed in prior posts, adopting a microservice architecture can increase a team’s ability to deliver software faster through decoupling and team autonomy.  By decomposing an application into microservices and giving teams complete ownership of them, each team can make decisions and implement changes independently of other teams and their microservices.

Unless the same approach is taken to decompose the databases that support the microservices, the benefits of microservices will be limited by cross-team dependencies on shared databases.  When your microservices share a database, you have in effect used the database as an API between the services.  This type of architecture causes tight coupling between services and will likely require regression testing and even simultaneous deployment of multiple services.

Martin Fowler, in his post titled Microservices, says “Microservices prefer letting each service manage its own database.”  By decomposing all the way down into the database, you can realize the agility benefits that microservices have to offer.

[Figure: decentralised data – each microservice manages its own database.  Source: https://martinfowler.com/articles/microservices.html]

Let mu help!

The continuous delivery pipeline that mu creates for your microservice can manage the provisioning of a database.  Additionally, the details about the database can be injected into your service as environment variables.

Let’s demonstrate this by adding a database to the microservice pipeline we created in the first post for the banana service.

Define the database

Previously, the banana service was using an embedded H2 database.  This won’t work in a production environment, so we need an RDS database instance that the microservice can use.  Adding a database to a service with mu is as simple as adding a couple of lines to your mu.yml file:

service:
  name: banana-service
  port: 8080
  pathPatterns:
  - /bananas
  database:
    name: banana

By default, this will create an RDS database instance with instance class db.t2.small and the Aurora engine.  Next we need to reference the database from our microservice.  We can pass the database URL and credentials via environment variables:

service:
  name: banana-service
  port: 8080
  pathPatterns:
  - /bananas
  database:
    name: banana

  environment:
    SPRING_DATASOURCE_USERNAME: ${DatabaseMasterUsername}
    SPRING_DATASOURCE_PASSWORD: ${DatabaseMasterPassword}
    SPRING_DATASOURCE_URL: jdbc:mysql://${DatabaseEndpointAddress}:${DatabaseEndpointPort}/${DatabaseName}
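
For reference, Spring Boot’s relaxed binding maps these environment variables onto the standard spring.datasource.* properties, so the service picks them up without any code changes.  The injected values are equivalent to an application.yml like this (shown purely for illustration, with placeholders for the values mu resolves):

spring:
  datasource:
    username: <master username resolved by mu>
    password: <master password resolved by mu>
    url: jdbc:mysql://<endpoint>:<port>/<database>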

This approach does have the disadvantage of passing database credentials as environment variables.  This presents a security issue, as any IAM user or role with access to the ECS task APIs would be able to discover the credentials.

AWS recently announced IAM database authentication, which a microservice can use to obtain temporary database credentials via an AWS API call.  Although we will save the details for a future blog post, for now it’s worth mentioning that mu can configure the database for IAM database authentication, working around the issue of passing credentials as environment variables.  This would be accomplished with a mu.yml like this:

service:
  name: banana-service
  port: 8080
  pathPatterns:
  - /bananas
  database:
    name: banana
    instanceClass: db.t2.medium
    iamAuthentication: true

  environment:
    SPRING_DATASOURCE_URL: jdbc:mysql://${DatabaseEndpointAddress}:${DatabaseEndpointPort}/${DatabaseName}
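
While we’ll save the details for that future post, a rough sketch of the credential call, using the RDS IAM authentication support in the AWS SDK for Java, looks like this (the helper class and its wiring are illustrative, not part of mu):

import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.rds.auth.GetIamAuthTokenRequest;
import com.amazonaws.services.rds.auth.RdsIamAuthTokenGenerator;

// Hypothetical helper: generates a short-lived authentication token that the
// service uses in place of a password when opening its JDBC connection
public class RdsIamTokenHelper {
  public static String getAuthToken(String host, int port, String user, String region) {
    RdsIamAuthTokenGenerator generator = RdsIamAuthTokenGenerator.builder()
        .credentials(new DefaultAWSCredentialsProviderChain())
        .region(region)
        .build();
    // the token is valid for 15 minutes and is signed with the caller's IAM
    // credentials (here, the ECS task role), so no static password is stored
    return generator.getAuthToken(GetIamAuthTokenRequest.builder()
        .hostname(host)
        .port(port)
        .userName(user)
        .build());
  }
}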

The configuration of the tables and the data in the database is managed with Liquibase.  When the service starts, Liquibase creates or updates the database tables and data.  This is accomplished by creating a file named db.changelog-master.yaml in src/main/resources/db/changelog/.
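
For illustration, a minimal db.changelog-master.yaml for the banana service might look like the following sketch (based on the fields we saw in the API responses, not necessarily the service’s actual changelog):

databaseChangeLog:
  - changeSet:
      id: create-banana-table
      author: banana-service
      changes:
        - createTable:
            tableName: banana
            columns:
              - column:
                  name: id
                  type: bigint
                  autoIncrement: true
                  constraints:
                    primaryKey: true
                    nullable: false
              - column:
                  name: picked_at
                  type: datetime
              - column:
                  name: peeled
                  type: boolean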

Now we can commit and push our changes to trigger a new run of the pipeline:

$ git add --all && git commit -m "add database" && git push

We see our pipeline is green, so we have confidence that the new database is working properly with the microservice.

Conclusion

Realizing the benefits of microservices requires decomposing not just the application, but also the databases that support it.  As demonstrated in this post, mu makes it simple to manage your databases and wire them up to your microservices.  The goal is that mu empowers you to implement microservice best practices in your application.

In the upcoming posts in this blog series, we will look into:

  • Service Discovery – use mu to enable service discovery via Consul to allow for inter-service communication
  • Additional Use Cases – deploy applications other than microservices via mu, like a wordpress stack

Until then, head over to stelligent/mu on GitHub and get started!

Additional Resources

Did you find this post interesting? Are you passionate about working with the latest AWS technologies? If so, Stelligent is hiring and we would love to hear from you!

DevOps in AWS Radio: AWS CodeStar (Episode 7)

In this episode, Paul Duvall and Brian Jakovich are joined by Trey McElhattan from Stelligent to cover recent DevOps in AWS news and speak about AWS CodeStar.

Stay tuned for Trey’s blog post on his experiences in using AWS CodeStar!

Here are the show notes:

DevOps in AWS News

Episode Topics

  1. What is AWS CodeStar? What are its key features?
  2. Which AWS tools does it use?
  3. What are the alternatives to using AWS CodeStar?
  4. If you’d like to switch one of the tools that CodeStar uses, how would you do this (e.g. use a different monitoring tool than CloudWatch)?
  5. Which are supported and how: SDKs, CLI, Console, CloudFormation, etc.?
  6. What’s the pricing model for CodeStar?

Additional Resources

  1. New – Introducing AWS CodeStar – Quickly Develop, Build, and Deploy Applications on AWS
  2. AWS CodeStar Product Details
  3. AWS CodeStar Main Page

About DevOps in AWS Radio

On DevOps in AWS Radio, we cover topics around applying DevOps principles and practices such as Continuous Delivery in the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners into our roundtables and speak with engineers who’ve recently published on our blog, and we’ll also reach out to the wider DevOps in AWS community for their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible, including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, Containers, Serverless, etc.)
    • Storage (e.g. S3, EBS)
    • Database and Data (RDS, DynamoDB, etc.)
  • Organizational and Team Structures and Practices
  • Team and Organization Communication and Collaboration
  • Cultural Indicators
  • Version control systems and processes
  • Deployment Pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service Architectures – e.g. Microservices
  • Automation of build and deployment processes
  • Automation of testing and other verification approaches, tools and systems
  • Automation of security practices and approaches
  • Continuous Feedback systems
  • Many other topics…