Screencast: Full-Stack DevOps on AWS Tool

Amazon ECS (EC2 Container Service) provides an excellent platform for deploying microservices as containers. However, there is a significant learning curve for developers to get their microservices deployed. mu is a full-stack DevOps on AWS tool that simplifies and orchestrates your software delivery lifecycle (environments, services, and pipelines). It is open source and available at http://getmu.io/. You can click the YouTube link below (we’ve also provided a transcript of this screencast in this post).

Let’s demonstrate using mu to deploy a Spring Boot application to ECS. Here’s our microservice: we’ve already got our Dockerfile set up, we’ve got our Gradle file so that we can compile the code, and we see the various classes necessary for the service. We’re using Liquibase for managing our database, so that definition file is there, and we’ve got some unit tests defined. Taking a look at the Dockerfile, we see that it’s pretty straightforward: it builds from the Java image, adds the jar, and for the entry point it just runs java -jar. So, we run mu init and that creates two files for us: a mu.yml file, which we see here, and a buildspec.yml. We need to add some things to the generated files – specifically, we want to specify Java 8 for the (AWS) CodeBuild image, and then we edit the buildspec file and tell it to use gradle build for the build command. A buildspec is a standard CodeBuild file for defining your project. With our two new files, buildspec.yml and mu.yml, in place, we commit and push them up to our source repository – in this case we’re using GitHub – and then we run the command mu pipeline up. That creates a CloudFormation stack for managing our CodePipeline and CodeBuild projects. It prompts us for the GitHub token – the access token you’ve defined inside GitHub so that CodePipeline can access your repository – so we provide that token, and then we see it creating various things like IAM roles for CodeBuild to do its business and the actual CodeBuild projects that will be used; there are quite a few different CodeBuild projects for building, testing, and deploying. Now we run the command mu service show, and it shows that a pipeline has been created and has started on the first step.
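For reference, a buildspec.yml along those lines might look roughly like this – a minimal sketch, where the artifact path is an assumption about a standard Gradle layout rather than something shown in the screencast:

```yaml
version: 0.2

phases:
  build:
    commands:
      # compile the Spring Boot service and run its unit tests
      - gradle build

artifacts:
  files:
    # assumption: the Spring Boot jar ends up in Gradle's default output directory
    - build/libs/*.jar
```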

Let’s go ahead and open up (AWS CodePipeline) in the console, and we see that, sure enough, (the Source stage of our pipeline) is running. Then we see there’s a Build stage with the Artifact and Image actions in it – that’s where we compile and build our Docker image – and there’s an Acceptance stage and then a Production stage, both of which do a deployment and then testing. Jumping back over to the command line, we can run mu service show and see that the Source action is currently running; that’s just going to take a minute before we trigger the Artifact action of the Build stage, which is where we actually do the compiling. The command we can run here is mu pipeline logs -f, and we add the -f so that we follow the logs. What happens is all of the output from CodeBuild gets sent to CloudWatch Logs, and the mu pipeline logs command allows us to tail CloudWatch Logs and watch the activity in real time. So we see that our Maven artifacts are being resolved for dependencies, and then we see “build success”, so our artifact has been built and our unit tests have passed. It’s just going to take a second for CodeBuild to upload the artifact and then trigger the pipeline to move to the next stage, which is the Image action. In the Image action, it’s going to run docker build against our artifact to create a Docker image, and it’s then going to push that image up to ECR. It’s also going to create the ECR repository, if it doesn’t exist yet, through a CloudFormation stack. So we go ahead and run mu pipeline logs, and we can see the Image action running: we’re pulling down the Docker base image (that Java image), then there’s our docker build, and now we’re pushing back up to ECR. It’ll take just a minute to upload that new Docker image with our Spring Boot application on it, and that’s completed successfully.

Now if we jump back over to mu service show, after a second we should see that we progress beyond the Build stage and into the Acceptance stage. In the Acceptance stage there are two actions: first a Deploy action that takes the image that was created and creates a new ECS service for it, and that’s what we see going on here. What’s happening is, first, it’s making sure the environment is up to date – the ECS cluster, the Auto Scaling group for it, and all the instances for ECS; it’s also updating any databases that are defined; and then finally it deploys the service. So we see a CREATE_IN_PROGRESS here – the status of the deployment to the Dev environment is in progress, meaning there’s a CloudFormation stack being deployed. I’ll go ahead and run the command mu service logs; just as there are logs for the pipeline, all the logs for your service are sent to CloudWatch Logs, so here we’re watching the logs for our service starting up – these are the Spring Boot output messages. If you’ve used Spring Boot before, this should look familiar, and it’s very helpful for troubleshooting an application to be able to see its logs in real time.
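For a sense of what that CloudFormation stack manages (this is not mu’s actual generated template – just a minimal sketch of the kind of ECS resources involved, with placeholder names), the service boils down to a task definition and a service on the cluster:

```yaml
TaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    ContainerDefinitions:
      - Name: banana-service        # placeholder container name
        # assumption: the image pushed to ECR in the Build stage
        Image: !Sub '${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/banana-service:latest'
        Memory: 512
        PortMappings:
          - ContainerPort: 8080     # the Spring Boot port

EcsService:
  Type: AWS::ECS::Service
  Properties:
    Cluster: !Ref EcsCluster        # the cluster mu manages per environment
    DesiredCount: 1
    TaskDefinition: !Ref TaskDefinition
```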

So the deployment is complete – based on the logs, we saw that it is up – so let’s go look at the environment. We run mu env list and see the Dev environment, and when we show it, we can see the EC2 instance associated with it as well as the base URL for the ELB. I’ll go ahead and run a curl command against that URL – adding the bananas URI at the end and piping the output to jq just to make it look pretty – and sure enough, we get a successful response. So our app has been deployed successfully, and we see that we are in the Approval stage and it’s waiting for approval, which means we’ve completed the Acceptance stage.

Let’s take a look at CloudFormation to see what mu has created for us. We see there are a number of CloudFormation stacks over here. Remember, everything that mu does is managed through CloudFormation; there’s no other database or anything behind mu – it’s just native AWS resources. So, for example, if we look at the VPC for the Dev environment, we see all the things you’d expect to see: routes, network ACLs, subnets, a NAT gateway, and the VPC itself. If we go to the cluster, we see the Auto Scaling group for the ECS container instances, the Application Load Balancer that’s defined for the environment, all the necessary security groups, and then some scaling policies that scale the Auto Scaling group in or out based on how many tasks are currently running. And here is the service – the banana service that has been deployed to the Dev environment – where we see the IAM roles, task definition, and so on for the service.

Now, one thing we didn’t do previously was any testing. What you can do is create a file called buildspec-test.yml, and anything you define in that YAML will be run as a test action after the deployment is made; it’s a standard CodeBuild buildspec file. In this case we’re going to use a tool called Newman. Newman is a Node.js command-line tool for running Postman collections, and Postman is a tool for testing RESTful APIs. So we have our Postman collections, and we’re configuring the buildspec to run Newman for our tests. We also have to make a change to mu.yml – we have to configure the acceptance environment to use a Node.js CodeBuild image – and that’s what we’ve done there. With those two changes we can run mu pipeline up, which will update the CodeBuild project to use the Node.js image, and once our pipeline is up to date we can commit our change, which is that buildspec-test file. Once we push that up, the pipeline starts running again; this time the tests will actually run, and we’ll get some assurance that the code is ready to go to production. So we make that change, push it, and if we look at the service we’ll see that the Source action has triggered, and we’ll just let this run for a while. The whole pipeline has to run, but actions like Artifact and Image won’t really cause any change because we didn’t actually change the source code – they go ahead and run anyway. We are now in the Image stage, where we take the new jar file, build a Docker image from it, and push that up to ECR; we’ve now hit the Deploy stage, so the latest Docker image is being used for the ECS service.
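Going back to that buildspec-test.yml, a rough sketch of its contents might be the following – the collection path is illustrative rather than taken from the screencast:

```yaml
version: 0.2

phases:
  install:
    commands:
      # install the Newman CLI used to run Postman collections
      - npm install -g newman
  build:
    commands:
      # assumption: the Postman collection is checked into the repo at this path
      - newman run postman/banana-service.postman_collection.json
```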

Once that completes, we run mu pipeline logs again to watch the CodeBuild project doing the testing, and here we go: the testing is running. It runs npm install to install our dependencies, namely the Newman tool, and then we see some results. I see status code 200 – that looks good. Under the fail column I see a bunch of zeros, which looks great, and then I see “build success”. So not only has our application been deployed to ECS, but we’ve also been able to test it, and now those tests will run as part of every execution of the pipeline, on every commit. Now, the other thing to recognize here is that this application we built is managing our inventory of bananas, but it doesn’t have a real database behind it – we’re just using the in-memory H2 database that comes with the Java app. So let’s go ahead and make a change here and configure mu to provision a real database. With mu, that’s as easy as defining a database: you give it a name, and you could specify other things like a type, but it will default to Aurora on RDS. Then you’re going to want to pass some environment variables so we can give the database connection information to our Spring app; since we’re using Spring’s data source support, it’s just a matter of defining three environment variables. You’ll notice that the username, password, and endpoint are not actually in the mu.yml file – we don’t want those in there. What happens is mu creates them for us and makes them available as CloudFormation parameters that we can reference with the dollar-sign notation that CloudFormation offers. Ok, so now that we’ve got that change made, we add it, commit it, and push it up, which triggers a new run of the pipeline. Again, we’ve got to go through all those earlier actions to ultimately get to the Deploy action, where the RDS database will be created. You can choose any RDS database type, but we’re using Aurora by default.
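To make that concrete, the database addition to mu.yml might look roughly like this – a sketch where the keys and CloudFormation parameter names are illustrative of the idea rather than copied from the screencast:

```yaml
service:
  name: banana-service
  port: 8080
  # ask mu to provision a database (defaults to Aurora on RDS)
  database:
    name: bananas
  # pass connection details to Spring Boot; the username, password, and endpoint
  # values are generated by mu and exposed as CloudFormation parameters, so they
  # never appear in this file
  environment:
    SPRING_DATASOURCE_USERNAME: ${DatabaseMasterUsername}
    SPRING_DATASOURCE_PASSWORD: ${DatabaseMasterPassword}
    SPRING_DATASOURCE_URL: jdbc:mysql://${DatabaseEndpointAddress}:${DatabaseEndpointPort}/bananas
```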

Now, one question is: how does the password get defined? The way this works is we use an AWS service called Parameter Store, which manages secrets. When mu starts up, it checks whether a password is defined, and if it isn’t, it generates a random 16-character string and adds it to Parameter Store. Later, when it deploys the service, it pulls the password out of Parameter Store and passes it in as an environment variable. Those parameters are encrypted with KMS – the AWS Key Management Service – so they are secure.

Ok, so looking at the logs from the service now – these are our Spring Boot startup logs. What I’m expecting to see is that rather than H2 as the dialect… there you go, we see MySQL is the dialect for the connection. That tells me Spring Boot detected our environment variables and recognized that we are in fact trying to talk to MySQL – let me go ahead and highlight that here. So this tells us that our application is in fact connecting to a MySQL database, which is provided by RDS and wired up via mu. We can look at our service again and watch the pipeline run, and we get some confirmation that we didn’t break anything, because we have those tests as part of our pipeline now. So we’ll let this go – our tests are running – and once that completes we’ll have a good feeling that this change is ready to promote to production.

Well thanks for watching and check out https://getmu.io to learn more.

AWS CodeBuild is Here

At re:Invent 2016, AWS introduced AWS CodeBuild, a new service that compiles source code, runs tests, and produces ready-to-deploy software packages. AWS CodeBuild handles provisioning, management, and scaling of your build servers. You can either use pre-packaged build environments to get started quickly, or create custom build environments that use your own build tools. CodeBuild charges by the minute for compute resources, so you aren’t paying for a build environment while it is not in use.

AWS CodeBuild Introduction

Stelligent engineer Harlen Bains has posted An Introduction to AWS CodeBuild to the AWS Partner Network (APN) Blog. In the post he explores the basics of AWS CodeBuild and then demonstrates how to use the service to build a Java application.
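To give a flavor of what driving a Java build with CodeBuild looks like (this sketch is not taken from Harlen’s post; the Maven command and artifact path are placeholders), the project’s buildspec can be as simple as:

```yaml
version: 0.2

phases:
  build:
    commands:
      # compile and unit-test the Java application
      - mvn clean package

artifacts:
  files:
    # assumption: Maven places the built jar under target/
    - target/*.jar
```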

Integrating AWS CodeBuild with AWS Developer Tools

In the follow-up post, Deploy to Production using AWS CodeBuild and the AWS Developer Tools Suite, Stelligent CTO and AWS Community Hero Paul Duvall expands on how to integrate and automate the orchestration of CodeBuild with the rest of the AWS Developer Tools suite – including AWS CodeDeploy, AWS CodeCommit, and AWS CodePipeline – using AWS’ provisioning tool, AWS CloudFormation. He goes over the benefits of automating all the actions and stages into a deployment pipeline, and provides an example with a detailed screencast.
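To give a sense of what that CloudFormation orchestration involves, a CodeBuild project wired into a CodePipeline pipeline can be declared roughly like this – a sketch in which the resource names, role, and image are placeholders rather than anything from Paul’s post:

```yaml
BuildProject:
  Type: AWS::CodeBuild::Project
  Properties:
    Name: my-app-build                        # placeholder project name
    ServiceRole: !GetAtt CodeBuildRole.Arn    # assumes an IAM role defined elsewhere in the template
    Source:
      Type: CODEPIPELINE                      # CodePipeline hands the source to CodeBuild
    Artifacts:
      Type: CODEPIPELINE                      # build output goes back to the pipeline
    Environment:
      Type: LINUX_CONTAINER
      ComputeType: BUILD_GENERAL1_SMALL
      Image: aws/codebuild/java:openjdk-8     # a pre-packaged Java build image
    TimeoutInMinutes: 10
```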

In the Future

Look to the Stelligent Blog for announcements, evaluations, and guides on new AWS products.  We are always looking for engineers who love to make things work better, faster, and just get a kick out of automating everything.  If you live and breathe DevOps, continuous delivery, and AWS, we want to hear from you.

Stelligent AWS Continuous Delivery Demo Screencast

See the YouTube screencast and transcript below of a Continuous Deployment pipeline demonstration for a Node.js application using AWS services such as EC2, DynamoDB, Route 53, ENI, and VPC, and tools such as AWS CodePipeline, Jenkins, Chef, and AWS CloudFormation. The open source code is available at https://github.com/stelligent/dromedary.

Transcript
In this screencast, you’ll see a live demonstration of a system that uses Continuous Deployment of features based on a recent code commit to GitHub.

You can access this live demo right now by going to demo.stelligent.com. All of the code is available in open source form by going to https://github.com/stelligent/dromedary.

This is a simple Node.js application in which you can click on any of the colors to “vote” for that color and see the results of your votes and others in real time. While it’s a simple application, it uses many of the types of services and tools that you might define in your enterprise systems, such as EC2, Route 53, DynamoDB, VPC, Elastic IP and ENI, and it’s built, deployed, tested and released using CloudFormation, Chef, CodePipeline and Jenkins – among other tools.

So, I want you to imagine there’s several engineers on a team that have a dashboard like this on a large monitor showing AWS CodePipeline. CodePipeline is a service released by AWS in July 2015. With CodePipeline, you can model your Continuous Delivery and Continuous Deployment workflows – from the point at which someone commits new code to a version-control repository until it gets released to production.

You can see it shows there’s a failure associated with the infrastructure test. So, let’s take a look at the failure in Jenkins.

Many of you may already be familiar with Jenkins as it’s a Continuous Integration server. In the context of this demo, we’re using CodePipeline to orchestrate our Continuous Deployment workflow and Jenkins to perform the execution of all the code that creates the software system.

Now, the reason for this failure is because we’ve written a test to prevent just anyone from SSH’ing into the EC2 instance from a random host. So, I’m going to put my infrastructure developer hat on and look at the test that failed.

This is an RSpec test that checks whether port 22 is accessible from any host. This test gets run every time someone commits any code to the version-control repository.

Based on this test failure, I’m going to look at the CloudFormation template that provisions the application infrastructure. The app_instance.json CloudFormation template defines the parameters, conditions, mappings, and resources for the application infrastructure. With CloudFormation, there are over 100 built-in AWS resource types you can define in code. Here, I’m looking at the resource in this template that defines the security group for the EC2 instance that’s hosting the application.

I’m going to update the CIDR block to a specific host using the /32 notation so that only I can access the instance that hosts the application.
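The demo’s template is JSON, but the security group resource being edited looks roughly like this when expressed in CloudFormation YAML – the resource names and the sample IP address are placeholders, not values from the dromedary code:

```yaml
AppSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Access rules for the application instance
    VpcId: !Ref VpcId
    SecurityGroupIngress:
      # before the fix this rule was open to 0.0.0.0/0; a /32 CIDR restricts SSH to one host
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: 203.0.113.10/32
      # the application itself stays reachable over HTTP
      - IpProtocol: tcp
        FromPort: 80
        ToPort: 80
        CidrIp: 0.0.0.0/0
```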

Normally, I might wait for the changes made by the other infrastructure developer to successfully pass through the pipeline, but in this case, I’m just going to make my application code changes as well.

So, putting my application developer hat on, I’m going to make an application code change by adding a new color to the pie chart that gets displayed in our application. So, I’ll add the color orange to this pie chart. I’m also going to update my unit and functional tests so that they pass.

Now, you might’ve noticed something in the application. I’m not using SSL. So you go to http, instead of https. That’s insecure. So, I’m going to make some code changes to enable SSL in my application.

Ok. So, I’ve committed my changes to Git and it’s going to run through all the stages and actions in CodePipeline.

So, CodePipeline is polling GitHub looking for any changes. Since I just committed some new changes, CodePipeline will discover those changes and begin the execution of the defined jobs in Jenkins.

Now, you’ll notice that CodePipeline picked up these changes – the security/infrastructure and SSL changes, along with the application code changes, will be built, tested, and deployed as part of this deployment pipeline.

You’ll see that there are a number of different stages here, each consisting of different actions. A stage is simply a logical grouping of actions and is largely dictated by your application or service. The actions themselves call out to other services. There are four types of built-in actions within CodePipeline: source, build, test, and deploy. You can also define custom actions in CodePipeline.

Ok, now CodePipeline has gone through all of its stages successfully.

You can see that I added the new color, orange, and all my unit and infrastructure tests passed.

It spun up new infrastructure and used the Blue/Green Deployment pattern to switch to the new environment using Route 53. With a blue-green deployment, we’ve got a production environment (we’ll call this “blue”) and a pre-production environment that looks exactly like production (we’ll call this “green”). We commit the changes to GitHub, and CodePipeline orchestrates them by spinning up the green environment and then using Route 53 to move all traffic to green. Another way of putting this is that you’re switching between the production and pre-production environments. With this approach you can continue serving users without them experiencing any downtime, and you can also roll back – that blue environment can become production again if anything goes wrong with the deployment.
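For those curious how that Route 53 switch can be expressed in CloudFormation, here is a rough sketch using weighted record sets – the domain, hosted zone, and load balancer names are placeholders, and this is one way to model the cutover rather than necessarily how dromedary implements it:

```yaml
BlueRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: example.com.               # placeholder zone
    Name: demo.example.com.                    # placeholder record name
    Type: CNAME
    TTL: '60'
    SetIdentifier: blue
    Weight: 0                                  # was 100 before the cutover
    ResourceRecords:
      - !GetAtt BlueLoadBalancer.DNSName

GreenRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: example.com.
    Name: demo.example.com.
    Type: CNAME
    TTL: '60'
    SetIdentifier: green
    Weight: 100                                # all traffic now goes to the green environment
    ResourceRecords:
      - !GetAtt GreenLoadBalancer.DNSName
```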

So, let’s summarize what happened. I made application, infrastructure, and security changes as code and committed them to Git. They automatically went through a deployment pipeline using CodePipeline and Jenkins – which was also defined in code. It built, ran unit and functional tests, stored the distribution, launched an environment from code, and deployed the application – using CloudFormation and Chef. As part of this pipeline, it also ran infrastructure tests as code and deployed to production without anyone lifting another finger. This way you get feedback as soon as an error occurs, and you only release to production when it passes all these checks. You can do the same thing in your enterprise: from commit to production in minutes or hours instead of days or weeks. Just think about what’s possible when your releases become a non-event like this!

DevOps in the Cloud LiveLessons (Video)

DevOps in the Cloud LiveLessons walks viewers through the process of putting together a complete continuous delivery platform for a working software application written in Ruby on Rails, along with examples in other development platforms such as Grails and Java on the companion website. These applications are deployed to Amazon Web Services (AWS), an infrastructure as a service commonly referred to as “the cloud”. Paul M. Duvall goes through the pieces that make up this platform, including infrastructure and environments, continuous integration, build and deployment scripting, testing, and databases. Viewers will also learn configuration management and collaboration practices and techniques, along with what the nascent terms DevOps, continuous delivery, and continuous deployment are all about. Finally, since this LiveLesson focuses on deploying to the cloud, viewers will learn the ins and outs of many of the services that make up the AWS cloud infrastructure. DevOps in the Cloud LiveLessons includes contributions by Brian Jakovich, who is a Continuous Delivery Engineer at Stelligent.

DevOps in the Cloud LiveLessons

Visit www.devopscloud.com to download the complete continuous delivery platform examples that are used in these LiveLessons.

Lesson 1:
Deploying a Working Software Application to the Cloud provides a high-level introduction to all parts of the Continuous Delivery system in the Cloud. In this lesson, you’ll be introduced to deploying a working software application into the cloud.

Lesson 2:
DevOps, Continuous Delivery, Continuous Deployment and the Cloud covers how to define motivations and differentiators around Continuous Delivery, DevOps, Continuous Deployment and the Cloud. The lesson also covers diagramming software delivery using spaghetti diagrams and value-stream maps.

Lesson 3:
Amazon Web Services covers the basics of the leading Infrastructure as a Service provider. You’ll learn how to use the AWS Management Console, launch and interact with Elastic Compute Cloud (EC2) instances, define security groups to control access to EC2 instances, set up an elastic load balancer to distribute load across EC2 instances, set up Auto Scaling, and monitor resource usage with CloudWatch.

Lesson 4:
Continuous Integration shows how to set up a Continuous Integration (CI) environment, which is the first step to Continuous Integration, Continuous Delivery and Continuous Deployment.

Lesson 5:
Infrastructure Automation covers how to fully script an infrastructure so that you can recreate any environment at any time utilizing AWS CloudFormation and the infrastructure automation tool, Puppet. You’ll also learn about the “Chaos Monkey” tool made popular by Netflix – a tool that randomly and automatically terminates instances.

Lesson 6:
Building and Deploying Software teaches the basics of building and deploying a software application. You’ll learn how to create and run a scripted build, create and run a scripted deployment in Capistrano, manage dependent libraries using Bundler and an Amazon S3-backed repository, and deploy the software to various target environments including production using the Jenkins CI server. You will also learn how anyone on the team can use Jenkins to perform self-service deployments on demand.

Lesson 7:
Configuration Management covers the best approaches to versioning everything in a way where you have a single source of truth and can look at the software system and everything it takes to create the software as a holistic unit. You’ll learn how to work from the canonical version, version configurations, set up a dynamic configuration management database that reduces the repetition, and develop collective ownership of all artifacts.

Lesson 8:
Database covers how to entirely script a database, upgrade and downgrade a database, use a database sandbox to isolate changes from other developers and, finally, version all database changes so that they can run as part of a Continuous Delivery system.

Lesson 9:
Testing covers the basics of writing and running various tests as part of a Continuous Integration process. You’ll learn how to write simple unit tests that run fast at the code level, infrastructure tests, and deployment tests – sometimes called smoke tests. You will also learn how to get feedback on the test results from the CI system.

Lesson 10:
Delivery Pipeline demonstrates how to use the Build Pipeline plug-in in Jenkins to create a delivery pipeline for the commit, acceptance, load & performance and Production stages so that software can potentially be delivered to users with every change.

Elastic Operations – Scale Engineers in the Cloud

I’m happy to announce that we’re now offering Elastic Operations. Elastic Operations is a managed service that eliminates all your hardware and replaces it with a reliable, scalable cloud supported by Operations Engineers.

You get self-service provisioning, build, deployment, database administration, issue tracking, and system monitoring — all managed by our expert engineers at one flat rate per month.

Your flat rate is based on the number of applications you’d like to manage with Elastic Operations.  And you can scale these experts up and down on a monthly basis, just like you do with Cloud Computing.

We offer various flat-rate plans for development operations, testing, and production. By utilizing 100% automation and the commoditization of hardware via the cloud, we offer drastically reduced prices compared to traditional operations teams who manage data centers.

What applications would you like to manage better? Sign up for Elastic Operations today, and your applications could be up and running in the cloud tomorrow. Check out the one-minute video on Elastic Operations here.


 

To get more information on Elastic Operations or Stelligent, send an email to elasticops@elasticoperations.com.

P.S. Use our Cloud ROI Calculator to learn how much you can save when moving to an Automated Operations Cloud.

Fire your 1990s-style Operations Team

A brief conversation between a developer and a Systems Engineer who still runs his systems like it was 1995…


  

Developer: I would like a target environment created for me.

Operations: You need to send an email and we will get back to you in a day or so.

Developer: Ok, sending the email now.

Operations: Thanks for the email. Please send us your requirements including your overall architectural approach.

Developer: Ok, here are our requirements and architecture.

Operations: Now, we need to get approval from management.

Operations: Ok, we need to schedule a meeting to go over your requirements.

Operations: Now that we've had the meeting, we need to schedule a time to set up the servers and environment. This will take a couple of days.

Developer: So, to get one environment it takes 40 hours of actual time and one week of wait time? I'm going to the Cloud and using a provisioning application so that I can get my environment in minutes instead of weeks!

 

Screencast on using Hudson Continuous Integration

The Integrate Button website (from Paul Duvall’s book on Continuous Integration) recently published a screencast on using the Hudson Continuous Integration server – along with Subversion, Ant, HSQL and other tools. Click the image below to get started.

Continuous Integration Hudson screencast

The screencast demonstrates the following steps:

  • Check out source files from the Subversion repository – locally
  • Run the automated build locally
  • Commit files to Subversion
  • Download, install, and configure Hudson – from the Hudson website or from the IntegrateButton scripts
  • Make a code change (with an error) and check in the files
  • Get notified of the error automatically via Hudson
  • Fix the code errors, commit the change, and see the results in Hudson

Originally authored by Stelligent at testearly.com

Screencast on AWS Elastic Beanstalk

Amazon Web Services released their Platform as a Service offering on Wednesday, January 19th. I've gotten an opportunity to play with it and I'm quite impressed. I created a seven-minute screencast that takes you through the steps to deploy and configure an application/environment using Elastic Beanstalk. In this screencast, you'll see how easy it was to get a Hudson CI server up and running in an EC2 environment. Furthermore, Elastic Beanstalk provides automatic scaling, monitoring, configuration right 'out of the box'. It's worth checking out.