
Firewalls, controlled by a Pipeline?

Is updating your firewall a painful, slow process? Does the communication gap between development teams and security teams cause frustration? If so, you’re not alone. In technology organizations, changes to firewalls tend to be slow and typically cause developer teams and security teams numerous headaches. However, controlling firewall and security settings with a pipeline, managed with CloudFormation, can provide your organization faster turnaround, less risk, and, in turn, more business value.

So what might a firewall update process currently look like?

First, fill out a ticket in a tool far outside the normal realm of a developer’s typical workflow

The person requesting the firewall change flounders through an often tedious workflow that is poorly documented and unclear. They may input partial, unnecessary, or unwittingly incorrect information. The ticket, in turn, becomes unhelpful and uninformative for the security personnel implementing the change.

If the ticket says “Instances need ping access”, what does this mean? Do all instances need it? On which port? For all the message types, or just echo request? The ticket system gives no clear path forward for communication to clarify these questions. In the worst situation, unruly developers like myself become frustrated and come up with methods to work around the system, creating even more security holes and issues.

Next, wait, while the ticket falls into a black hole in the galaxy of the security team

In some organizations, this waiting game can take months, and in the case of unlucky, forgotten tickets, perhaps a year. Delays in delivering business value ensue: features and products are delayed in their release. Some of these processes give no information to those relying on the firewall change. There is no way to see if it has been assigned, or to whom. There is little opportunity to follow up and escalate the importance of the changes.

Finally, enjoy the updates to the firewall!

On some dark, stormy night, the security team will work deep into the night applying a change set. If the team works in AWS, this could be a combination of updates to Security Groups, IAM Roles, Routing Tables, and more. Perhaps these tweaks are even applied manually, using the AWS console, making them subject to human error via fat-fingering, lapses in attention, and more. What’s more, these updates may be hard to test, resulting in unexpected glitches the next day, followed by hours of debugging if all has not gone smoothly.

Why is this process an issue?

The process described above is at odds with the ideas behind Continuous Delivery, where teams aim to push code into production frequently, which could be as rarely as once a sprint or as often as multiple times per day. When delivering in such a way, the infrastructure has to be as agile and quick-moving as the code. By requiring weeks, even months, for an infrastructure change, the release of some features can be delayed for long periods of time, instead of being delivered as soon as the feature is working. The development teams are stifled, unable to deliver new features providing new business value while they wait on the slow change process.

CloudFormation to the Rescue

CloudFormation is an AWS product that allows users to easily create, update, and destroy AWS resources by running scripts called templates. The templates create stacks, which are collections of resources managed as a unit. CloudFormation allows engineers to create templates that manage security and network rules related to firewall security for different VPCs, subnets, and individual EC2 instances running on AWS. These templates can be stored in private repositories in source control, where the security folks can create, update, and test new security rules and features.
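For example, the whole stack lifecycle can be driven from the command line (stack and file names here are illustrative):

aws cloudformation create-stack --stack-name my-stack --template-body file://my.template
aws cloudformation update-stack --stack-name my-stack --template-body file://my.template
aws cloudformation delete-stack --stack-name my-stack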

What could a future firewall update process look like?

To manage this, the developer team and the security team would be responsible for separate CloudFormation stacks. The security team’s stacks would manage resources like security groups and firewall settings. The dev team’s stack would manage EC2 instances and other resources that would use this security group or live behind this firewall.

  1. When the developer team needs a security update, a developer creates a change to the firewall CloudFormation template and tests it in an isolated environment
  2. She then submits a pull request to the security team’s CloudFormation repository to implement her firewall change
  3. The security team can assign the pull request to one of their engineers, and the developer can watch the discussion and progress around the request
  4. If the security team approves the change, they merge the pull request, which kicks off the automated deployment process
  5. If the security team does not approve the change, the pull request is rejected, which notifies the developer and gives them a forum to discuss why it was denied and what changes would allow it to be approved

What does this new process fix?

This new process provides major benefits for both teams.

First, it gives both teams more speed and eliminates the black hole of “wait”. The change has already been made; the security team simply needs to review it. The developer requesting the change can stay informed on the progress of the change as well, giving all parties a forum for discussion and clarity.

Second, it builds in quality. By handling the configuration of firewalls and security settings with CloudFormation, fat-fingered configuration commands are all but eliminated, along with other manual errors. The CloudFormation templates can be validated for correct syntax before running. Additionally, the ease of spinning up resources with CloudFormation makes spinning up testing resources viable, easy, and fast, thereby encouraging testing.

Example Implementation

Below is a simplistic way to achieve this, by following the Red/Green approach from Extreme Programming. In this approach, first we write a failing test (the Red stage), then we make changes that make the test pass (the Green stage). First, we want to verify that, based on our security group setup, the ec2 instances do not have ping (icmp) access. We do this by setting up our environment and running a test script. We want to see that our test fails. Next, we update our security group to allow ping access, and re-run our test script. Then, we want to see that the test script passes.

First, we set up our templates to create our environment

Here is a file called security-group-stelligent-blog.template. It creates a security group that allows ssh access and no icmp (ping) traffic. The template outputs the id of the security group created.
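The original template isn’t reproduced here; a minimal sketch of what it might contain looks like this (YAML for brevity; the logical resource name is borrowed from the stack output shown below, the output key is an assumption):

AWSTemplateFormatVersion: '2010-09-09'
Description: Security group allowing ssh, with no icmp (ping) access
Resources:
  PingAndSshSecurityGroupStelligentBlog:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow ssh in; icmp is intentionally absent
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: '22'
          ToPort: '22'
          CidrIp: 0.0.0.0/0
Outputs:
  PingAndSshSecurityGroupOutput:
    Description: Id of the security group created
    Value: !Ref PingAndSshSecurityGroupStelligentBlog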

Here is a file called ec2-instances-stelligent-blog.template. It creates two t1.micro EC2 linux instances named InstanceOneStelligentBlog and InstanceTwoStelligentBlog, both of which reference the security group created in the template above.
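Again as a sketch (the parameter, AMI, and key pair names are assumptions), the instances reference the security group from the other stack via a parameter:

AWSTemplateFormatVersion: '2010-09-09'
Description: Two EC2 instances that use the blog security group
Parameters:
  SecurityGroupStelligentBlog:
    Type: String
    Description: The security group output from the stack above
  ImageId:
    Type: AWS::EC2::Image::Id
  KeyName:
    Type: AWS::EC2::KeyPair::KeyName
Resources:
  InstanceOneStelligentBlog:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t1.micro
      ImageId: !Ref ImageId
      KeyName: !Ref KeyName
      SecurityGroups:
        - !Ref SecurityGroupStelligentBlog
  InstanceTwoStelligentBlog:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t1.micro
      ImageId: !Ref ImageId
      KeyName: !Ref KeyName
      SecurityGroups:
        - !Ref SecurityGroupStelligentBlog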

In my case, this-is-the-security-group-output looks like this: “stelligent-blog-security-groups-PingAndSshSecurityGroupStelligentBlog-DGVKCW4AH7A4”.

Then, we test to see whether our current environment has the functionality we want (It shouldn’t, as we have not implemented it yet)

Here is a test script that can verify whether the given target endpoint can be pinged.
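The script itself isn’t reproduced here; a minimal bash version might look like this (the file name test-ping.sh is assumed):

#!/bin/bash
# test-ping.sh -- pass the target host/ip as the only argument
TARGET=$1
if ping -c 1 -W 2 "$TARGET" > /dev/null 2>&1; then
  echo "Test Passed"
else
  echo "Test Failed"
fi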

Create both the security group stack and ec2 instances’ stack, then copy the test file from your local machine onto one of the instances:
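Something like the following, where this-is-the-security-group-output is the value from the first stack’s Outputs (the parameter key, key path, and host name are assumptions):

aws cloudformation create-stack --stack-name stelligent-blog-security-groups \
  --template-body file://security-group-stelligent-blog.template
aws cloudformation create-stack --stack-name stelligent-blog-ec2-instances \
  --template-body file://ec2-instances-stelligent-blog.template \
  --parameters ParameterKey=SecurityGroupStelligentBlog,ParameterValue=this-is-the-security-group-output
scp -i /path/to/key.pem test-ping.sh ec2-user@instance-one-public-dns:~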

Then we log on to our instance via ssh:
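The key path and public DNS name below are placeholders for your own:

ssh -i /path/to/key.pem ec2-user@instance-one-public-dns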

And run the test script
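For example, pinging the second instance from the first (private IP is a placeholder):

chmod +x test-ping.sh
./test-ping.sh instance-two-private-ip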

Here, we should see the test output “Test Failed”

Next, we update the script to make the test pass

In security-group-stelligent-blog.template, update the SecurityGroupIngress as seen below. It now allows icmp (ping) traffic from anywhere.
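A sketch of the updated ingress block, matching the sketch above (the rest of the resource is unchanged; -1 means all ICMP types and codes):

SecurityGroupIngress:
  - IpProtocol: tcp
    FromPort: '22'
    ToPort: '22'
    CidrIp: 0.0.0.0/0
  - IpProtocol: icmp
    FromPort: '-1'
    ToPort: '-1'
    CidrIp: 0.0.0.0/0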

Now deploy (not create!) the changes to your security group stack and run the test script again.
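One way to apply the update in place (aws cloudformation update-stack works as well):

aws cloudformation deploy --stack-name stelligent-blog-security-groups \
  --template-file security-group-stelligent-blog.template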

 

Here, we should see the test output “Test Passed”

Now run with it…

Expanding on this approach in other ways can improve your current infrastructure delivery process. Instead of just managing security groups and firewalls, think about managing other security resources in AWS, like IAM Roles, S3 Bucket Access, or Route Tables for subnets. By managing resources with CloudFormation, you can then utilize linting tools for your templates; for example, cfn_nag can flag overly permissive IAM rules. Furthermore, processes around required testing could blossom. Security teams could require a corresponding test be submitted with each pull request, thereby building up a test suite to ensure their users get the security settings they need. Additionally, the security team will have confidence that new changes won’t break their regression suite of tests. Make your security process one that everyone involved is happy to participate in.

Resources:

cfn_nag – https://github.com/stelligent/cfn_nag

 

npm vs Yarn

Late last year, a new package management system for Javascript was introduced, known as Yarn, designed to address the deficiencies found in npm. Yarn has been showing up on Twitter and in StackOverflow answers, but it isn’t always clear what the benefits and tradeoffs are, which is what this post aims to clear up.

Yarn was created in collaboration by developers at Facebook, Google, Exponent, and Tilde, in order to solve a growing set of problems with the npm (or node package manager) client.  The biggest of these issues included speed, reliability, and security, which are the hallmarks of good programming practices, let alone devops.  

The Benefits of Yarn

Yarn is able to help improve processes in several key ways.  The first of these is by far the most intriguing: speed improvement.  One particular npm install on a recent project of mine took three and a half minutes to complete: nothing earth-shattering, until Yarn showed what it could do. After setting up and running yarn install, the call dropped to under a minute!

One benchmark found Yarn to be two to three times faster than npm, and it was just as fast, if not faster, running on a non-clean build.  On larger scale projects, this can mean the difference between acceptable build times for a pipeline step versus build times slowly getting longer and longer (we are always trying to achieve fast feedback!).  Why is it so much faster?  The answer lies in parallel installations.  While npm will not install the next package until all its dependencies are installed, Yarn will install several packages simultaneously.

Yarn also has better versioning control of packages, with the use of a yarn.lock file, which is similar to a Gemfile.lock file in Ruby.  While the package.json file can define a range of module versions, Yarn’s lock file ensures the same package versions are installed on all devices.  The lock file is created or updated as needed when a new module is added.  npm does have its own solution to this problem in shrinkwrap; however, the shrinkwrap file is not generated automatically and must be maintained by developers to ensure it stays up to date.  Yarn’s lock file system does all of that automatically by default, making shrinkwrap an unnecessary process and one less file to maintain.
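For illustration (package and versions invented for the example), a version range in package.json and the pinned entry Yarn records in yarn.lock might look like this:

In package.json:

"dependencies": {
  "left-pad": "^1.1.0"
}

In yarn.lock:

left-pad@^1.1.0:
  version "1.1.3"
  resolved "https://registry.yarnpkg.com/left-pad/-/left-pad-1.1.3.tgz#..."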

On the security front, npm will run code from dependencies automatically and allow packages to be added on the fly, but Yarn only installs from yarn.lock or package.json.  A yarn add command will install the package and its dependencies much like npm install does, but Yarn will also add this package to the package.json file as a dependency, where npm would need a --save flag.
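For example, using a real package name purely for illustration:

yarn add lodash            # installs and records the dependency in package.json
npm install lodash --save  # npm needs the extra flag to update package.json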

Another great feature is that Yarn only reaches out to the internet to install packages it does not already have, leaving previously installed packages to be read from the local .yarn-cache directory, saving even more cycle time.  Yarn is also non-verbose by default, whereas npm is always overly verbose; Yarn can be made verbose, but its output is still far cleaner and easier to read than npm’s.

Drawbacks

While Yarn offers some great benefits, it’s not quite flawless.  The first important issue to notice is that Yarn uses a different registry than npm.  This may at first be cause for alarm, but the registry chosen can be changed and is not necessarily less reliable than npm’s.  A best practice is to point both npm and Yarn at your own registry rather than the default, helping to create a reliable Continuous Delivery pipeline backed by your own repository.
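Both clients make this a one-line change (the registry URL is a placeholder for your own):

yarn config set registry https://registry.example.internal
npm config set registry https://registry.example.internal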

One other concern with Yarn is that it’s a new framework, which can make some people uncomfortable, though this is ironic given how there seems to be a new Javascript framework developed every six hours.  Where mature tools are a priority, Yarn may not be the best for a stable experience, as with new tools there’s always a roll of the dice.  With that said, we’ve been using it in our production pipeline for a couple of months without issue. Developers who benefited from Yarn being added to their pipeline have since started using Yarn locally to replace npm in their own work and are quite satisfied with the results.

Conclusion

Yarn has a number of improvements over npm, whether it’s faster processing, more security, or better dependency management.  It may be young compared to npm, but it’s quickly gaining traction in the Javascript world, and for good reason.  Using Yarn in your pipeline can help alleviate a number of headaches and add excellent refinements to your Javascript package management.  Yarn is a worthy successor and replacement for npm, and we are looking forward to its continued advancement.

Let us know if you have any comments or questions @stelligent

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “one-button everything” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

Containerizing Jenkins with Docker and CentOS

A containerized Jenkins setup, with all the tools ready to go, is super useful for the DevOps developer. Jenkins makes it easy to parameterize and manage jobs, and so running numerous tests in parallel is efficient and profitable. Docker allows us to containerize such an environment. This blog post defines an all-in-one Jenkins container for use in DevOps development.

We define our Jenkins container in the Dockerfile, and then build and run it with:
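For example (the image tag jenkins-devops is an assumption; the run flags are explained below):

docker build -t jenkins-devops .
docker run -d --net=host jenkins-devops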

Jenkins allows us to capture and retain output from job runs. We can share output as a link with other developers. We also want Jenkins to establish outgoing connections natively from the host (i.e., the container is not using NAT). This is important to ensure our development jobs have access to services on the host, such as Hologram for the AWS metadata service. To do this, we can run the container with host networking by passing --net=host to the docker run command.

The container will open up tcp/8080 directly on the docker host, so it can be hit by others, given proper network/host firewalling rules.  Any ports opened by jobs will also open on the host IP, which may also be seen remotely (if multiple jobs try to open the same port, jobs will fail, so these must be run serially).  Remote users would need to have a VPN setup, or simply run the container on the cloud and set TCP/IP rules to allow *only* each developer’s public IP. Please be aware that Remote Code Execution is a serious exploit, and a publicly accessible Jenkins is the best way to facilitate that.  In addition to layer 3/4 TCP/IP firewalling rules, layer 7 application security should always be enforced.  This means creating a Jenkins user for each person connecting and setting a secure password for each account.

We need to make our Jenkins home directory available to the container, so we bind the Jenkins directory from our host to /var/lib/jenkins via adding a volume directive to our docker run command:

--volume /path/to/jenkins/on/host:/var/lib/jenkins

We can easily port our environment between hosts by simply rsync’ing the Jenkins home. We can also encrypt /path/to/jenkins/on/host, so that it must be specifically decrypted for the jenkins container to access it. When not using the container, the data can remain encrypted at rest. This can be set up by making /path/to/jenkins/on/host a separate partition, which can then be encrypted.  The same process can be very useful when containerizing source control, like Gitlab.
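As a sketch, one way to do this on Linux is with LUKS (the device name is an assumption):

cryptsetup luksFormat /dev/xvdf
cryptsetup luksOpen /dev/xvdf jenkins-home
mkfs.ext4 /dev/mapper/jenkins-home
mount /dev/mapper/jenkins-home /path/to/jenkins/on/host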

As there are race conditions and possible corruption when running docker-in-docker, it may be better to simply bind the docker socket to the container. This allows containers launched by Jenkins to run on the host in parallel.  We also want to bind the vbox device so that jenkins can run vagrant with virtualbox backend. We add these volume directives to our docker run command:

--volume /dev/vboxdrv:/dev/vboxdrv
--volume /var/run/docker.sock:/var/run/docker.sock
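Putting it all together, the full run command looks something like this (image name as assumed above):

docker run -d --net=host \
  --volume /path/to/jenkins/on/host:/var/lib/jenkins \
  --volume /dev/vboxdrv:/dev/vboxdrv \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  jenkins-devops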

We use the Centos7 base image for our Jenkins container, as we can trust the source origin, as much as is possible, and then we add Jenkins and all our tools on top in the Dockerfile:
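The full Dockerfile isn’t reproduced here; a trimmed sketch of its shape might be (most tool installs, including docker, vagrant, and virtualbox, are elided):

FROM centos:7

# Install Jenkins and base tooling as root
RUN yum install -y java-1.8.0-openjdk curl git wget && \
    wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat/jenkins.repo && \
    rpm --import https://pkg.jenkins.io/redhat/jenkins.io.key && \
    yum install -y jenkins

# Per-user setup runs as the jenkins user: nix, plus a pre-fetched vagrant box
USER jenkins
RUN curl https://nixos.org/nix/install | sh
# Assumes vagrant and virtualbox were installed in the elided steps above
RUN vagrant box add centos/7 --provider virtualbox

CMD ["java", "-jar", "/usr/lib/jenkins/jenkins.war"]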

Note that most of the above commands run as root; however, we need to set a few things as the jenkins user. This is necessary to have nix working properly for the jenkins user. In addition to installing nix, we also prepopulate the centos/7 vbox image into the container. This saves us time, as vagrant vbox jobs in jenkins do not have to download the box when run.

We have now set up various commonly used tools and can execute them within jenkins. We set the entrypoint for the container to launch Jenkins via the CMD directive, and we are done.

If all works properly, you should see the Jenkins GUI at http://localhost:8080.
A quick test job should show that all the above tools are now available to jenkins jobs.

There are other tool-specific Docker containers available, such as the Ansible container, which may better suit certain needs. Other containerization technologies, like libvirt, may be preferable, but Docker Hub makes up for any deficiencies through its ease of container image sharing.

This setup is only a good idea for development. In a production CI environment, Jenkins workers should execute the jobs. Each job would be tied to a specifically locked down worker, based on job requirements. Installing tools in production on the Jenkins Master should not be necessary (other than monitoring tools like CloudWatch Agent or SplunkForwarder). Care should be taken to minimize the master’s attack surface. Docker containers are not meant to run a full interactive environment, with many child processes, so in production it may be that a better Jenkins experience is to be had by running Jenkins natively on bare metal, or a VM like AWS EC2, as opposed to inside a container.

Thanks for reading,
@hackoflamb

Did you find this post interesting? Are you passionate about working with the latest AWS technologies? If you are, Stelligent is hiring and we would love to hear from you!

Introducing mu: a tool for managing your microservices in AWS

mu is a tool that Stelligent has created to make it simple and cost-efficient for developers to use AWS as the platform for running their microservices.  In this first post of the blog series focused on the mu tool, we will be introducing the motivation for the tool and demonstrating the deployment of a microservice with it.  

Why microservices?

The architectural pattern of decomposing an application into microservices has proven extremely effective at increasing an organization’s ability to deliver software faster.  This is due to the fact that microservices are independently deployable components that are decoupled from other components and highly cohesive around a single business capability.  Those attributes of a microservice yield smaller team sizes that are able to operate with a high level of autonomy to deliver what the business wants at the pace the market demands.

What’s the catch?

When teams begin their journey with microservices, they usually face cost duplication on two fronts:  infrastructure and re-engineering. The first duplication cost is found in the “infrastructure overhead” used to support the microservice deployment.  For example, if you are deploying your microservices on AWS EC2 instances, then for each microservice, you need a cluster of EC2 instances to ensure adequate capacity and tolerance to failures.  If a single microservice requires 12 t2.small instances to meet capacity requirements and we want to be able to survive an outage in 1 out of 4 availability zones, then we would need to run 16 instances total, 4 per availability zone.  This leaves an overhead cost of 4 t2.small instances.  Then multiply this cost by the number of microservices for a given application and it is easy to see that the overhead cost of microservices deployed in this manner can add up quickly.

Containers to the rescue!

An approach to addressing this challenge of overhead costs is to use containers for deploying microservices.  Each microservice would be deployed as a series of containers to a cluster of hosts that is shared by all microservices.  This allows for greater density of microservices on EC2 instances and allows the overhead to be shared by all microservices.  Amazon ECS (EC2 Container Service) provides an excellent platform for deploying microservices as containers.  ECS leverages many AWS services to provide a robust container management solution.  Additionally, a developer can use tools like CodeBuild and CodePipeline to create continuous delivery pipelines for their microservices.

That sounds complicated…

This approach leads to the second duplication cost of microservices: the cost of “reengineering”.  There is a significant learning curve for developers to learn how to use all these different AWS resources to deploy their microservices in an efficient manner.  If each team is using their autonomy to engineer a platform on AWS for their microservices then a significant level of engineering effort is being duplicated.  This duplication not only causes additional engineering costs, but also impedes a team’s ability to deliver the differentiating business capabilities that they were commissioned to do in the first place.

Let mu help!

To address these challenges, mu was created to simplify the declaration and administration of the AWS resources necessary to support microservices.  mu is a tool that a developer uses from their workstation to deploy their microservices to AWS quickly and efficiently as containers.  It codifies best practices for microservices, containers and continuous delivery pipelines into the AWS resources it creates on your behalf.  It does this from a simple CLI application that can be installed on the developer’s workstation in seconds.  Similar to how the Serverless Framework improved the developer experience of Lambda and API Gateway, this tool makes it easier for developers to use ECS as a microservices platform.

Additionally, mu does not require any servers, databases or other AWS resources to support itself.  All state information is managed via CloudFormation stacks.  It will only create resources (via CloudFormation) necessary to run your microservices.  This means at any point you can stop using mu and continue to manage the AWS resources that it created via AWS tools such as the CLI or the console.

Core components

The mu tool consists of three main components:

  • Environments – an environment includes a shared network (VPC) and cluster of hosts (ECS and EC2 instances) necessary to run microservices as containers.  The environments include the ability to automatically scale out or scale in based on resource requirements across all the microservices that are deployed to them.  Many environments can exist (e.g. development, staging, production)
  • Services – a microservice that will be deployed to a given environment (or environments) as a set of containers.
  • Pipeline – a continuous delivery pipeline that will manage the building, testing, and deploying of a microservice in the various environments.

mu-architecture

Installing and configuring mu

First let’s install mu:

$ curl -s http://getmu.io/install.sh | sh

If you’re appalled at the idea of curl | bash installers, then you can always just download the latest version directly.

mu will use the same mechanism as aws-cli to authenticate with the AWS services.  If you haven’t configured your AWS credentials yet, the easiest way to configure them is to install the aws-cli and then follow the aws configure instructions:

$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json

Setup your microservice

In order for mu to setup a continuous delivery pipeline for your microservice, you’ll need to run mu from within a git repo.  For this demo, we’ll be using the stelligent/banana-service repo for our microservice.  If you want to follow along and try this on your own, you’ll want to fork the repo and clone your fork.

Let’s begin with cloning the microservice repo:

$ git clone git@github.com:myuser/banana-service.git
$ cd banana-service

Next, we will initialize mu configuration for our microservice:

$ mu init --env
Writing config to '/Users/casey.lee/Dev/mu/banana-service/mu.yml'
Writing buildspec to '/Users/casey.lee/Dev/mu/banana-service/buildspec.yml'

We need to update the mu.yml that was generated with the URL paths that we want to route to this microservice and the CodeBuild image to use:

environments:
- name: dev
- name: production
service:
  name: banana-service
  port: 8080
  pathPatterns:
  - /bananas
  pipeline:
    source:
      provider: GitHub
      repo: myuser/banana-service
    build:
      image: aws/codebuild/java:openjdk-8

Next, we need to update the generated buildspec.yml to include the gradle build command:

version: 0.1
phases:
  build:
    commands:
      - gradle build
artifacts:
  files:
    - '**/*'

Finally, commit and push our changes:

$ git add --all && git commit -m "mu init" && git push

Create the pipeline

Make sure you have a GitHub token with repo and admin:repo_hook scopes to provide to the pipeline in order to integrate with your GitHub repo.  Then you can create the pipeline:

$ mu pipeline up
Upserting Bucket for CodePipeline
Upserting Pipeline for service 'banana-service' ...
  GitHub token: XXXXXXXXXXXXXXX

Now that the pipeline is created, it will build and deploy for every commit to your git repo.  You can monitor the status of the pipeline as it builds and deploys the microservice:

$ mu svc show

Pipeline URL:   https://console.aws.amazon.com/codepipeline/home?region=us-west-2#/view/mu-pipeline-banana-service-Pipeline-1B3A94CZR6WH
+------------+----------+------------------------------------------+-------------+---------------------+
|   STAGE    |  ACTION  |                 REVISION                 |   STATUS    |     LAST UPDATE     |
+------------+----------+------------------------------------------+-------------+---------------------+
| Source     | Source   | 1f1b09f0bbc3f42170b8d32c68baf683f1e3f801 | Succeeded   | 2017-04-07 15:12:35 |
| Build      | Artifact |                                        - | Succeeded   | 2017-04-07 15:14:49 |
| Build      | Image    |                                        - | Succeeded   | 2017-04-07 15:19:02 |
| Acceptance | Deploy   |                                        - | InProgress  | 2017-04-07 15:19:07 |
| Acceptance | Test     |                                        - | -           |                   - |
| Production | Approve  |                                        - | -           |                   - |
| Production | Deploy   |                                        - | -           |                   - |
| Production | Test     |                                        - | -           |                   - |
+------------+----------+------------------------------------------+-------------+---------------------+

Deployments:
+-------------+-------+-------+--------+-------------+------------+
| ENVIRONMENT | STACK | IMAGE | STATUS | LAST UPDATE | MU VERSION |
+-------------+-------+-------+--------+-------------+------------+
+-------------+-------+-------+--------+-------------+------------+

You can also monitor the build logs:

$ mu pipeline logs -f
[Container] 2017/04/07 22:25:43 Running command mu -c mu.yml svc deploy dev 
[Container] 2017/04/07 22:25:43 Upsert repo for service 'banana-service' 
[Container] 2017/04/07 22:25:43   No changes for stack 'mu-repo-banana-service' 
[Container] 2017/04/07 22:25:43 Deploying service 'banana-service' to 'dev' from '324320755747.dkr.ecr.us-west-2.amazonaws.com/banana-service:1f1b09f' 

Once the pipeline has completed deployment of the service, you can view logs from the service:

$ mu service logs -f dev                                                                                                                                                                         
  .   ____          _          __ _ _
 /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| | ) ) ) )
  ' | ____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/

:: Spring Boot ::        (v1.4.0.RELEASE) 
2017-04-07 22:30:08.788  INFO 5 --- [           main] com.stelligent.BananaApplication         : Starting BananaApplication on 6a4d5544d9de with PID 5 (/app.jar started by root in /) 
2017-04-07 22:30:08.824  INFO 5 --- [           main] com.stelligent.BananaApplication         : No active profile set, falling back to default profiles: default 
2017-04-07 22:30:09.342  INFO 5 --- [           main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@108c4c35: startup date [Fri Apr 07 22:30:09 UTC 2017]; root of context hierarchy 
2017-04-07 22:30:09.768  INFO 5 --- [           main] com.stelligent.BananaApplication         : Starting BananaApplication on 7818361f6f45 with PID 5 (/app.jar started by root in /) 

Testing the service

Finally, we can get the information about the ELB endpoint in the dev environment to test the service:

$ mu env show dev                                                                                                                                                                        

Environment:    dev
Cluster Stack:  mu-cluster-dev (UPDATE_COMPLETE)
VPC Stack:      mu-vpc-dev (UPDATE_COMPLETE)
Bastion Host:
Base URL:       http://mu-cl-EcsEl-1K74542METR82-1781937931.us-west-2.elb.amazonaws.com
Container Instances:
+---------------------+----------+--------------+------------+-----------+--------+---------+-----------+-----------+
|    EC2 INSTANCE     |   TYPE   |     AMI      |     AZ     | CONNECTED | STATUS | # TASKS | CPU AVAIL | MEM AVAIL |
+---------------------+----------+--------------+------------+-----------+--------+---------+-----------+-----------+
| i-093b788b4f39dd14b | t2.micro | ami-62d35c02 | us-west-2a | true      | ACTIVE |       3 |       604 |       139 |
+---------------------+----------+--------------+------------+-----------+--------+---------+-----------+-----------+

Services:
+----------------+---------------------------------------------------------------------+------------------+---------------------+
|    SERVICE     |                                IMAGE                                |      STATUS      |     LAST UPDATE     |
+----------------+---------------------------------------------------------------------+------------------+---------------------+
| banana-service | 324320755747.dkr.ecr.us-west-2.amazonaws.com/banana-service:1f1b09f | CREATE_COMPLETE  | 2017-04-07 15:25:43 |
+----------------+---------------------------------------------------------------------+------------------+---------------------+


$ curl -s http://mu-cl-EcsEl-1K74542METR82-1781937931.us-west-2.elb.amazonaws.com/bananas | jq

[
  {
    "pickedAt": "2017-04-10T10:34:27.911",
    "peeled": false,
    "links": [
      {
        "rel": "self",
        "href": "http://mu-cl-EcsEl-1K74542METR82-1781937931.us-west-2.elb.amazonaws.com/bananas/1"
      }
    ]
  }
]

Cleanup

To cleanup the resources that mu created, run the following commands:

$ mu pipeline term
$ mu env term dev
$ mu env term production

Conclusion

As you can see, mu addresses infrastructure and engineering overhead costs associated with microservices.  It makes deployment of microservices via containers simple and cost-efficient.  Additionally, it ensures the deployments are repeatable and non-dramatic by utilizing a continuous delivery pipeline for orchestrating the flow of software changes into production.

In the upcoming posts in this blog series, we will look into:

  • Test Automation –  add test automation to the continuous delivery pipeline with mu
  • Custom Resources –  create custom resources like DynamoDB with mu during our microservice deployment
  • Service Discovery – use mu to enable service discovery via Consul to allow for inter-service communication
  • Additional Use Cases – deploy applications other than microservices via mu, like a wordpress stack

Until then, head over to stelligent/mu on GitHub and get started.  Keep in touch with us in our Gitter room and share your feedback!

Additional Resources

Did you find this post interesting? Are you passionate about working with the latest AWS technologies? If you are, Stelligent is hiring and we would love to hear from you!

DevOps in AWS Radio: Parameter Store with AWS CodePipeline (Episode 6)

In this episode, Paul Duvall and Brian Jakovich are joined by Trey McElhattan from Stelligent to cover recent DevOps in AWS news and speak about Using Parameter Store with AWS CodePipeline.

Here are the show notes:

DevOps in AWS News

Episode Topics

  1. What is Parameter Store? What are its key features?
  2. Which AWS tools does it use?
  3. What are the alternatives to Parameter store?
  4. What are the benefits of using Parameter Store versus alternatives?
  5. Can you use Parameter Store if you’re not using AWS services and tools?
  6. Which are supported: SDKs, CLI, Console, CloudFormation, etc.?
  7. What are the different ways to store strings using Parameter Store?
  8. What’s the pricing model for Parameter Store?
  9. How do you use Parameter Store in the context of a deployment pipeline – and, specifically, with AWS CodePipeline?

Related Blog Posts

  1. Using Parameter Store with AWS CodePipeline
  2. Managing Secrets for Amazon ECS Applications Using Parameter Store and IAM Roles for Tasks
  3. Overview of Parameter Store

About DevOps in AWS Radio

On DevOps in AWS Radio, we cover topics around applying DevOps principles and practices such as Continuous Delivery in the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners into our roundtables and speak with engineers who’ve recently published on our blog and we’ll also be reaching out to the wider DevOps in AWS community to get their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, Containers, Serverless, etc.)
    • Storage (e.g. S3, EBS, etc.)
    • Database and Data (RDS, DynamoDB, etc.)
  • Organizational and Team Structures and Practices
  • Team and Organization Communication and Collaboration
  • Cultural Indicators
  • Version control systems and processes
  • Deployment Pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service Architectures – e.g. Microservices
  • Automation of Build and deployment processes
  • Automation of testing and other verification approaches, tools and systems
  • Automation of security practices and approaches
  • Continuous Feedback systems
  • Many other Topics…

HOSTING Acquires Stelligent

It’s a time of celebration in the Stelligent community – HOSTING, the Denver-based cloud managed services provider, has announced its agreement to acquire us. You can view the press release here. Moving forward, we’ll be known as Stelligent, a Division of HOSTING.

Adding to the wonderful news is that Stelligent co-founder Paul Duvall has been named HOSTING CTO, while our other co-founder, Rob Daly, has been named Executive Vice President of Public Cloud Consulting at HOSTING. Both will continue working out of our east coast offices.

So what does this acquisition mean for you?

Simply, we will be able to do more and serve you better. The combination of DevOps Automation with managed services represents an advanced model of solution delivery within the public cloud. Now, the same expert team that develops, migrates and automates your application will also manage, monitor and secure it. As you have come to expect, your end-to-end solution will be architected and implemented by experts in leveraging the full capabilities and business benefits of AWS.

Interested in learning more?

Please click here for FAQs on the acquisition or feel free to reach out to Paul.Duvall@Stelligent.com or Rob.Daly@Stelligent.com for more information on this new chapter.

Using Parameter Store with AWS CodePipeline

Systems Manager Parameter Store is a managed service (part of AWS EC2 Systems Manager (SSM)) that provides a convenient way to efficiently and securely get and set commonly used configuration data across multiple resources in your software delivery lifecycle.


In this post, we will be focusing on the basic usage of Parameter Store and how to effectively use it as part of a continuous delivery pipeline using AWS CodePipeline. The following describes some of the capabilities of Parameter Store and the resources with which they can be used:

  • Managed Service: Parameter Store is managed by AWS. This means that you won’t have to put in the engineering work to set up something like Vault, Zookeeper, etc. just to store the configuration that your application/service needs.
  • Access Controls: Through the use of AWS Identity and Access Management (IAM), access to Parameter Store can be limited by enabling or restricting access to the service itself, or by enabling or restricting access to particular parameters (see the sample policy sketch after this list).
  • Encryption: Parameter Store gives a user the ability to also encrypt parameters using the AWS Key Management Service (KMS). When creating a parameter, you can specify that the parameter is encrypted with a KMS key.
  • Audit: All calls to Parameter Store are tracked and recorded in AWS CloudTrail so they can be audited.
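As a sketch of the access-control point, an IAM policy that only allows reading a single parameter might look like this (the account id is a placeholder; the parameter name matches the examples below):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ssm:GetParameters",
      "Resource": "arn:aws:ssm:us-east-1:123456789012:parameter/HostedZoneName"
    }
  ]
}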

At the end of this post, you will be able to launch an example solution via AWS CloudFormation.

Working with Parameter Store

Prerequisites

In order to follow the examples below, you’ll need to have the AWS CLI setup on your local workstation. You can find a guide to install the AWS CLI here.

Creating a Parameter in the Parameter Store

To manually create a parameter in the Parameter Store, there are a few easy steps to follow:

  1. The user must sign into their AWS account and go to the EC2 console.
  2. Under the Systems Manager Shared Resources section click on Parameter Store.
  3. Click Get Started Now or Create Parameter and input the following information:
    1. Name: The name that you want the parameter to be called
    2. Description(optional): A description of what the parameter does or contains
    3. Type: You can choose either a String, String List, or Secure String
  4. Click Create Parameter and it will bring you to the Parameter Store console where you can see your newly created parameter

To create a parameter using the AWS CLI, here are examples of creating a String, SecureString, and String List:

String:

 aws ssm put-parameter --name "HostedZoneName" --type "String" --value "stelligent.com."

StringList:

aws ssm put-parameter --name "HostedZoneNames" --type "StringList" --value "stelligent.com.,google.com.,amazon.com."

SecureString:

 aws ssm put-parameter --name "Password" --type "SecureString" --value "Password123$"


Getting Parameter Values using the AWS CLI

To get a String, StringList, or SecureString parameter from the Parameter Store using the AWS CLI, you must use the following syntax in your terminal:

String:

aws ssm get-parameters --names "HostedZoneName"

The output in your terminal would look like this:

{
    "InvalidParameters": [], 
    "Parameters": [
        {
            "Type": "String", 
            "Name": "HostedZoneName", 
            "Value": "stelligent.com."
        }
    ]
}

StringList:

 aws ssm get-parameters --names "HostedZoneNames"

The output in your terminal would look like this:

{
    "InvalidParameters": [], 
    "Parameters": [
        {
            "Type": "StringList", 
            "Name": "HostedZoneNames", 
            "Value": "stelligent.com.,google.com.,amazon.com."
        }
    ]
}

SecureString:

aws ssm get-parameters --names "Password"

The output in your terminal would look like this (the value of the parameter is encrypted):

{
    "InvalidParameters": [], 
    "Parameters": [
        {
            "Type": "SecureString", 
            "Name": "Password", 
            "Value": "AQECAHicQXIA+CERB7LyH8+YXXUK1vqiI87oM0Wq7kgMCmGqUQAAAGowaAYJKoZIhvcNAQcGoFswWQIBADBUBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDE0kvmQLY6Ertt5BGwIBEIAnlfTl1XxzRwUzkFCBYn8P0lJ6dOdjPNQNbYgjD1+KTk/SlNJznvrF"
        }
    ]
}


Deleting a Parameter from the Parameter Store

To delete a parameter from the Parameter Store manually, you must use the following steps:

  1. Sign into your AWS account and go to the EC2 console.
  2. Under the Systems Manager Shared Resources section click on the Parameter Store tab.
  3. Select the parameter that you wish to delete
  4. Click the Actions button and select Delete Parameter from the menu

To delete a parameter using the AWS CLI, use the following syntax in your terminal (this works for String, StringList, and SecureString):

aws ssm delete-parameter --name "HostedZoneName"
aws ssm delete-parameter --name "HostedZoneNames"
aws ssm delete-parameter --name "Password"

Using Parameter Store in AWS CodePipeline

Parameter Store can be very useful when constructing and running a deployment pipeline. Parameter Store can be used alongside a simple token/replace script to dynamically generate configuration files without having to manually modify those files. This is useful because you can pass frequently used pieces of configuration data through easily and efficiently as part of a continuous delivery process.


In this example, we have a deployment pipeline modeled via AWS CodePipeline that consists of two stages: a Source stage and a Build stage.

First, let’s take a look at the Source stage.

MyCodePipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      ArtifactStore:
        Location: !Ref S3Bucket
        Type: S3
      RoleArn: !GetAtt [CodePipelineRole, Arn]
      Stages:
        - Name: Source
          Actions:
          - Name: GitHubSource
            ActionTypeId:
              Category: Source
              Owner: ThirdParty
              Provider: GitHub
              Version: 1
            OutputArtifacts:
              - Name: OutputArtifact
            Configuration:
              Owner: stelligent
              Repo: parameter-store-example
              Branch: master
              OAuthToken: !Ref GitHubToken

As part of the Source stage, the pipeline will get source assets from the GitHub repository, which contains the configuration file to be modified, along with a Ruby script that will get the parameters from the Parameter Store and replace the variable tokens inside of the configuration file.

After the Source stage completes, there’s a Build stage, where we’ll be doing all of the actual work to modify our configuration file.

The Build stage uses the CodeBuild project (defined as the ConfigFileBuild action) to run the Ruby script that will modify the configuration file and replace the variable tokens inside of it with the requested parameters from the Parameter Store.

  ConfigFileBuild:
    Type: AWS::CodeBuild::Project
    Properties:
      Name: !Ref AWS::StackName
      Description: Changes sample configuration file
      ServiceRole: !GetAtt CodeBuildRole.Arn
      Artifacts:
        Type: CODEPIPELINE
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_LARGE
        Image: aws/codebuild/eb-ruby-2.3-amazonlinux-64:2.1.6
      Source:
        Type: CODEPIPELINE
        BuildSpec: |
          version: 0.1
          phases:
            pre_build:
              commands:
                - gem install aws-sdk
            build:
              commands:
                - ruby sample_ruby_ssm.rb
          artifacts:
            files:
              - '**/*'

The CodeBuild project contains the buildspec that actually runs the  Ruby script that will be making the configuration changes (sample_ruby_ssm.rb).


Here is what the Ruby script looks like:

require 'aws-sdk'

client = Aws::SSM::Client.new(region: 'us-east-1')
resp = client.get_parameters({
  names: ["HostedZoneName", "Password"], # required
  with_decryption: true,
})

hostedzonename = resp.parameters[0].value
password = resp.parameters[1].value

file_names = ['sample_ssm_config.json']

file_names.each do |file_name|
  text = File.read(file_name)

  # Display text for usability
  puts text

  # Substitute Variables
  new_contents = text.gsub(/HOSTEDZONE/, hostedzonename)
  new_contents = new_contents.gsub(/PASSWORD/, password)

  # To write changes to the file, use:
  File.open(file_name, "w") {|file| file.puts new_contents.to_s }
end

Here is what the configuration file with the variable tokens (HOSTEDZONE, PASSWORD) looks like before it gets modified:

{
  "Parameters" : {
    "HostedZoneName" : "HOSTEDZONE",
    "Password" : "PASSWORD"
  }
}

Here is what the configuration file would consist of after the Ruby script pulls the requested parameters from the Parameter Store and replaces the variable tokens (HOSTEDZONE, PASSWORD). The Password parameter is being decrypted through the ruby script in this process.

{
  "Parameters" : {
    "HostedZoneName" : "stelligent.com.",
    "Password" : "Password123$"
  }
}

IMPORTANT NOTE: In the example above, you can see that the "Password" parameter is returned in plain text (Password123$). That happens because when this Ruby script runs, it returns the secure string parameter with the decrypted value (with_decryption: true). The purpose of showing the example this way is purely to illustrate what returning multiple parameters into a configuration file looks like. In a real-world situation you would never want a password displayed in plain text, because that presents security issues and is bad practice in general. To return the "Password" parameter value in its encrypted form, simply modify the Ruby script on the 6th line and change "with_decryption: true" to "with_decryption: false". Here is what the modified configuration file would look like with the "Password" parameter returned in its encrypted form:

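A sketch of that file, reusing (and truncating) the encrypted value from the earlier get-parameters output; your ciphertext will differ:

{
  "Parameters" : {
    "HostedZoneName" : "stelligent.com.",
    "Password" : "AQECAHicQXIA+CERB7LyH8+YXXUK1vqiI87oM0Wq7kgMCmGqUQ..."
  }
}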

Launch the Solution via CloudFormation

To run this deployment pipeline and see Parameter Store in action, you can click the “Launch Stack” button below which will take you directly to the CloudFormation console within your AWS account and load the CloudFormation template. Walk through the CloudFormation wizard to launch the stack. 

In order to be able to execute this pipeline you must have the following:

  • The AWS CLI already installed on your local workstation. You can find a guide to install the AWS CLI here
  • A generated GitHub Oauth token for your GitHub user. Instructions on how to generate an Oauth token can be found here
  • In order for the Ruby script that is part of this pipeline process to run, you must create these two parameters in your Parameter Store:
    1. HostedZoneName
    2. Password
aws ssm put-parameter --name "HostedZoneName" --type "String" --value "stelligent.com."
aws ssm put-parameter --name "Password" --type "SecureString" --value "Password123$"

As you begin to launch the pipeline in CloudFormation (Launch Stack button is located below), you will be prompted to enter this one parameter:

  • GitHubToken (Your generated GitHub Oauth token)

Once you have passed in this initial parameter, you can begin to launch the pipeline that will make use of the Parameter Store.

NOTE: You will be charged for your CodePipeline and CodeBuild usage.

Once the stack is CREATE_COMPLETE, click on the Outputs tab and then the value for the CodePipelineUrl output to view the pipeline in CodePipeline.

Additional Resources

Here are some additional resources you might find useful:

Summary

In this post, you learned how to use the EC2 Systems Manager Parameter Store and some of its features. You learned how to create, delete, get, and set parameters manually as well as through the use of the AWS CLI. You also learned how to use the Parameter Store in a practical situation by incorporating it in the process of setting configuration data that is used as part of a CodePipeline continuous delivery pipeline.

Sample Code

The code for the examples demonstrated in this post is located at https://github.com/stelligent/parameter-store-example. Let us know if you have any comments or questions @stelligent or @TreyMcElhattan

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “one-button everything” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

Docker Swarm Mode on AWS

Docker Swarm Mode is the latest entrant in a large field of container orchestration systems. Docker Swarm was originally released as a standalone product that ran master and agent containers on a cluster of servers to orchestrate the deployment of containers. This changed with the release of Docker 1.12 in July of 2016. Docker Swarm Mode is now officially part of docker-engine, and built right into every installation of Docker. Swarm Mode brought many improvements over the standalone Swarm product, including:

  • Built-in Service Discovery: Docker Swarm originally included drivers to integrate with Consul, etcd or Zookeeper for the purposes of Service Discovery. However, this required the setup of a separate cluster dedicated to service discovery. The Swarm Mode manager nodes now assign a unique DNS name to each service in the cluster, and load balances between the running containers in those services.
  • Mesh Routing: One of the most unique features of Docker Swarm Mode is  Mesh Routing. All of the nodes within a cluster are aware of the location of every container within the cluster via gossip. This means that if a request arrives on a node that is not currently running the service for which that request was intended, the request will be routed to a node that is running a container for that service. This makes it so that nodes don’t have to be purpose built for specific services. Any node can run any service, and every node can be load balanced equally, reducing complexity and the number of resources needed for an application.
  • Security: Docker Swarm Mode uses TLS encryption for communication between services and nodes by default.
  • Docker API: Docker Swarm Mode utilizes the same API that every user of Docker is already familiar with. No need to install or learn additional software.
  • But wait, there’s more! Check out some of the other features at Docker’s Swarm Mode Overview page.

For companies facing increasing complexity in Docker container deployment and management, Docker Swarm Mode provides a convenient, cost-effective, and performant tool to meet those needs.

Creating a Docker Swarm cluster


For the sake of brevity, I won’t reinvent the wheel and go over manual cluster creation here. Instead, I encourage you to follow the fantastic tutorial on Docker’s site.

What I will talk about however is the new Docker for AWS tool that Docker recently released. This is an AWS Cloudformation template that can be used to quickly and easily set up all of the necessary resources for a highly available Docker Swarm cluster, and because it is a Cloudformation template, you can edit the template to add any additional resources, such as Route53 hosted zones or S3 buckets to your application.

One of the very interesting features of this tool is that it dynamically configures the listeners for your Elastic Load Balancer (ELB). Once you deploy a service on Docker Swarm, the built-in management service that is baked into instances launched with Docker for AWS will automatically create a listener for any published ports for your service. When a service is removed, that listener will subsequently be removed.

If you want to create a Docker for AWS stack, read over the list of prerequisites, then click the Launch Stack button below. Keep in mind you may have to pay for any resources you create. If you are deploying Docker for AWS into an older account that still has EC2-Classic, or wish to deploy Docker for AWS into an existing VPC, read the FAQ here for more information.


Deploying a Stack to Docker Swarm

With the release of Docker 1.13 in January of 2017, major enhancements were added to Docker Swarm Mode that greatly improved its ease of use. Docker Swarm Mode now integrates directly with Docker Compose v3 and officially supports the deployment of “stacks” (groups of services) via docker-compose.yml files. With the new properties introduced in Docker Compose v3, it is possible to specify node affinity via tags, rolling update policies, restart policies, and desired scale of containers. The same docker-compose.yml file you would use to test your application locally can now be used to deploy to production. Here is a sample service with some of the new properties:

version: "3"
services:

  vote:
    image: dockersamples/examplevotingapp_vote:before
    ports:
      - 5000:80
    networks:
      - frontend
    deploy:
      replicas: 2
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.role == worker]
  
networks:
  frontend:

While most of the properties within this YAML structure will be familiar to anyone used to Docker Compose v2, the deploy property is new to v3. The replicas field indicates the number of containers to run within the service. The update_config  field tells the swarm how many containers to update in parallel and how long to wait between updates. The restart_policy field determines when a container should be restarted. Finally, the placement field allows container affinity to be set based on tags or node properties, such as Node Role. When deploying this docker-compose file locally, using docker-compose up, the deploy properties are simply ignored.

Deployment of a stack is incredibly simple. Follow these steps to download Docker’s example voting app stack file and run it on your cluster.

SSH into any one of your Manager nodes with the user 'docker' and the EC2 Keypair you specified when you launched the stack.

curl -O https://raw.githubusercontent.com/docker/example-voting-app/master/docker-stack.yml

docker stack deploy -c docker-stack.yml vote

You should now see Docker creating your services, volumes and networks. Now run the following command to view the status of your stack and the services running within it.

docker stack ps vote

You’ll get output similar to this:

counter_api_-_root_ip-10-0-0-114___home_ubuntu_-_ssh_-i____ssh_jim-labs-ohio_pem_docker_52_14_83_166_-_214x43

This shows the container ID, container name, container image, the node the container is currently running on, its desired and current state, and any errors that may have occurred. As you can see, the vote_visualizer.1 container failed at runtime, so it was shut down and a new container was spun up to replace it.

This sample application opens up three ports on your Elastic Load Balancer (ELB): 5000 for the voting interface, 5001 for the real-time vote results interface, and 8080 for the Docker Swarm visualizer. You can find the DNS name of your ELB either on the EC2 Load Balancers page of the AWS console, or on the Outputs tab of your stack in the CloudFormation console. Here is an example of the CloudFormation Outputs tab:

[Screenshot: CloudFormation Outputs tab]

DefaultDNSTarget is the URL you can use to access your application.

If you access the Visualizer on port 8080, you will see an interface similar to this:

[Screenshot: Docker Swarm visualizer]

This is a handy tool to see which containers are running and on which nodes.

Scaling Services

Scaling services is as simple as running docker service scale SERVICENAME=REPLICAS. For example:

docker service scale vote_vote=3

will scale the vote service to 3 containers, up from 2. Because Docker Swarm routes requests over an overlay network, it can run multiple containers of the same service on the same node, so you can scale your services as high as your CPU and memory allocations will allow.
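To verify the change, list your services; the REPLICAS column shows the current versus desired container counts:

docker service ls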

Updating Stacks

If you make any changes to your docker-compose file, updating your stack is incredibly easy. Simply run the same command you used to create your stack:

docker stack deploy -c docker-stack.yml vote

Docker Swarm will update any services that changed from the previous version, adhering to any update_config specified in the docker-compose file. For the vote service defined above, only one container will be updated at a time, with a 10-second delay after the first container updates successfully before the second one begins.
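To watch a rolling update in progress, and to confirm the update policy a service is using, these two commands are handy:

docker service ps vote_vote

docker service inspect --pretty vote_vote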

Next Steps

This was just a brief overview of the capabilities of Docker Swarm Mode in Docker 1.13. For further reading, feel free to explore the Docker Swarm Mode and Docker Compose docs. In another post, I’ll be going over some of the advantages and disadvantages of Docker Swarm Mode compared to other container orchestration systems, such as ECS and Kubernetes.

If you have any experiences with Docker Swarm Mode that you would like to share, or have any questions on any of the materials presented here, please leave a comment below!

References

Docker Swarm Mode

Docker for AWS

Docker Example Voting App Github

Introduction to Amazon Lightsail

At re:Invent 2016, Amazon introduced Lightsail, the newest addition to the list of AWS Compute Services. It is a quick and easy way to launch a virtual private server within AWS.

As someone who moved into the AWS world from an application development background, this sounded pretty interesting to me. Getting started with deploying an app can be tricky, especially if you want to do it with code and scripting rather than through the web console. CloudFormation is an incredible tool, but I can’t be the only developer to look at the user guide for deploying an application and then decide that doing my demo from localhost wasn’t such a bad option. There is a lot to learn, and it can be frustrating: you just want to get your app up and running, but before you can even start working on that, you have to figure out how to create your VPC correctly.

Lightsail takes care of that for you. The required configuration is minimal: you pick a region, an allocation of compute resources (memory, CPU, and storage), and an image to start from. There are even images tailored to common developer setups, so it is possible to just log in, download your source code, and be off to the races.

No, you can’t use Lightsail to deploy a highly available, load-balanced application, but if you are new to working with AWS it is a great way to get a feel for what you can do without being overwhelmed by all the possibilities. Plus, once you get the hang of it, you have a better foundation for branching out to some of the more comprehensive solutions offered by Amazon.

Deploying a Node App From Github

Let’s look at a basic example. We have source code in GitHub, and we want it deployed on the internet. Yes, this is the monumental kind of challenge that I like to tackle every day.

Setup

You will need:

  • An AWS Account
    • At this time Lightsail is only supported in us-east-1, so you will have to try it out there.
  • The AWS Command Line Interface
    • The Lightsail CLI commands are relatively new, so please make sure you are updated to the latest version (see the snippet after this list).
  • About ten minutes
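If you installed the CLI with pip, this is one way to upgrade it and confirm that the Lightsail commands are available:

pip install --upgrade awscli

aws lightsail help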

Step 1 – Create an SSH Key

First of all, let’s create an SSH key for connecting to our instance. This is not required, since Lightsail has a default key you can use, but it is generally better to avoid using shared keys.
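Here is a minimal sketch using the Lightsail CLI; the key pair name is just an example. The privateKeyBase64 field in the response holds the private key material, which we can write straight to a file:

aws lightsail create-key-pair \
  --key-pair-name lightsail-blog-demo \
  --query privateKeyBase64 \
  --output text > lightsail-blog-demo.pem

chmod 400 lightsail-blog-demo.pem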

Step 2 – Create a User Data Script

The user data script is how you give instructions to AWS to tailor an instance to your needs. It can be as complex or as simple as you want it to be. In this case we want our instance to run a Node application that lives in GitHub. We are going to use Amazon Linux for our instance, so we need to install Node and Git, then pull the app down from GitHub.

Take the following snippet and save it as userdata.sh. Feel free to modify it if you have a different app you would like to try deploying.
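This sketch assumes a Node app in a placeholder GitHub repository; the NodeSource script installs Node.js 6 on Amazon Linux:

#!/bin/bash
# Install Git, then Node.js from the NodeSource repository (Amazon Linux)
yum install -y git
curl --silent --location https://rpm.nodesource.com/setup_6.x | bash -
yum install -y nodejs
# Clone the app (placeholder URL) and start it
cd /home/ec2-user
git clone https://github.com/your-user/your-node-app.git app
cd app
npm install
npm start &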

A user data script is not the only option here. Lightsail supports a variety of preconfigured images; for example, there is a WordPress image, so if you needed a WordPress server you wouldn’t have to do anything but launch it. It also supports instance snapshots, so you could start an instance, log in, do any necessary configuration manually, and then save that snapshot for future use.

That being said, once you start to move beyond what Lightsail provides you will find yourself working with instance user data for a variety of purposes, so it is nice to get a feel for it with some basic examples.

Step 3 – Launch an Instance

Next we have to actually create the instance. We simply call create-instances, referring to the key and the user data script we just made.
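A sketch of that call follows; the instance name, zone, blueprint, and bundle are example values (you can list valid blueprint and bundle IDs with aws lightsail get-blueprints and aws lightsail get-bundles):

aws lightsail create-instances \
  --instance-names lightsail-blog-demo \
  --availability-zone us-east-1a \
  --blueprint-id amazon_linux \
  --bundle-id nano_1_0 \
  --user-data file://userdata.sh \
  --key-pair-name lightsail-blog-demo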

This command has several parameters, so let’s run through them quickly:

  • instance-names – Required: This is the name for your server.
  • availability-zone – Required: An availability zone is an isolated datacenter within a region. Since Lightsail isn’t concerned with high-availability deployments, we just have to choose one.
  • blueprint-id – Required: The blueprint is the reference to the server image.
  • bundle-id – Required: The set of specs that describe your server.
  • [user-data] – Optional: The contents of the script we created above; the CLI can read it from a file with the file:// prefix. If no script is specified, your instance will have the functionality provided by the blueprint, but no capabilities tailored to your needs.
  • [key-pair-name] – Optional: This is the private key we will use to connect to the instance. If this is not specified, there is a default key available through the Lightsail console.

It will take about a minute for your instance to be up and running. If you have the web console open, you can see when it is ready:
[Screenshot: Lightsail console showing the instance starting up, then running]
Once we are running, it is time to check out our application…
[Screenshot: browser failing to load the application]

Or not.

Step 4 – Troubleshooting

Let’s log in and see where things went wrong. We will need the key we created in the first step and the IP address of our instance. You can see that in the web console, or pull the instance data with another CLI command.
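For example, with the hypothetical instance name and key from the earlier steps:

aws lightsail get-instance \
  --instance-name lightsail-blog-demo \
  --query instance.publicIpAddress \
  --output text

ssh -i lightsail-blog-demo.pem ec2-user@<instance-ip>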

When you are troubleshooting any sort of AWS virtual server, a good place to start is checking out /var/log/cloud-init-output.log.


cloud-init-output.log contains the output from the instance launch commands: both the commands run by Amazon to configure the instance and any commands from your user data script. Let’s take a look…

[Screenshot: cloud-init-output.log showing the app starting up]

OK… that actually looks like it started the app correctly. So what is the problem? Well, if you looked at the application linked above and actually read the README (which, frankly, sounds exhausting), you probably already know…
[Screenshot: the application’s README]

If we take a look at the firewall settings for the instance’s networking, the problem is apparent:

[Screenshot: Lightsail instance firewall settings]

Step 5 – Update The Instance Firewall

We can fix this! AWS manages the VPC your instance is deployed in, but you still have the ability to control access. We just have to open the port our application is listening on.
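With the CLI that is a single call; the port here is a placeholder for whichever port your app listens on:

aws lightsail open-instance-public-ports \
  --instance-name lightsail-blog-demo \
  --port-info fromPort=3000,toPort=3000,protocol=tcp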

Then if we reload the page: Success!

[Screenshot: the application loading successfully]

Wrapping Up

That’s it, you are deployed! If you’re familiar with Heroku, you are probably not particularly impressed right now. But if you have tried to use AWS to script out a simple app deployment in the past and got fed up the third time your CloudFormation stack rolled back because your subnets were configured incorrectly, I encourage you to give Lightsail a shot.

Did you find this post interesting? Are you passionate about working with the latest AWS technologies? If you are, Stelligent is hiring and we would love to hear from you!

More Info

Lightsail Announcement

Full Lightsail CLI Reference

DevOps in AWS Radio: AWS CodeBuild (Episode 5)

In this episode, Paul Duvall and Brian Jakovich from Stelligent cover recent DevOps in AWS news and speak about the release of AWS CodeBuild and how you can integrate the service with other services on AWS.

Here are the show notes:

DevOps in AWS News

Episode Topics

  1. Benefits of using AWS CodeBuild along with alternatives
  2. Which programming languages/platforms and operating systems does CodeBuild support?
  3. What’s the pricing model?
  4. How does the buildspec.yml file work?
  5. How do you run CodeBuild (SDK, CLI, Console, CloudFormation)?
  6. Any Jenkins integrations?
  7. Which source providers does CodeBuild support?
  8. How to integrate CodeBuild with the rest of Developer Tools Suite

Related Blog Posts

  1. An Introduction to AWS CodeBuild
  2. Deploy to Production using AWS CodeBuild and the AWS Developer Tools Suite

About DevOps in AWS Radio

On DevOps in AWS Radio, we cover topics around applying DevOps principles and practices such as Continuous Delivery in the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners into our roundtables and speak with engineers who’ve recently published on our blog and we’ll also be reaching out to the wider DevOps in AWS community to get their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, Containers, Serverless, etc.)
    • Storage (e.g. S3, EBS, etc.)
    • Database and Data (RDS, DynamoDB, etc.)
  • Organizational and Team Structures and Practices
  • Team and Organization Communication and Collaboration
  • Cultural Indicators
  • Version control systems and processes
  • Deployment Pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service Architectures – e.g. Microservices
  • Automation of build and deployment processes
  • Automation of testing and other verification approaches, tools and systems
  • Automation of security practices and approaches
  • Continuous Feedback systems
  • Many other topics…