Blog

DevOps in AWS Radio: Parameter Store with AWS CodePipeline (Episode 6)

In this episode, Paul Duvall and Brian Jakovich are joined by Trey McElhattan from Stelligent to cover recent DevOps in AWS news and speak about Using Parameter Store with AWS CodePipeline.

Here are the show notes:

DevOps in AWS News

Episode Topics

  1. What is Parameter Store? What are its key features?
  2. Which AWS tools does it use?
  3. What are the alternatives to Parameter Store?
  4. Benefits of using Parameter Store versus the alternatives?
  5. Can you use Parameter Store if you’re not using AWS services and tools?
  6. Which are supported: SDKs, CLI, Console, CloudFormation, etc.?
  7. What are the different ways to store strings using Parameter Store?
  8. What’s the pricing model for Parameter Store?
  9. How do you use Parameter Store in the context of a deployment pipeline – and, specifically, with AWS CodePipeline?

Related Blog Posts

  1. Using Parameter Store with AWS CodePipeline
  2. Managing Secrets for Amazon ECS Applications Using Parameter Store and IAM Roles for Tasks
  3. Overview of Parameter Store

About DevOps in AWS Radio

On DevOps in AWS Radio, we cover topics around applying DevOps principles and practices such as Continuous Delivery in the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners into our roundtables and speak with engineers who’ve recently published on our blog and we’ll also be reaching out to the wider DevOps in AWS community to get their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, Containers, Serverless, etc.)
    • Storage (e.g. S3, EBS, etc.)
    • Database and Data (RDS, DynamoDB, etc.)
  • Organizational and Team Structures and Practices
  • Team and Organization Communication and Collaboration
  • Cultural Indicators
  • Version control systems and processes
  • Deployment Pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service Architectures – e.g. Microservices
  • Automation of Build and deployment processes
  • Automation of testing and other verification approaches, tools and systems
  • Automation of security practices and approaches
  • Continuous Feedback systems
  • Many other Topics…

HOSTING Acquires Stelligent

It’s a time of celebration in the Stelligent community – HOSTING, the Denver-based cloud managed services provider, has announced its agreement to acquire us. You can view the press release here. Moving forward, we’ll be known as Stelligent, a Division of HOSTING.

Adding to the wonderful news is that Stelligent co-founder Paul Duvall has been named HOSTING CTO, while our other co-founder, Rob Daly, has been named Executive Vice President of Public Cloud Consulting at HOSTING. Both will continue working out of our east coast offices.

So what does this acquisition mean for you?

Simply, we will be able to do more and serve you better. The combination of DevOps Automation with managed services represents an advanced model of solution delivery within the public cloud. Now, the same expert team that develops, migrates and automates your application will also manage, monitor and secure it. As you have come to expect, your end-to-end solution will be architected and implemented by experts in leveraging the full capabilities and business benefits of AWS.

Interested in learning more?

Please click here for FAQs on the acquisition or feel free to reach out to Paul.Duvall@Stelligent.com or Rob.Daly@Stelligent.com for more information on this new chapter.

Using Parameter Store with AWS CodePipeline

Systems Manager Parameter Store is a managed service (part of Amazon EC2 Systems Manager (SSM)) that provides a convenient way to efficiently and securely get and set commonly used configuration data across multiple resources in your software delivery lifecycle.


In this post, we will be focusing on the basic usage of Parameter Store and how to effectively use it as part of a continuous delivery pipeline using AWS CodePipeline. The following describes some of the capabilities of Parameter Store and the resources with which they can be used:

  • Managed Service: Parameter Store is managed by AWS. This means that you won’t have to put in the engineering work to set up something like Vault, ZooKeeper, etc. just to store the configuration that your application/service needs.
  • Access Controls: Through the use of AWS Identity and Access Management (IAM), access to Parameter Store can be limited by enabling or restricting access to the service itself, or by enabling or restricting access to particular parameters.
  • Encryption: Parameter Store also gives you the ability to encrypt parameters using the AWS Key Management Service (KMS). When creating a parameter, you can specify that it be encrypted with a KMS key.
  • Audit: All calls to Parameter Store are tracked and recorded in AWS CloudTrail so they can be audited.

At the end of this post, you will be able to launch an example solution via AWS CloudFormation.

Working with Parameter Store

Prerequisites

In order to follow the examples below, you’ll need to have the AWS CLI set up on your local workstation. You can find a guide to install the AWS CLI here.

Creating a Parameter in the Parameter Store

To manually create a parameter in the Parameter Store, there are a few easy steps to follow:

  1. Sign into your AWS account and go to the EC2 console.
  2. Under the Systems Manager Shared Resources section click on Parameter Store.
  3. Click Get Started Now or Create Parameter and input the following information:
    1. Name: The name that you want the parameter to be called
    2. Description (optional): A description of what the parameter does or contains
    3. Type: You can choose either a String, String List, or Secure String
  4. Click Create Parameter and it will bring you to the Parameter Store console where you can see your newly created parameter

To create a parameter using the AWS CLI, here are examples of creating a String, SecureString, and String List:

String:

 aws ssm put-parameter --name "HostedZoneName" --type "String" --value "stelligent.com."

StringList:

aws ssm put-parameter --name "HostedZoneNames" --type "StringList" --value “stelligent.com.,google.com.,amazon.com.

SecureString:

 aws ssm put-parameter --name "Password" --type "SecureString" --value "Password123$"

After running these commands, the parameters will appear in your Parameter Store console.

Getting Parameter Values using the AWS CLI

To get a String, StringList, or SecureString parameter from the Parameter Store using the AWS CLI, use the following syntax in your terminal:

String:

aws ssm get-parameters --names "HostedZoneName"

The output in your terminal would look like this:

{
    "InvalidParameters": [], 
    "Parameters": [
        {
            "Type": "String", 
            "Name": "HostedZoneName", 
            "Value": "stelligent.com."
        }
    ]
}

StringList:

 aws ssm get-parameters --names "HostedZoneNames"

The output in your terminal would look like this:

{
    "InvalidParameters": [], 
    "Parameters": [
        {
            "Type": "StringList", 
            "Name": "HostedZoneNames", 
            "Value": "stelligent.com.,google.com.,amazon.com."
        }
    ]
}

SecureString:

aws ssm get-parameters --names "Password"

The output in your terminal would look like this (the value of the parameter is encrypted):

{
    "InvalidParameters": [], 
    "Parameters": [
        {
            "Type": "SecureString", 
            "Name": "Password", 
            "Value": "AQECAHicQXIA+CERB7LyH8+YXXUK1vqiI87oM0Wq7kgMCmGqUQAAAGowaAYJKoZIhvcNAQcGoFswWQIBADBUBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDE0kvmQLY6Ertt5BGwIBEIAnlfTl1XxzRwUzkFCBYn8P0lJ6dOdjPNQNbYgjD1+KTk/SlNJznvrF"
        }
    ]
}
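
If you actually need the decrypted value in a script, you can add the --with-decryption flag (assuming your IAM user or role is allowed to use the underlying KMS key):

aws ssm get-parameters --names "Password" --with-decryption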


Deleting a Parameter from the Parameter Store

To delete a parameter from the Parameter Store manually, follow these steps:

  1. Sign into your AWS account and go to the EC2 console.
  2. Under the Systems Manager Shared Resources section click on the Parameter Store tab.
  3. Select the parameter that you wish to delete
  4. Click the Actions button and select Delete Parameter from the menu

To delete a parameter using the AWS CLI, use the following syntax in your terminal (this works for String, StringList, and SecureString):

aws ssm delete-parameter --name "HostedZoneName"
aws ssm delete-parameter --name "HostedZoneNames"
aws ssm delete-parameter --name "Password"

Using Parameter Store in AWS CodePipeline

Parameter Store is very useful when constructing and running a deployment pipeline. Used alongside a simple token-replacement script, it can dynamically generate configuration files without your having to modify those files manually, which lets you pass frequently used configuration data through a continuous delivery process easily and efficiently. An illustration of the AWS infrastructure architecture is shown below.

(Diagram: the AWS infrastructure architecture for this example)

In this example, we have a deployment pipeline modeled via AWS CodePipeline that consists of two stages: a Source stage and a Build stage.

First, let’s take a look at the Source stage.

MyCodePipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      ArtifactStore:
        Location: !Ref S3Bucket
        Type: S3
      RoleArn: !GetAtt [CodePipelineRole, Arn]
      Stages:
        - Name: Source
          Actions:
          - Name: GitHubSource
            ActionTypeId:
              Category: Source
              Owner: ThirdParty
              Provider: GitHub
              Version: 1
            OutputArtifacts:
              - Name: OutputArtifact
            Configuration:
              Owner: stelligent
              Repo: parameter-store-example
              Branch: master
              OAuthToken: !Ref GitHubToken

As part of the Source stage, the pipeline pulls the source assets from the GitHub repository, which contains the configuration file to be modified along with a Ruby script that retrieves the parameters from the Parameter Store and replaces the variable tokens inside the configuration file.

After the Source stage completes, there’s a Build stage, where we’ll be doing all of the actual work to modify our configuration file.

The Build stage uses the CodeBuild project (defined as the ConfigFileBuild action) to run the Ruby script that will modify the configuration file and replace the variable tokens inside of it with the requested parameters from the Parameter Store.

  ConfigFileBuild:
    Type: AWS::CodeBuild::Project
    Properties:
      Name: !Ref AWS::StackName
      Description: Changes sample configuration file
      ServiceRole: !GetAtt CodeBuildRole.Arn
      Artifacts:
        Type: CODEPIPELINE
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_LARGE
        Image: aws/codebuild/eb-ruby-2.3-amazonlinux-64:2.1.6
      Source:
        Type: CODEPIPELINE
        BuildSpec: |
          version: 0.1
          phases:
            pre_build:
              commands:
                - gem install aws-sdk
            build:
              commands:
                - ruby sample_ruby_ssm.rb
          artifacts:
            files:
              - '**/*'

The CodeBuild project contains the buildspec that actually runs the Ruby script that makes the configuration changes (sample_ruby_ssm.rb).


Here is what the Ruby script looks like:

require 'aws-sdk'

client = Aws::SSM::Client.new(region: 'us-east-1')
resp = client.get_parameters({
  names: ["HostedZoneName", "Password"], # required
  with_decryption: true,
})

hostedzonename = resp.parameters[0].value
password = resp.parameters[1].value

file_names = ['sample_ssm_config.json']

file_names.each do |file_name|
  text = File.read(file_name)

  # Display text for usability
  puts text

  # Substitute Variables
  new_contents = text.gsub(/HOSTEDZONE/, hostedzonename)
  new_contents = new_contents.gsub(/PASSWORD/, password)

  # To write changes to the file, use:
  File.open(file_name, "w") {|file| file.puts new_contents.to_s }
end

Here is what the configuration file with the variable tokens (HOSTEDZONE, PASSWORD) looks like before it gets modified:

{
  "Parameters" : {
    "HostedZoneName" : "HOSTEDZONE",
    "Password" : "PASSWORD"
  }
}

Here is what the configuration file looks like after the Ruby script pulls the requested parameters from the Parameter Store and replaces the variable tokens (HOSTEDZONE, PASSWORD). The Password parameter is decrypted by the Ruby script in this process.

{
  "Parameters" : {
    "HostedZoneName" : "stelligent.com.",
    "Password" : "Password123$"
  }
}

IMPORTANT NOTE: In the example above, you can see that the “Password” parameter is returned in plain text (Password123$). That happens because the Ruby script requests the SecureString parameter with its decrypted value (with_decryption: true). The example is shown this way purely to illustrate what returning multiple parameters into a configuration file looks like. In a real-world situation you would never want a password written out in plain text, because that presents security issues and is bad practice in general. To return the “Password” value in its encrypted form instead, simply change “with_decryption: true” to “with_decryption: false” in the Ruby script. Here is what the modified configuration file would look like with the “Password” parameter returned in its encrypted form:

(Screenshot: the configuration file with the Password value in its encrypted form)

Launch the Solution via CloudFormation

To run this deployment pipeline and see Parameter Store in action, you can click the “Launch Stack” button below which will take you directly to the CloudFormation console within your AWS account and load the CloudFormation template. Walk through the CloudFormation wizard to launch the stack. 

In order to be able to execute this pipeline you must have the following:

  • The AWS CLI already installed on your local workstation. You can find a guide to install the AWS CLI here
  • A generated GitHub OAuth token for your GitHub user. Instructions on how to generate an OAuth token can be found here
  • In order for the Ruby script that is part of this pipeline process to run, you must create these two parameters in your Parameter Store:
    1. HostedZoneName
    2. Password
aws ssm put-parameter --name "HostedZoneName" --type "String" --value "stelligent.com."
aws ssm put-parameter --name "Password" --type "SecureString" --value "Password123$"

As you begin to launch the pipeline in CloudFormation (Launch Stack button is located below), you will be prompted to enter this one parameter:

  • GitHubToken (Your generated GitHub OAuth token)

Once you have passed in this initial parameter, you can begin to launch the pipeline that will make use of the Parameter Store.

NOTE: You will be charged for your CodePipeline and CodeBuild usage.

Once the stack is CREATE_COMPLETE, click on the Outputs tab and then the value for the CodePipelineUrl output to view the pipeline in CodePipeline.
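
If you prefer the CLI, you can read the same output with describe-stacks (substitute the stack name you chose in the wizard):

aws cloudformation describe-stacks --stack-name parameter-store-example --query 'Stacks[0].Outputs' --output table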

Additional Resources

Here are some additional resources you might find useful:

Summary

In this post, you learned how to use the EC2 Systems Manager Parameter Store and some of its features. You learned how to create, delete, get, and set parameters manually as well as through the use of the AWS CLI. You also learned how to use the Parameter Store in a practical situation by incorporating it in the process of setting configuration data that is used as part of a CodePipeline continuous delivery pipeline.

Sample Code

The code for the examples demonstrated in this post is located at https://github.com/stelligent/parameter-store-example. Let us know if you have any comments or questions: @stelligent or @TreyMcElhattan.

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “one-button everything” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

Docker Swarm Mode on AWS

Docker Swarm Mode is the latest entrant in a large field of container orchestration systems. Docker Swarm was originally released as a standalone product that ran master and agent containers on a cluster of servers to orchestrate the deployment of containers. This changed with the release of Docker 1.12 in July of 2016. Docker Swarm Mode is now officially part of docker-engine, and built right into every installation of Docker. Swarm Mode brought many improvements over the standalone Swarm product, including:

  • Built-in Service Discovery: Docker Swarm originally included drivers to integrate with Consul, etcd or Zookeeper for the purposes of Service Discovery. However, this required the setup of a separate cluster dedicated to service discovery. The Swarm Mode manager nodes now assign a unique DNS name to each service in the cluster, and load balances between the running containers in those services.
  • Mesh Routing: One of the most distinctive features of Docker Swarm Mode is Mesh Routing. All of the nodes within a cluster are aware of the location of every container within the cluster via gossip. This means that if a request arrives on a node that is not currently running the service for which that request was intended, the request will be routed to a node that is running a container for that service. Nodes therefore don’t have to be purpose-built for specific services: any node can run any service, and every node can be load balanced equally, reducing complexity and the number of resources needed for an application.
  • Security: Docker Swarm Mode uses TLS encryption for communication between services and nodes by default.
  • Docker API: Docker Swarm Mode utilizes the same API that every user of Docker is already familiar with. No need to install or learn additional software.
  • But wait, there’s more! Check out some of the other features at Docker’s Swarm Mode Overview page.

For companies facing increasing complexity in Docker container deployment and management, Docker Swarm Mode provides a convenient, cost-effective, and performant tool to meet those needs.

Creating a Docker Swarm cluster


For the sake of brevity, I won’t reinvent the wheel and go over manual cluster creation here. Instead, I encourage you to follow the fantastic tutorial on Docker’s site.

What I will talk about, however, is the new Docker for AWS tool that Docker recently released. This is an AWS CloudFormation template that can be used to quickly and easily set up all of the necessary resources for a highly available Docker Swarm cluster, and because it is a CloudFormation template, you can edit it to add any additional resources, such as Route 53 hosted zones or S3 buckets, to your application.

One of the very interesting features of this tool is that it dynamically configures the listeners for your Elastic Load Balancer (ELB). Once you deploy a service on Docker Swarm, the built-in management service that is baked into instances launched with Docker for AWS will automatically create a listener for any published ports for your service. When a service is removed, that listener will subsequently be removed.
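
For example, once a cluster is up, deploying a service that publishes a port is enough for Docker for AWS to add the matching ELB listener. A quick sketch (the image and port are borrowed from the voting app used later in this post):

docker service create --name vote --replicas 2 --publish 5000:80 dockersamples/examplevotingapp_vote:before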

If you want to create a Docker for AWS stack, read over the list of prerequisites, then click the Launch Stack button below. Keep in mind you may have to pay for any resources you create. If you are deploying Docker for AWS into an older account that still has EC2-Classic, or wish to deploy Docker for AWS into an existing VPC, read the FAQ here for more information.

(Launch Stack button)

Deploying a Stack to Docker Swarm

With the release of Docker 1.13 in January of 2017, major enhancements were added to Docker Swarm Mode that greatly improved its ease of use. Docker Swarm Mode now integrates directly with Docker Compose v3 and officially supports the deployment of “stacks” (groups of services) via docker-compose.yml files. With the new properties introduced in Docker Compose v3, it is possible to specify node affinity via tags, rolling update policies, restart policies, and desired scale of containers. The same docker-compose.yml file you would use to test your application locally can now be used to deploy to production. Here is a sample service with some of the new properties:

version: "3"
services:

  vote:
    image: dockersamples/examplevotingapp_vote:before
    ports:
      - 5000:80
    networks:
      - frontend
    deploy:
      replicas: 2
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.role == worker]
  
networks:
  frontend:

While most of the properties within this YAML structure will be familiar to anyone used to Docker Compose v2, the deploy property is new to v3. The replicas field indicates the number of containers to run within the service. The update_config  field tells the swarm how many containers to update in parallel and how long to wait between updates. The restart_policy field determines when a container should be restarted. Finally, the placement field allows container affinity to be set based on tags or node properties, such as Node Role. When deploying this docker-compose file locally, using docker-compose up, the deploy properties are simply ignored.

Deployment of a stack is incredibly simple. Follow these steps to download Docker’s example voting app stack file and run it on your cluster.

SSH into any one of your Manager nodes with the user 'docker' and the EC2 Keypair you specified when you launched the stack.
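
For example (substitute your key file and the public IP address of one of your manager nodes):

ssh -i ~/.ssh/my-keypair.pem docker@<manager-public-ip>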

curl -O https://raw.githubusercontent.com/docker/example-voting-app/master/docker-stack.yml

docker stack deploy -c docker-stack.yml vote

You should now see Docker creating your services, volumes and networks. Now run the following command to view the status of your stack and the services running within it.

docker stack ps vote

You’ll get output similar to this:

(Terminal output of docker stack ps vote showing the stack’s containers)

This shows the container id, container name, container image, node the container is currently running on, its desired and current state, and any errors that may have occurred. As you can see, the vote_visualizer.1 container failed at run time, so it was shut down and a new container spun up to replace it.

This sample application opens up three ports on your Elastic Load Balancer (ELB): 5000 for the voting interface, 5001 for the real-time vote results interface, and 8080 for the Docker Swarm visualizer. You can find the DNS Name of your ELB by either going to the EC2 Load Balancers page of the AWS console, or viewing your CloudFormation stack Outputs tab in the CloudFormation page of the AWS Console. Here is an example of the CloudFormation Outputs tab:

(Screenshot: the CloudFormation Outputs tab)

DefaultDNSTarget is the URL you can use to access your application.

If you access the Visualizer on port 8080, you will see an interface similar to this:

(Screenshot: the Docker Swarm visualizer)

This is a handy tool to see which containers are running, and on which nodes.

Scaling Services

Scaling services is as simple as running the command docker service scale SERVICENAME=REPLICAS, for example:

docker service scale vote_vote=3

will scale the vote service to 3 containers, up from 2. Because Docker Swarm uses an overlay network, it is able to run multiple containers of the same service on the same node, allowing you to scale your services as high as your CPU and Memory allocations will allow.

Updating Stacks

If you make any changes to your docker-compose file, updating your stack is incredibly easy. Simply run the same command you used to create your stack:

docker stack deploy -c docker-stack.yml vote

Docker Swarm will update any services that were changed from the previous version, and adhere to any update_configs specified in the docker-compose file. In the case of the vote service specified above, only one container will be updated at a time, and a 10 second delay will occur once the first container is successfully updated before the second container is updated.

Next Steps

This was just a brief overview of the capabilities of Docker Swarm Mode in Docker 1.13. For further reading, feel free to explore the Docker Swarm Mode and Docker Compose docs. In another post, I’ll be going over some of the advantages and disadvantages of Docker Swarm Mode compared to other container orchestration systems, such as ECS and Kubernetes.

If you have any experiences with Docker Swarm Mode that you would like to share, or have any questions on any of the materials presented here, please leave a comment below!

References

Docker Swarm Mode

Docker for AWS

Docker Example Voting App Github

Introduction to Amazon Lightsail

At re:Invent 2016, Amazon introduced Lightsail, the newest addition to the list of AWS Compute Services. It is a quick and easy way to launch a virtual private server within AWS.

As someone who moved into the AWS world from an application development background, this sounded pretty interesting to me. Getting started with deploying an app can be tricky, especially if you want to do it with code and scripting rather than going through the web console. CloudFormation is an incredible tool but I can’t be the only developer to look at the user guide for deploying an application and then decide that doing my demo from localhost wasn’t such a bad option. There is a lot to learn there and it can be frustrating, because you just want to get your app up and running, but before you can even start working on that you have to figure out how to create your VPC correctly.

Lightsail takes care of that for you. The required configuration is minimal: you pick a region, an allocation of compute resources (i.e. memory, CPU, and storage), and an image to start from. They even offer images tailored to common developer setups, so it is possible to just log in, download your source code, and be off to the races.
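
If you’d rather browse those choices from the command line, the Lightsail CLI can list the available images and instance sizes, e.g.:

aws lightsail get-blueprints --query 'blueprints[].blueprintId'
aws lightsail get-bundles --query 'bundles[].bundleId'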

No, you can’t use Lightsail to deploy a highly available, load-balanced application, but if you are new to working with AWS it is a great way to get a feel for what you can do without being overwhelmed by all the possibilities. Plus, once you get the hang of it you have a better foundation for branching out to some of the more comprehensive solutions offered by Amazon.

Deploying a Node App From Github

Let’s look at a basic example. We have source code in GitHub, and we want it deployed on the internet. Yes, this is the monumental kind of challenge that I like to tackle every day.

Setup

You will need:

  • An AWS Account
    • At this time Lightsail is only supported in us-east-1 so you will have to try it out there.
  • The AWS Command Line Interface
    • The Lightsail CLI commands are relatively new, so please make sure you are updated to the latest version.
  • About ten minutes

Step 1 – Create an SSH Key

First of all, let’s create an SSH key for connecting to our instance. This is not required (Lightsail has a default key that you can use), but it is generally better to avoid using shared keys.
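
Here’s a minimal sketch using the Lightsail CLI (the key name is just an example); it saves the returned private key locally so we can SSH in later:

aws lightsail create-key-pair --key-pair-name lightsail-demo-key --query 'privateKeyBase64' --output text > lightsail-demo-key.pem
chmod 400 lightsail-demo-key.pem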

Step 2 – Create a User Data Script

The user data script is how you give instructions to AWS to tailor an instance to your needs. This can be as complex or simple as you want it to be. In this case we want our instance to run a Node application that is in GitHub. We are going to use Amazon Linux for our instance, so we need to install Node and Git and then pull down the app from GitHub.

Take the following snippet and save it as userdata.sh. Feel free to modify it if you have a different app that you would like to try deploying.
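
Something along these lines works for Amazon Linux (the repository URL and the way the app is started are placeholders for whatever you want to deploy):

#!/bin/bash
# Install Node.js (using the NodeSource repository, one common approach) and Git on Amazon Linux
curl --silent --location https://rpm.nodesource.com/setup_6.x | bash -
yum install -y nodejs git

# Pull down the app from GitHub (placeholder repository) and start it
cd /home/ec2-user
git clone https://github.com/your-user/your-node-app.git app
cd app
npm install
# Start the app in the background (a process manager would be better for anything real)
npm start &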

A user data script is not the only option here. Lightsail supports a variety of preconfigured images. For example it has a WordPress image, so if you needed a WordPress server you wouldn’t have to do anything but launch it. It also supports creating an instance snapshot, so you could start an instance, log in, do any necessary configuration manually, and then save that snapshot for future use.

That being said, once you start to move beyond what Lightsail provides, you will find yourself working with instance user data for a variety of purposes, and it is nice to get a feel for it with some basic examples.

Step 3 – Launch an Instance

Next we have to actually create the instance. We simply call create-instances and refer to the key and user data script we just made.
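
A sketch of that call (the instance name is arbitrary, and the blueprint and bundle IDs are examples; use aws lightsail get-blueprints and get-bundles to pick real ones):

aws lightsail create-instances \
  --instance-names "node-app-demo" \
  --availability-zone "us-east-1a" \
  --blueprint-id "amazon_linux" \
  --bundle-id "nano_1_0" \
  --user-data file://userdata.sh \
  --key-pair-name "lightsail-demo-key"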

This command has several parameters so let’s run through them quickly:

  • instance-names – Required: This is the name for your server.
  • availability-zone – Required: An availability zone is an isolated data center in a region.  Since Lightsail isn’t concerned with high-availability deployments, we just have to choose one.
  • blueprint-id – Required: The blueprint is the reference to the server image
  • bundle-id – Required: The set of specs that describe your server
  • [user-data] – Optional: The user data file we created above (passed as file://userdata.sh).  If no script is specified your instance will have the functionality provided by the blueprint, but no capabilities tailored to your needs.
  • [key-pair-name] – Optional: The name of the key pair we will use to connect to the instance.  If this is not specified there is a default key that is available through the Lightsail console.

It will take about a minute for your instance to be up and running.  If you have the web console open, you can see when it is ready:
(Screenshots: the Lightsail console showing the instance status)
Once it is running, it is time to check out our application…
(Screenshot: the application failing to load in the browser)

Or not.

Step 4 – Troubleshooting

Let’s log in and see where things went wrong.  We will need the key we created in the first step and the IP address of our instance.  You can see that in the web console or get it through another CLI command that pulls the instance data.
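
For example (the names follow the earlier examples; the IP placeholder is whatever the first command returns):

# Look up the instance's public IP address
aws lightsail get-instance --instance-name node-app-demo --query 'instance.publicIpAddress' --output text

# Amazon Linux instances use the ec2-user account
ssh -i lightsail-demo-key.pem ec2-user@<public-ip>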

When you are troubleshooting any sort of AWS virtual server, a good place to start is by checking out: /var/log/cloud-init-output.log


cloud-init-output.log contains the output from the instance launch commands.  That includes the commands run by Amazon to configure the instance as well as any commands from your user data script.  Let’s take a look…

(Screenshot: cloud-init-output.log showing the application starting)

Ok… that actually looks like it started the app correctly.  So what is the problem?  Well, if you looked at the application linked above and actually read the README (which, frankly, sounds exhausting) you probably already know…
(Screenshot: the application’s README)

If we take a look at our firewall settings for the instance networking:

(Screenshot: the instance’s firewall settings in the Lightsail console)

Step 5 – Update The Instance Firewall

We can fix this!  AWS is managing the VPC that your instance is deployed in, but you still have the ability to control access.  We just have to open the port that our application is listening on.
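
With the Lightsail CLI that looks something like this (the port shown is a placeholder for whatever port your application actually listens on):

aws lightsail open-instance-public-ports \
  --instance-name node-app-demo \
  --port-info fromPort=3000,toPort=3000,protocol=tcp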

Then if we reload the page: Success!


Wrapping Up

That’s it, you are deployed!  If you’re familiar with Heroku you are probably not particularly impressed right now, but if you tried to use AWS to script out a simple app deployment in the past and got fed up the third time your CloudFormation stack rolled back due to having your subnets configured incorrectly, I encourage you to give Lightsail a shot.

Did you find this post interesting? Are you passionate about working with the latest AWS technologies? If you are, Stelligent is hiring and we would love to hear from you!

More Info

Lightsail Announcement

Full Lightsail CLI Reference

DevOps in AWS Radio: AWS CodeBuild (Episode 5)

In this episode, Paul Duvall and Brian Jakovich from Stelligent cover recent DevOps in AWS news and speak about the release of AWS CodeBuild and how you can integrate the service with other services on AWS.

Here are the show notes:

DevOps in AWS News

Episode Topics

  1. Benefits of using AWS CodeBuild along with alternatives
  2. Which programming languages/platforms and operating systems does CodeBuild support?
  3. What’s the pricing model?
  4. How does the buildspec.yml file work?
  5. How do you run CodeBuild (SDK, CLI, Console, CloudFormation)?
  6. Any Jenkins integrations?
  7. Which source providers does CodeBuild support?
  8. How to integrate CodeBuild with the rest of Developer Tools Suite

Related Blog Posts

  1. An Introduction to AWS CodeBuild
  2. Deploy to Production using AWS CodeBuild and the AWS Developer Tools Suite

About DevOps in AWS Radio

On DevOps in AWS Radio, we cover topics around applying DevOps principles and practices such as Continuous Delivery in the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners into our roundtables and speak with engineers who’ve recently published on our blog and we’ll also be reaching out to the wider DevOps in AWS community to get their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, Containers, Serverless, etc.)
    • Storage (e.g. S3, EBS, etc.)
    • Database and Data (RDS, DynamoDB, etc.)
  • Organizational and Team Structures and Practices
  • Team and Organization Communication and Collaboration
  • Cultural Indicators
  • Version control systems and processes
  • Deployment Pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service Architectures – e.g. Microservices
  • Automation of Build and deployment processes
  • Automation of testing and other verification approaches, tools and systems
  • Automation of security practices and approaches
  • Continuous Feedback systems
  • Many other Topics…

Sharing for the People: Stelligentsia Publications

Many moons ago, Jonny coined the term “Stelligentsia” to refer to our small, merry band of technologists at the time. Times have changed and the team has grown by a factor of 10, but we strive to live up to the name as all things DevOps and AWS continue to evolve.


We find the best way to do this is not only to continually learn and improve, but also to share our knowledge with others – a core value at Stelligent. We believe that you are more valuable when you share knowledge with as many people as possible than when you keep it locked inside your head, letting only paying customers access the principles and practices derived from those experiences.

Over the years at Stelligent, we’ve published many resources that are available to everyone. In this post, we do our best to list some of the blogs, articles, open source projects, refcardz, books, and other publications authored by Stelligentsia. If we listed everything, it’d likely be less useful, so we did our best to curate the publications listed in this post. We hope this collection provides a valuable resource to you and your organization.

Open Source

You can access our entire GitHub organization by going to https://github.com/stelligent/. The listing below represents the most useful, relevant repositories in our organization.

Blogs

Stelligent

Here are the top 10 blog entries from Stelligentsia in 2016:

For many more detailed posts from Stelligentsia, visit the Stelligent Blog.

Series and Categories

  • Continuous Security – Five-part Stelligent Blog Series on applying security into your deployment pipelines
  • CodePipeline – We had over 20 CodePipeline-related posts in 2016 that describe how to integrate the deployment pipeline orchestration service with other tools and services
  • Serverless Delivery – Parts 1, 2, and 3

AWS Partner Network

Linkedin

Articles

Over the years, we have published many articles across the industry. Here are a few of them.

IBM developerWorks: Automation for the People

You can find a majority of the articles in this series by going to Automation for the People. Below, we provide a list of all accessible articles:

IBM developerWorks: Agile DevOps

You can find 7 of the 10 articles in this series by going to Agile DevOps. Below, we provide a list of all 10 articles:

DZone

  • Continuous Delivery Patterns – A description and visualization of Continuous Delivery patterns and antipatterns derived from the book on Continuous Delivery and Stelligent’s experiences.
  • Continuous Integration Patterns and Antipatterns – Reviews Patterns (a solution to a problem) and Anti-Patterns (ineffective approaches sometimes used to “fix” a problem) in the CI process.

Videos and Screencasts

Collectively, we have spoken at large conferences, meetups, and other events. We’ve included video recordings of some of these along with some recorded screencasts on DevOps, Continuous Delivery, and AWS.

AWS re:Invent

How-To Videos

  • DevOps in the Cloud – DevOps in the Cloud LiveLessons walks viewers through the process of putting together a complete continuous delivery platform for a working software application written in Ruby on Rails
  • DevOps in AWS – DevOps in AWS focuses on how to implement the key architectural construct in continuous delivery: the deployment pipeline. Viewers receive step-by-step instructions on how to do this in AWS based on an open-source software system that is in production. They also learn the DevOps practices teams can embrace to increase their effectiveness.

Stelligent’s YouTube

Websites

Podcasts

  • DevOps in AWS Radio – How to create a one-click (or “no click”) implementation of software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast delves into the cultural, process, tooling, and organizational changes that can make this possible.

Social Media

Books

  • Continuous Integration: Improving Software Quality and Reducing Risk – Through more than forty CI-related practices using application examples in different languages, readers learn that CI leads to more rapid software development, produces deployable software at every step in the development lifecycle, and reduces the time between defect introduction and detection, saving time and lowering costs. With successful implementation of CI, developers reduce risks and repetitive manual processes, and teams receive better project visibility.

Additional Resources

Stelligent Reading List – A reading list for all Stelligentsia that we update from time to time.

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “one-button everything” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

 

AWS CodeBuild is Here

At re:Invent 2016, AWS introduced AWS CodeBuild, a new service that compiles source code, runs tests, and produces ready-to-deploy software packages.  AWS CodeBuild handles provisioning, management, and scaling of your build servers.  You can either use pre-packaged build environments to get started quickly or create custom build environments using your own build tools.  CodeBuild charges by the minute for compute resources, so you aren’t paying for a build environment while it is not in use.

AWS CodeBuild Introduction

Stelligent engineer Harlen Bains has posted An Introduction to AWS CodeBuild to the AWS Partner Network (APN) Blog.  In the post he explores the basics of AWS CodeBuild and then demonstrates how to use the service to build a Java application.

Integrating AWS CodeBuild with AWS Developer Tools

In the follow-up post, Deploy to Production using AWS CodeBuild and the AWS Developer Tools Suite, Stelligent CTO and AWS Community Hero Paul Duvall expands on how to integrate and automate the orchestration of CodeBuild with the rest of the AWS Developer Tools suite – including AWS CodeDeploy, AWS CodeCommit, and AWS CodePipeline – using AWS’ provisioning tool, AWS CloudFormation.  He goes over the benefits of automating all the actions and stages into a deployment pipeline, while also providing an example with a detailed screencast.

In the Future

Look to the Stelligent Blog for announcements, evaluations, and guides on new AWS products.  We are always looking for engineers who love to make things work better, faster, and just get a kick out of automating everything.  If you live and breathe DevOps, continuous delivery, and AWS, we want to hear from you.

Big Data at AWS re:Invent 2016

AWS re:Invent 2016 has kicked off for me in the realm of Big Data. It’s a challenging topic and one of great interest to companies around the globe, so it was a no-brainer to be hanging around with folks at The Mirage for the Big Data talks. This blog post is a quick write-up of some interesting topics, announcements, and features of the various tools covered today.

Big Data in AWS

The Big Data Mini Con had no announcements for new services. However, Amazon’s ecosystem of Big Data tools is growing rapidly, and we got a sweet introduction to what is currently available. Here are some of the more interesting ones:

  • Import/Export Snowball – A nearly indestructible petabyte scale means of importing or exporting data into or out of Amazon S3.
  • Kinesis – AWS flavor of data stream and real-time analytics processing.
  • Redshift – A petabyte scale data warehouse solution as a service.
  • EMR – Apache Ecosystem / Hadoop as a service.
  • Data Pipeline – Data orchestration service for inter-AWS and on-premise workflows.
  • S3 – Durable, infinitely scalable, distributed object storage in the cloud.
  • Direct Connect – Up to a 10 Gbps dedicated connection from your VPC to your on-premises network.
  • Machine Learning – A real-time predictive modeling service.
  • Quick Sight – A business intelligence, data visualization and analytics tool.

Announcement: Data Transformation with Lambda on Kinesis Streams

A new feature that is coming to Kinesis — the ability for you to transform your streaming data with Lambda. The idea is to have a lambda function transform your data as it comes in instead of relying on an application running in EC2 processing the stream. It’s another very effective way of controlling costs and reducing the overhead of dealing with scaling yourself. In order to kickstart acceptance, Amazon intends to provide a library of templates for common transformation use-cases.

Amazon EMR

Recently, autoscaling in Amazon Elastic MapReduce (EMR) became generally available. You can now configure your EMR clusters to scale based on metrics in Amazon CloudWatch. It’s no dummy either: not only will it perform scaling operations based on your actual processing throughput, but it will also optimize your instance time.

  • The number of metrics available in CloudWatch for EMR clusters is staggering, and it’s this level of integration that makes autoscaling super intelligent. Instead of relying on abstract information about CPU and memory (which can be hit or miss based on your workloads), you can configure scaling events to happen on REAL throughput metrics such as MapSlotsOpen, ReduceSlotsOpen or AppsPending, based on which tools you’re running.
  • Instance time optimization is built into your EMR cluster’s autoscaling. It will automatically give you full utilization of your instance before terminating due to a scale-down event. So when you scale up and purchase an hour of EC2 capacity, you will get the entire hour of extra horsepower before it scales down. This way you get all of the capacity you paid for, versus paying for the full hour and only utilizing a few minutes of it.

Announcement: Advanced Spot Provisioning & Spot Block support

A feature coming soon to EMR is Advanced Spot Provisioning, an extension to the spot instance reservations specifically tailored for distributed systems in AWS. This new feature will allow you to configure spot instance reservations for a list of instance types. You will be able to have a range of instance sizes running in your cluster and have spot instances requested differently for both your core node fleet and your task node fleet. The provisioning tool will select the most optimal instance and availability zone based on the capacity and price you have configured.

In addition to the optimizations of spot provisioning I mentioned above, EMR will also take advantage of Spot Instance Blocks. With traditional spot instances, you can reserve instances with great discounts at the risk of losing that capacity when normal demand increases. With Spot Instance Blocks, you can block off 1 to 6 hours of spot instance capacity. Spot Instance Blocks are priced differently than Spot Instances, but can be a big source of cost reduction in your data processing architecture for larger workloads.

Compute & Storage Decoupling

The final concept with EMR that was really driven home today was the decoupling of your compute and storage resources. In a traditional setup you typically have storage and compute bound together, meaning when you need more storage you end up with more compute, and vice versa.

With the latest iteration of Amazon EMR (5.2.0, recently released), HBase can now be fully integrated with EMRFS, which uses Amazon S3 for storage. By moving your storage into an S3-backed solution, you no longer have to scale your cluster for storage demands, you get effectively infinite scalability, and you take advantage of S3’s eleven 9s of durability. Traditional HDFS is still installed on EMR so you can take advantage of a distributed local data store as needed.

Amazon Redshift

Amazon Redshift is an exceptional tool for Data Warehousing and it’s one of my favorite services offered by AWS. If you haven’t already, take some time to dive deep into the documentation and understand the complexities behind maximizing your architecture. This session was an excellent starter into concepts like data processing, distribution keys, sort keys, and compression.

Keys

Optimizing your queries in Redshift, like any other database, is going to be based around your keys. Since this is a lengthy topic, I’ll give you a quick overview of the important keys in Redshift. I recommend watching this session online or reviewing the documentation to fully understand how to architect them.

  • Distribution keys are used to physically store rows together and collate. Using the distribution key in all your JOINs is a best practice for performance — even though it may seem redundant.
  • Sort Keys are columns you specify for Redshift to optimize queries with. Redshift can skip entire blocks of data by referencing header data internally thus dramatically improving query performance.
  • Interleaved Sort Keys are designed for very large data sets and provide more performance as the table size increases. You can create interleaved sort keys with up to eight columns.

AWS Schema Conversion Tool

The schema conversion tool can be pointed to an existing database to copy its schema into a Redshift cluster and recommend schema changes for compatibility. Currently, it works with Oracle, Netezza, Greenplum, Teradata and more recently Redshift itself.

The schema conversion tool now supports Redshift-to-Redshift conversion as a means of optimizing your existing data structure. By analyzing your existing Redshift cluster, the tool can provide recommendations for distribution keys and sort keys. Since the optimization service only provides recommendations based on existing usage and a very flat view of your data, you’ll want to test extensively before making a hard switchover.

Wrap Up

Today’s talks were enlightening, and I eagerly await the new big data-as-a-service products to be announced in the coming days. If you want more information about these topics or want to hear the case studies yourself, I recommend you watch the sessions when they are released:

  • BDM205 – Big Data Mini Con State of the Union
  • BDM401 – Deep Dive: Amazon EMR Best Practices & Design Patterns
  • BDM402 – Best Practices for Data Warehousing with Amazon Redshift

For those of you at re:Invent, some of these sessions will be repeated and I highly recommend them.

If you see my ugly mug the next couple of days, be sure to say hello — I’d love to know what you’re doing with Big Data, DevOps, or AWS.

Going to AWS re:Invent is one of many excellent perks for Stelligent engineers. Good news! We’re hiring, check out our Careers page.

Stelligent CTO and Co-Founder, Paul Duvall named an AWS Community Hero

Paul Duvall, Stelligent CTO and Co-Founder, has been named an AWS Community Hero. The AWS Community Heroes program is designed to recognize and honor individuals who have had a real impact within the AWS community. Among those are AWS experts who go above and beyond to share their experience and knowledge through social media, blogs, events, user groups, and workshops.

“To be first nominated and ultimately selected as an AWS Community Hero is a great honor,” said Duvall. “I have to add that it is very natural for me to promote AWS, which we have recommended to all who know Stelligent since we first gained exposure to it in 2009, and to which we have exclusively focused our efforts since 2013. AWS is, without a doubt, the best cloud service provider.”

Duvall is constantly exploring how to build solutions in AWS that are in line with continuous delivery principles, and he is currently working on a new book, Enterprise DevOps in AWS, a spiritual successor to his critically acclaimed Continuous Integration released in 2007.

“Paul has established crucial tenets at Stelligent that have shaped everyone here, and we couldn’t be more pleased that AWS has recognized his impact, not just here at Stelligent, but within the AWS community-at-large,” noted Rob Daly, Stelligent CEO and Co-Founder. Two of Stelligent’s core values are “sharing” and “continuous improvement,” and all Stelligent employees are held to those values, with Duvall leading through example. For instance, he shares his AWS expertise and experience through frequent posts on Stelligent’s blog and helps his colleagues craft theirs, as well. Duvall also leads efforts with various AWS product teams, offering insight and R&D effort to explore and contribute feedback regarding various AWS services, both in beta and general availability.

Along with Paul, Stelligent engineers have been responsible for dozens of AWS- and DevOps-related blog posts in the last year alone.  “Every post on our blog is dedicated to achieving continuous delivery while adhering to AWS best practices,” added Jonny Sywulak, Stelligent’s Blog Czar and Senior DevOps Automation Engineer. “We obsess over customers, and we dedicate ourselves to applying what we believe are essential practices to achieve the aims of continuous delivery. This acknowledgement of Paul underscores the value of sharing experiences to advance technology and to create awareness about how Stelligent can help.”

More details are available at our press release.

About Stelligent
Stelligent is a technology services company that provides DevOps Automation in the Amazon Web Services (AWS) cloud. We aim for “one-click deployment.” Our reason for being is to help our customers gain the ability to continuously deploy their software, when they want to, and with confidence. We’ve been providing Continuous Delivery solutions in AWS since 2009. Follow @Stelligent on twitter.com/Stelligent. Learn more at http://www.stelligent.com.