Application Auto Scaling with Amazon ECS

In this blog post, you’ll see an example of Application Auto Scaling for Amazon ECS (EC2 Container Service). Automatic scaling of the container instances in your ECS cluster has been available for quite some time, but until recently you could not scale the tasks in your ECS service with built-in AWS tooling. In May of 2016, Automatic Scaling with Amazon ECS was announced, allowing us to configure elasticity into our deployed container services in Amazon’s cloud.

Developer Note: Jump to the “CloudFormation Examples” section if you want to get right to the code!

Why should you auto scale your container services?

Automatic scaling of your containers lets you run your microservices both efficiently and effectively. If your primary goals include fault tolerance or elastic workloads, then combining cloud autoscaling with infrastructure as code is the key to success. With AWS Application Auto Scaling, you can quickly configure elasticity into your architecture in a repeatable and testable way.

Introducing CloudFormation Support

For the first few months after launch, this feature was not available in AWS CloudFormation. Configuration was either a manual process in the AWS Console or a series of API calls made from the CLI or one of Amazon’s SDKs. As of August 2016, we can manage this configuration easily using CloudFormation.

The resource types you’re going to need to work with are:

  • AWS::ApplicationAutoScaling::ScalableTarget
  • AWS::ApplicationAutoScaling::ScalingPolicy
  • AWS::CloudWatch::Alarm
  • AWS::IAM::Role

The ScalableTarget and ScalingPolicy are the new resources that configure how your ECS Service behaves when an Alarm is triggered. In addition, you will need to create a new Role that gives the Application Auto Scaling service access to describe your CloudWatch Alarms and to modify your ECS Service, such as increasing your Desired Count.

CloudFormation Examples

The below examples were written for AWS CloudFormation in the YAML format. You can plug these snippets directly into your existing templates with minimal adjustments necessary. Enjoy!

Step 1: Implement a Role

These permissions were gathered from various sources in the AWS documentation.

ApplicationAutoScalingRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Statement:
      - Effect: Allow
        Principal:
          Service:
          - application-autoscaling.amazonaws.com
        Action:
        - sts:AssumeRole
     Path: "/"
     Policies:
     - PolicyName: ECSBlogScalingRole
       PolicyDocument:
         Statement:
         - Effect: Allow
           Action:
           - ecs:UpdateService
           - ecs:DescribeServices
           - application-autoscaling:*
           - cloudwatch:DescribeAlarms
           - cloudwatch:GetMetricStatistics
           Resource: "*"

Step 2: Implement some alarms

The below alarm will initiate scaling based on container CPU Utilization.

AutoScalingCPUAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: Containers CPU Utilization High
    MetricName: CPUUtilization
    Namespace: AWS/ECS
    Statistic: Average
    Period: '300'
    EvaluationPeriods: '1'
    Threshold: '80'
    AlarmActions:
    - Ref: AutoScalingPolicy
    Dimensions:
    - Name: ServiceName
      Value:
        Fn::GetAtt:
        - YourECSServiceResource
        - Name
    - Name: ClusterName
      Value:
        Ref: YourECSClusterName
    ComparisonOperator: GreaterThanOrEqualToThreshold

Step 3: Implement the ScalableTarget

This resource attaches Application Auto Scaling to your ECS Service and sets the limits within which it can operate. Other than your MinCapacity and MaxCapacity, these settings are quite fixed when used with ECS.

AutoScalingTarget:
  Type: AWS::ApplicationAutoScaling::ScalableTarget
  Properties:
    MaxCapacity: 20
    MinCapacity: 1
    ResourceId:
      Fn::Join:
      - "/"
      - - service
        - Ref: YourECSClusterName
        - Fn::GetAtt:
          - YourECSServiceResource
          - Name
    RoleARN:
      Fn::GetAtt:
      - ApplicationAutoScalingRole
      - Arn
    ScalableDimension: ecs:service:DesiredCount
    ServiceNamespace: ecs

Step 4: Implement the ScalingPolicy

This resource defines your exact scaling behavior: when to scale up or down, and by how much. Pay close attention to the StepAdjustments in the StepScalingPolicyConfiguration, as the documentation on this is very vague.

In the below example, we are scaling up by 2 containers when the alarm metric is above the Metric Threshold and scaling down by 1 container when it is below the Metric Threshold. Take special note of how MetricIntervalLowerBound and MetricIntervalUpperBound work together: both are offsets relative to the alarm’s Threshold, so with a Threshold of 80, a lower bound of 0 matches any metric value of 80 or above. When unspecified, they are effectively infinity for the upper bound and negative infinity for the lower bound. Finally, note that these thresholds are computed from aggregated metrics, meaning the Average, Minimum or Maximum across your combined fleet of containers.

AutoScalingPolicy:
  Type: AWS::ApplicationAutoScaling::ScalingPolicy
  Properties:
    PolicyName: ECSScalingBlogPolicy
    PolicyType: StepScaling
    ScalingTargetId:
      Ref: AutoScalingTarget
    StepScalingPolicyConfiguration:
      AdjustmentType: ChangeInCapacity
      Cooldown: 60
      MetricAggregationType: Average
      StepAdjustments:
      - MetricIntervalLowerBound: 0
        ScalingAdjustment: 2
      - MetricIntervalUpperBound: 0
        ScalingAdjustment: -1
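
Once the stack is deployed, you can sanity-check what Application Auto Scaling has registered using the AWS CLI. The cluster and service names below are placeholders for your own:

aws application-autoscaling describe-scaling-policies \
  --service-namespace ecs \
  --resource-id service/your-cluster-name/your-service-name

The output should include the step configuration above along with the CloudWatch alarm attached to the policy.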

Wrapping It Up

Amazon Web Services continues to provide excellent resources for automation, elasticity and virtually unlimited scalability. As you can see, with a couple of solid examples in hand you can very quickly build on-demand elasticity and inherent fault tolerance into your architecture. After you have your tasks auto scaling, I recommend you check out the documentation on how to scale your container instances as well, to provide the same benefits to your ECS cluster itself.

Deploying Microservices? Let mu help!

With support for ECS Application Auto Scaling coming soon, Stelligent mu offers the fastest and most comprehensive platform for deploying microservices as containers.

Want to learn more about mu from its creators? Check out the DevOps in AWS Radio podcast or find more posts on our blog.

We’re Hiring!

Like what you’ve read? Would you like to join a team on the cutting edge of DevOps and Amazon Web Services? We’re hiring talented engineers like you. Click here to visit our careers page.


WordPress, Mu, and You

Today, we’re going to give you the “Hello, world” of web stacks and explain how you could run WordPress with mu. This is the fifth post in Stelligent’s blog series on mu, where we’ll lay out a path for using mu to run WordPress and try to hit a target that’s often hard to find in that part of our world: managing a WordPress stack with infrastructure as code, utilizing a continuous delivery pipeline. To make this easier to follow, we’re going to keep the feature list short and simple. Look for future posts in the Stelligent blog where we’ll talk about adding more functionality.

Why would I want to run WordPress with mu?

mu will take care of a lot of nice things for you. It creates a CodePipeline stack that will manage your deployment workflow, and you’ll have test and production environments out of the box. You can inspect changes in your test instance before you let them continue through the pipeline.  It will manage your databases in RDS, so your backend data layer gets its own high availability and scaling that’s independent of your other resources; your test and production data are also kept separate. Each environment will run in Amazon’s EC2 Container Service, behind a load balancer, so the pipeline can roll out new containers without bringing the old ones down. When it’s all done, you can even export CloudFormation templates for your infrastructure and manage them on your own — you’re never locked into using mu!

We’re going to deploy a Docker container from the official WordPress image.  How do we get there? mu will manage a CodePipeline stack that orchestrates everything: it creates a custom image with your web data and the official image, then uses ECS to deploy a test instance for you to inspect, and then — if you approve it — rolls it out to production. Because we’re using mu, all of this will be done through code, and we’ll use GitHub to make that code available to CodePipeline.

Stelligent created mu to simplify using microservices on AWS. It gives you a simple, flexible front-end to some really sophisticated services that can seem bewildering until you’ve gained a lot of experience with them. All those advanced products from AWS are themselves hiding the complexity of running very robust services across data centers around the world. There is vast machinery hidden behind this relatively simple command-line interface. You benefit from all of this with a really simple process for deploying your WordPress site, and you get to do it with tools native to most developers: a code editor and a command line.

Getting Started

Let’s get down to the nitty gritty. To get started, fork our repository into your own GitHub account, then clone it to your workstation. Click the “Fork” button in the top right corner of the page at stelligent/mu-wordpress.

Next, clone the new repo from your account to your local workstation:

git clone _your_fork_of_mu-wordpress_
cd mu-wordpress

In this directory, you’ll find a few files. Edit mu’s main config file, mu.yml, changing pipeline.source.repo to point to your own GitHub account instead of “stelligent”:

pipeline:
  source:
    provider: GitHub
    repo: _your_github_username_/mu-wordpress

Commit your changes and push them back up to your GitHub account:

git add mu.yml
git commit -m 'first config' && git push

You’re ready to start up your pipeline:

mu pipeline up

mu will ask you for a GitHub token. CodePipeline uses it to watch your repo for changes so that it can automatically deploy them. Create a new token in your own GitHub account and grant it the “repo” and “admin:repo_hook” scopes. Save it somewhere, like a nice password manager, and then provide it at mu’s prompt.

Now you can watch your pipeline get deployed:

mu pipeline logs -f

Give it a little time – it will probably take about 10 minutes for the pipeline to deploy all the resources. When it’s done, you’ll have your first environment, “test”:

mu env list

You should see a table like this, but it won’t say CREATE_COMPLETE under each environment until they’re done.

+-------------+-----------------------+---------------------+-----+
| ENVIRONMENT |         STACK         |       STATUS        | ... |
+-------------+-----------------------+---------------------+-----+
| test        | mu-cluster-test       | CREATE_COMPLETE     | ... |
+-------------+-----------------------+---------------------+-----+

On to WordPress!

Now that you have a test environment, you can initialize WordPress and make sure it’s working. Start by inspecting “test”:

mu env show test

You’ll see a block at the top that includes “Base URL”:

Environment:    test
Cluster Stack:  mu-cluster-test (UPDATE_IN_PROGRESS)
VPC Stack:      mu-vpc-test (UPDATE_COMPLETE)
Bastion Host:   1.2.3.4
Base URL:       http://mu-cl-some-long-uuid.us-east-1.elb.amazonaws.com

Append “/wp-admin” to that and load the URL in your browser:

http://mu-cl-some-long-uuid.us-east-1.elb.amazonaws.com/wp-admin

Follow the instructions there to set up a WordPress admin user and initialize the database.

First WordPress admin initialization page

Deploy to “production”

If it looks to you like it’s working, load up your CodePipeline console. You can find the URL for it by asking mu for information on your service:

mu service show | head

The first line of output will give you the URL for your pipeline stack:

Pipeline URL:   https://console.aws.amazon.com/codepipeline/home?region=...

Load that page in your web browser. Find the “Production” stage. The first step there is “Approve”, and you’ll see a button labeled “Review”. Click it, enter a comment (any text will do), and click the “Approve” button:

Approve or reject a revision in CodePipeline

CodePipeline will deploy a new copy of the container it built into your “prod” environment. When it’s done, you can do the same thing you did with test to initialize WordPress and inspect your new web site. When you’re ready, try these commands to watch it go and get the site URL:

mu pipeline logs -f
mu env show prod | head

Behind the scenes

If you’ve made it this far, I’ll bet you’re wondering what mu is actually doing for you. Let’s take a look. mu brings together a handful of Amazon products: CloudFormation, CodePipeline, CodeBuild, ECR, ECS, Application Load Balancers, and RDS. How do they all fit together?

  • GitHub stores your infrastructural code.
  • mu acts as your front-end to AWS by generating and applying CloudFormation templates, orchestrated by CodePipeline.
  • CodeBuild combines your content with the official WordPress Docker image and stores the new image in Amazon ECR; the pipeline then deploys it through ECS.
  • An ECS cluster is run for each environment we define: in this case, “test” and “prod”.
  • An AWS ALB sits in front of each cluster.
  • Your WordPress database will be provided by an Amazon RDS cluster, one for each environment. Each runs Aurora, Amazon’s highly optimized clone of MySQL.


Architectural diagram of mu-wordpress

Let’s talk about how your information flows through those pieces. AWS CodePipeline orchestrates all the other services in AWS. It’s used to give you a continuous delivery pipeline:

AWS CodePipeline pipeline created by mu

  1. It watches your GitHub repo for changes and automatically applies them shortly after you push.
  2. AWS CodeBuild uses buildspec.yml to run any custom steps you add there.
  3. AWS CodeBuild generates your own Docker image by combining the results of the last step with the official WordPress image and storing it in Amazon ECR.
  4. Your container is deployed to your “test” environment.
  5. You manually inspect your container and approve or reject it.
  6. If you approve it, your container is deployed to your “prod” environment.


Wrapping it up

If you want to customize your WordPress site, add content to the html directory of your repo. Anything there will be installed with WordPress as the container is built, so you have a simple way to add content to your site’s wp-content directory by adding new plugins or themes to your own codebase. Take a look at your mu.yml file, too – it includes comments that explain the settings used and suggest some things you might want to change, like the instance size deployed to each environment.
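
For example, assuming the html directory maps to WordPress’s document root (check the repo’s Dockerfile to confirm the exact path), vendoring a plugin into your codebase might look like this:

cd mu-wordpress
curl -L -o hello-dolly.zip https://downloads.wordpress.org/plugin/hello-dolly.zip
unzip hello-dolly.zip -d html/wp-content/plugins/
rm hello-dolly.zip
git add html
git commit -m 'Add the Hello Dolly plugin'
git push

Once the push lands, the pipeline rebuilds the image and the plugin shows up in your test environment for review.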

We’ve shown a fast and simple way to get WordPress running at AWS. There’s a lot of tech running behind the scenes, but mu makes it easy to get started and navigate that complexity. There’s a lot more we could do: optimizing WordPress and making your stack robust are extensive topics that deserve their own articles, and much has been written about them elsewhere. Look for future posts on the Stelligent blog as we improve our mu-wordpress code, too.

We’re Hiring!

Are you passionate about working with the latest AWS technologies? Are you interested in helping build tools to help automate deployments? Do you want to engage in spirited debate about how to pronounce Greek letters? Stelligent is hiring!

Introduction to Amazon Lightsail

At re:Invent 2016, Amazon introduced Lightsail, the newest addition to the list of AWS compute services. It is a quick and easy way to launch a virtual private server within AWS.

As someone who moved into the AWS world from an application development background, this sounded pretty interesting to me. Getting started with deploying an app can be tricky, especially if you want to do it with code and scripting rather than going through the web console. CloudFormation is an incredible tool, but I can’t be the only developer to look at the user guide for deploying an application and then decide that doing my demo from localhost wasn’t such a bad option. There is a lot to learn there, and it can be frustrating because you just want to get your app up and running, but before you can even start working on that you have to figure out how to create your VPC correctly.

Lightsail takes care of that for you. The required configuration is minimal: you pick a region, an allocation of compute resources (memory, CPU, and storage), and an image to start from. They even offer images tailored to common developer setups, so it is possible to just log in, download your source code, and be off to the races.

No, you can’t use Lightsail to deploy a highly available, load-balanced application, but if you are new to working with AWS it is a great way to get a feel for what you can do without being overwhelmed by all the possibilities. Plus, once you get the hang of it, you have a better foundation for branching out to some of the more comprehensive solutions offered by Amazon.

Deploying a Node App From GitHub

Let’s look at a basic example. We have source code in GitHub, and we want it deployed on the internet. Yes, this is the monumental kind of challenge that I like to tackle every day.

Setup

You will need:

  • An AWS Account
    • At this time Lightsail is only supported in us-east-1 so you will have to try it out there.
  • The AWS Command Line Interface
    • The Lightsail CLI commands are relatively new so please make sure you are updated to the latest version
  • About ten minutes

Step 1 – Create an SSH Key

First of all, let’s create an SSH key for connecting to our instance. This is not required; Lightsail has a default key that you can use, but it is generally better to avoid using shared keys.
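
A minimal sketch with the AWS CLI follows; the key name is arbitrary, so use whatever suits you:

aws lightsail create-key-pair \
  --key-pair-name lightsail-demo \
  --query privateKeyBase64 \
  --output text > lightsail-demo.pem
chmod 400 lightsail-demo.pem

Despite the field name, the CLI typically writes the key out as PEM text; if your version returns raw base64, decode it before use.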

Step 2 – Create a User Data Script

The user data script is how you give instructions to AWS to tailor an instance to your needs. This can be as complex or simple as you want it to be. For this case, we want our instance to run a Node application that is in GitHub. We are going to use Amazon Linux for our instance, so we need to install Node and Git, then pull down the app from GitHub.

Take the following snippet and save it as userdata.sh. Feel free to modify it if you have a different app that you would like to try deploying.
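
Here is a minimal sketch; the NodeSource setup URL and the repository path are assumptions, so substitute the app you actually want to deploy:

#!/bin/bash
# Install Node.js (via the NodeSource repo) and Git on Amazon Linux
curl --silent --location https://rpm.nodesource.com/setup_6.x | bash -
yum install -y nodejs git
# Pull down the app, install its dependencies, and start it
git clone https://github.com/your-github-username/your-node-app.git /opt/app
cd /opt/app
npm install
npm start &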

A user data script is not the only option here. Lightsail supports a variety of preconfigured images. For example, it has a WordPress image, so if you needed a WordPress server you wouldn’t have to do anything but launch it. It also supports creating an instance snapshot, so you could start an instance, log in, do any necessary configuration manually, and then save that snapshot for future use.

That being said, once you start to move beyond what Lightsail provides, you will find yourself working with instance user data for a variety of purposes, and it is nice to get a feel for it with some basic examples.

Step 3 – Launch an Instance

Next we have to actually create the instance. We simply call create-instances and refer to the key and user data script we just made.
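
A sketch of the call is below; the instance name is our choice, and the blueprint and bundle IDs are examples (you can list the valid values with aws lightsail get-blueprints and aws lightsail get-bundles):

aws lightsail create-instances \
  --instance-names "node-app-demo" \
  --availability-zone us-east-1a \
  --blueprint-id amazon_linux \
  --bundle-id nano_1_0 \
  --user-data file://userdata.sh \
  --key-pair-name lightsail-demo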

This command has several parameters so let’s run through them quickly:

  • instance-names – Required: This is the name for your server.
  • availability-zone – Required: An availability zone is an isolated datacenter in a region. Since Lightsail isn’t concerned with high availability deployments, we just have to choose one.
  • blueprint-id – Required: The blueprint is the reference to the server image.
  • bundle-id – Required: The set of specs that describe your server.
  • [user-data] – Optional: This is the file we created above, passed with the file:// prefix. If no script is specified, your instance will have the functionality provided by the blueprint, but no capabilities tailored to your needs.
  • [key-pair-name] – Optional: This is the private key that we will use to connect to the instance. If this is not specified, there is a default key that is available through the Lightsail console.

It will take about a minute for your instance to be up and running. If you have the web console open, you can see when it is ready.

Once it is running, it is time to check out our application…

Or not.

Step 4 – Troubleshooting

Let’s log in and see where things went wrong. We will need the key we created in the first step and the IP address of our instance. You can see that in the web console, or pull the instance data with another CLI command.
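
A sketch, reusing the names from the earlier steps:

# Look up the instance's public IP address
aws lightsail get-instance \
  --instance-name node-app-demo \
  --query instance.publicIpAddress \
  --output text

# SSH in with the key from Step 1 (Amazon Linux's default user is ec2-user)
ssh -i lightsail-demo.pem ec2-user@<instance-ip>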

When you are troubleshooting any sort of AWS virtual server, a good place to start is by checking out /var/log/cloud-init-output.log.

cloud-init-output.log contains the output from the instance launch commands. That includes the commands run by Amazon to configure the instance as well as any commands from your user data script. Let’s take a look…

Ok… that actually looks like it started the app correctly. So what is the problem? Well, if you looked at the application linked above and actually read the README (which, frankly, sounds exhausting), you probably already know…

If we take a look at the firewall settings for the instance networking, we can see the problem: only the default ports (SSH and HTTP) are open, and our application is listening on a different port.

Step 5 – Update The Instance Firewall

We can fix this! AWS is managing the VPC that your instance is deployed in, but you still have the ability to control access. We just have to open the port that our application is listening on.
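
One way to do that from the CLI, assuming the app listens on port 3000 (use whatever port your application actually binds to):

aws lightsail open-instance-public-ports \
  --instance-name node-app-demo \
  --port-info fromPort=3000,toPort=3000,protocol=TCP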

Then if we reload the page: Success!


Wrapping Up

That’s it, you are deployed! If you’re familiar with Heroku, you are probably not particularly impressed right now. But if you have tried to use AWS to script out a simple app deployment in the past, and got fed up the third time your CloudFormation stack rolled back because your subnets were configured incorrectly, I encourage you to give Lightsail a shot.

Did you find this post interesting? Are you passionate about working with the latest AWS technologies? If you are, Stelligent is hiring and we would love to hear from you!

More Info

Lightsail Announcement

Full Lightsail CLI Reference

Screencast on AWS Elastic Beanstalk

Amazon Web Services released Elastic Beanstalk, their Platform as a Service offering, on Wednesday, January 19th. I've gotten an opportunity to play with it and I'm quite impressed. I created a seven-minute screencast that takes you through the steps to deploy and configure an application and environment using Elastic Beanstalk. In this screencast, you'll see how easy it was to get a Hudson CI server up and running in an EC2 environment. Furthermore, Elastic Beanstalk provides automatic scaling, monitoring, and configuration right 'out of the box'. It's worth checking out.