In this blog post, you’ll see an example of Application Auto Scaling for Amazon ECS (EC2 Container Service). Automatic scaling of the container instances in your ECS cluster has been available for quite some time, but until recently you could not scale the tasks in your ECS service with built-in AWS technology. In May 2016, Automatic Scaling with Amazon ECS was announced, which allowed us to configure elasticity into our deployed container services in Amazon’s cloud.
Developer Note: Jump to the “CloudFormation Examples” section to go straight to the code!
Why should you auto scale your container services?
Efficient and effective scaling of your microservices is the core reason to auto scale your containers. If your primary goals include fault tolerance or elastic workloads, then the combination of cloud autoscaling technology and infrastructure as code is the key to success. With AWS Application Auto Scaling, you can quickly configure elasticity into your architecture in a repeatable and testable way.
Introducing CloudFormation Support
For the first few months after this feature’s release, it was not available in AWS CloudFormation. Configuration was either a manual process in the AWS Console or a series of API calls made from the CLI or one of Amazon’s SDKs. As of August 2016, we can now manage this configuration easily using CloudFormation.
The resource types you’re going to need to work with are:

- AWS::ApplicationAutoScaling::ScalableTarget
- AWS::ApplicationAutoScaling::ScalingPolicy
- AWS::CloudWatch::Alarm
- AWS::IAM::Role
The ScalableTarget and ScalingPolicy are the new resources that configure how your ECS Service behaves when an Alarm is triggered. In addition, you will need to create a new Role that grants the Application Auto Scaling service access to describe your CloudWatch Alarms and to modify your ECS Service, such as increasing your Desired Count.
The below examples were written for AWS CloudFormation in the YAML format. You can plug these snippets directly into your existing templates with minimal adjustments necessary. Enjoy!
Step 1: Implement a Role
These permissions were gathered from various sources in the AWS documentation.
```yaml
ApplicationAutoScalingRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Statement:
      - Effect: Allow
        Principal:
          Service:
          - application-autoscaling.amazonaws.com
        Action:
        - sts:AssumeRole
    Path: "/"
    Policies:
    - PolicyName: ECSBlogScalingRole
      PolicyDocument:
        Statement:
        - Effect: Allow
          Action:
          - ecs:UpdateService
          - ecs:DescribeServices
          - application-autoscaling:*
          - cloudwatch:DescribeAlarms
          - cloudwatch:GetMetricStatistics
          Resource: "*"
```
Step 2: Implement some alarms
The below alarm will initiate scaling based on container CPU Utilization.
```yaml
AutoScalingCPUAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: Containers CPU Utilization High
    MetricName: CPUUtilization
    Namespace: AWS/ECS
    Statistic: Average
    Period: '300'
    EvaluationPeriods: '1'
    Threshold: '80'
    AlarmActions:
    - Ref: AutoScalingPolicy
    Dimensions:
    - Name: ServiceName
      Value:
        Fn::GetAtt:
        - YourECSServiceResource
        - Name
    - Name: ClusterName
      Value:
        Ref: YourECSClusterName
    ComparisonOperator: GreaterThanOrEqualToThreshold
```
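Because the high-CPU alarm above only enters the ALARM state when utilization is at or above its threshold, a second alarm with a lower threshold is typically what drives scale-in. Here is a sketch of such an alarm; the resource name (AutoScalingCPUAlarmLow) and the 20% threshold are my own assumptions, not part of the original configuration, so tune them for your workload:

```yaml
AutoScalingCPUAlarmLow:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: Containers CPU Utilization Low
    MetricName: CPUUtilization
    Namespace: AWS/ECS
    Statistic: Average
    Period: '300'
    EvaluationPeriods: '1'
    Threshold: '20'
    AlarmActions:
    - Ref: AutoScalingPolicy
    Dimensions:
    - Name: ServiceName
      Value:
        Fn::GetAtt:
        - YourECSServiceResource
        - Name
    - Name: ClusterName
      Value:
        Ref: YourECSClusterName
    ComparisonOperator: LessThanOrEqualToThreshold
```

When this alarm fires, the step adjustments in the scaling policy are evaluated relative to this alarm's 20% threshold, so the step with MetricIntervalUpperBound: 0 applies and removes a container.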
Step 3: Implement the ScalableTarget
This resource attaches Application Auto Scaling to your ECS Service and sets the limits within which it may operate. Other than your MinCapacity and MaxCapacity, these settings are essentially fixed when used with ECS. Note that the ResourceId must take the form service/&lt;cluster-name&gt;/&lt;service-name&gt;.
```yaml
AutoScalingTarget:
  Type: AWS::ApplicationAutoScaling::ScalableTarget
  Properties:
    MaxCapacity: 20
    MinCapacity: 1
    ResourceId:
      Fn::Join:
      - "/"
      - - service
        - Ref: YourECSClusterName
        - Fn::GetAtt:
          - YourECSServiceResource
          - Name
    RoleARN:
      Fn::GetAtt:
      - ApplicationAutoScalingRole
      - Arn
    ScalableDimension: ecs:service:DesiredCount
    ServiceNamespace: ecs
```
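For reference, the Fn::Join above simply builds a string of the form service/&lt;cluster-name&gt;/&lt;service-name&gt;. Assuming a hypothetical cluster named my-cluster running a service named my-service, the same property could be written statically as:

```yaml
# Hard-coded equivalent of the Fn::Join expression above
# (the cluster and service names here are hypothetical):
ResourceId: service/my-cluster/my-service
```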
Step 4: Implement the ScalingPolicy
This resource defines your exact scaling behavior: when to scale out or in, and by how much. Pay close attention to the StepAdjustments in the StepScalingPolicyConfiguration, as the documentation on them is vague.
In the below example, we are scaling up by 2 containers when the metric is at or above the alarm threshold and scaling down by 1 container when it falls below. Take special note of how MetricIntervalLowerBound and MetricIntervalUpperBound work together: each is an offset from the alarm threshold, and when unspecified they default to positive infinity for the upper bound and negative infinity for the lower bound. Because the high-CPU alarm above only fires when utilization meets or exceeds its threshold, scale-in is typically driven by a separate alarm with a lower threshold. Finally, note that these comparisons are made against the aggregated metric, meaning the Average, Minimum, or Maximum across your combined fleet of containers.
```yaml
AutoScalingPolicy:
  Type: AWS::ApplicationAutoScaling::ScalingPolicy
  Properties:
    PolicyName: ECSScalingBlogPolicy
    PolicyType: StepScaling
    ScalingTargetId:
      Ref: AutoScalingTarget
    StepScalingPolicyConfiguration:
      AdjustmentType: ChangeInCapacity
      Cooldown: 60
      MetricAggregationType: Average
      StepAdjustments:
      - MetricIntervalLowerBound: 0
        ScalingAdjustment: 2
      - MetricIntervalUpperBound: 0
        ScalingAdjustment: -1
```
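To see how the bounds act as offsets from the alarm's threshold, here is a hypothetical variation (not part of the original configuration) that scales out more aggressively the further CPU climbs past the 80% threshold from Step 2:

```yaml
StepAdjustments:
# 80% <= CPU < 90% (offsets 0 through 10 above the threshold): add 2 tasks
- MetricIntervalLowerBound: 0
  MetricIntervalUpperBound: 10
  ScalingAdjustment: 2
# CPU >= 90% (offset 10 and above): add 4 tasks
- MetricIntervalLowerBound: 10
  ScalingAdjustment: 4
```

Adjacent steps must share a boundary (here, 10), and the intervals may not overlap.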
Wrapping It Up
Amazon Web Services continues to provide excellent resources for automation, elasticity, and virtually unlimited scalability. As you can see, with a couple of solid examples in hand you can very quickly build in that on-demand elasticity and inherent fault tolerance. After you have your tasks auto scaling, I recommend you check out the documentation on how to scale your container instances as well, to provide the same benefits to your ECS cluster itself.
Deploying Microservices? Let mu help!
Support for ECS Application Auto Scaling is coming soon to Stelligent mu, the fastest and most comprehensive platform for deploying microservices as containers.
Want to learn more about mu from its creators? Check out the DevOps in AWS Radio podcast or find more posts in our blog.
Like what you’ve read? Would you like to join a team on the cutting edge of DevOps and Amazon Web Services? We’re hiring talented engineers like you. Click here to visit our careers page.