Recently, our Chief Architect, Casey Lee, and I – along with 48 of our colleagues at Stelligent – were at the AWS re:Invent 2017 conference in Las Vegas, NV. We were particularly proud as Stelligent was announced as an AWS Premier Partner, along with several other significant company milestones.

There were over 75 significant product announcements at the conference. Of these, roughly 20 were new DevOps-related features announced at or just prior to re:Invent. Instead of covering all of these announcements with a broad brush, Casey and I decided to choose the top 5 — from our perspective — and provide a bit more detail on why we think they matter to the DevOps community.


Figure 1 – CEO of AWS, Andy Jassy at the AWS re:Invent 2017 keynote in Las Vegas, NV on November 29, 2017 

On Wednesday, toward the end of his keynote, AWS CEO Andy Jassy quoted the lyrics to an old Tom Petty song: “The waiting is the hardest part”. Andy was referring to IoT, but it got me thinking more about DevOps and how a primary aim of DevOps is reducing the waiting for customers. In my view, the key purpose of DevOps is to reduce that waiting by increasing the speed of feedback between customers and developers.


Figure 2 – DevOps Speeds up Feedback [Source]

DevOps is a portmanteau of “Development” and “Operations”, but it’s really about the entire value stream. What you see in Figure 2 is similar to what AWS shares in some of its DevOps talks — relating DevOps to the software development lifecycle. On one side you have customers; on the other, developers. A developer comes up with an idea for a new feature, implements it, and then puts it through a process of building, testing, and releasing until it gets delivered to production, where your customers actually start using it. It’s only once it gets into the hands of your customers that you start to learn from it: you can gather usage data, get direct feedback from customers, and make informed decisions on what to work on next. Based on this, you might decide to update or improve the feature, or even develop a new one. And this is where the feedback loop starts again.

There are two key points to consider: 

  • How fast you’re able to get through this feedback loop determines how responsive you can be to customers and how innovative you are.
  • From your customer’s perspective, you’re only delivering value when you’re spending time on developing high-quality features.

Therefore, you want to increase the time you spend developing high-quality features and decrease the time you spend building, testing, and releasing software. Any efficiency you can push into the middle of this process to tighten feedback loops while delivering high-quality software is DevOps, whether that comes from changes to culture, organization, process, or tooling. Improving anything in this feedback loop is the essence of DevOps.

Consequently, the AWS re:Invent announcements we will be focusing on in this post are the DevOps-related features that best help speed up these feedback loops.  


The first set of exciting announcements from re:Invent relates to the Amazon Elastic Container Service (Amazon ECS). Amazon ECS is a container orchestration service for running Docker containers. Previously, the orchestration service itself was fully managed by AWS, but you had to provision and scale your own EC2 instances to run the container workloads. At re:Invent, a new service was announced: AWS Fargate, which provides a managed solution for running your container workloads with Amazon ECS, removing the requirement to manage your own EC2 instances.

As shown in Figure 3, the configuration of ECS clusters, services, and task definitions doesn’t change with AWS Fargate. The only change is that when launching your services and tasks, you choose which launch type to use:

  • EC2: in this mode, the tasks are launched on EC2 container instances that you manage and register with the ECS cluster 
  • FARGATE: in this mode, the tasks are launched on infrastructure that is managed by AWS 
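To make the difference concrete, here is a minimal CloudFormation sketch of a service using the FARGATE launch type. This is illustrative only: the resource names, container image, and subnet/security group IDs are placeholders, and a real template would also need the cluster and IAM roles.

```yaml
# Hypothetical sketch of a Fargate task definition and service.
TaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: my-app
    RequiresCompatibilities:
      - FARGATE
    NetworkMode: awsvpc            # required for Fargate; each task gets its own ENI
    Cpu: '256'                     # 0.25 vCPU
    Memory: '512'                  # 512 MB
    ContainerDefinitions:
      - Name: web
        Image: nginx:latest        # placeholder image
        PortMappings:
          - ContainerPort: 80

Service:
  Type: AWS::ECS::Service
  Properties:
    Cluster: !Ref Cluster
    TaskDefinition: !Ref TaskDefinition
    DesiredCount: 2
    LaunchType: FARGATE            # switch to EC2 to run on your own container instances
    NetworkConfiguration:
      AwsvpcConfiguration:
        Subnets:
          - subnet-0123456789      # placeholder
        SecurityGroups:
          - sg-0123456789          # placeholder
```

Note that aside from `RequiresCompatibilities`, `LaunchType`, and the `awsvpc` networking, this looks just like an EC2-backed ECS service, which is the point of Figure 3.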


Figure 3 –  AWS Fargate [Source]  

Let’s review what makes AWS Fargate so significant: 

  • No Servers – Removing the need to manage EC2 instances greatly simplifies the approach to running container workloads on AWS. Activities such as AMI baking, OS patching, system backups, and log management for the EC2 container instances are no longer your responsibility. Additionally, managing the capacity of the EC2 container instances and properly scaling the cluster are no longer a consideration. Just launch your containers as ECS tasks and let AWS Fargate manage the infrastructure that the workload runs on. One thing to be aware of: AWS Fargate is currently only available in the us-east-1 region. 
  • Task Networking – One feature announced a couple of weeks prior to re:Invent was task networking. With this feature, each ECS task has an elastic network interface (ENI) attached to it, giving each task its own IP address in your VPC and avoiding the need for port mapping. Additionally, this allows security groups to be configured at the task level, enabling finer-grained control over network access among containers. On its own, this feature wasn’t very interesting due to the limits on how many ENIs can be attached to an EC2 instance. With AWS Fargate, however, those limits are no longer an issue and you can leverage task networking at scale. 
  • Launch Times – If you are using ECS in a continuous delivery pipeline, there is a good chance you are not only managing the starting and stopping of ECS tasks, but also the launching and termination of EC2 instances. This can add significant time to your pipeline executions. With AWS Fargate, there is no more waiting for EC2 instances to launch. 
  • Pricing – With AWS Fargate, you pay per second for the vCPU ($0.0506 per vCPU-hour) and memory ($0.0127 per GB-hour) reserved for each ECS task. Assuming a container that can run in 512MB with 0.25 vCPU, the chart below shows that the pricing for AWS Fargate will likely be higher than managing your own EC2 instances, but not by an unreasonable amount. Additionally, the pricing for running only a couple of containers is less on AWS Fargate than on EC2 instances. Also, keep in mind there are other variables, such as launch and scale-out/scale-in time, that you pay for with EC2 instances but that are not a factor in the AWS Fargate model. 
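As a rough sanity check of those rates (assuming the memory rate is per GB-hour, as in the published pricing), a task reserving 0.25 vCPU and 512 MB would cost roughly:

```latex
0.25\ \text{vCPU} \times \$0.0506 \;+\; 0.5\ \text{GB} \times \$0.0127
  = \$0.01265 + \$0.00635
  = \$0.019 \text{ per hour}
  \approx \$13.87 \text{ per month (730 hours)}
```

That is in the same ballpark as a small EC2 instance, which matches the comparison shown in Figure 4.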


Figure 4 – Fargate/ECS Price Comparison

The next exciting announcement related to Amazon ECS is Amazon Elastic Container Service for Kubernetes (Amazon EKS). With Amazon EKS, you get a managed cluster of Kubernetes masters running across multiple Availability Zones. This removes the need for you to manage cluster configuration, handle software upgrades, schedule backups of the masters, or manage etcd; the entire control plane is managed by the EKS service. You are still responsible for provisioning and scaling the worker EC2 instances, as well as installing and configuring the kubelet for your EKS cluster. However, an EKS-optimized AMI based on Amazon Linux is coming soon to simplify this process.


Figure 5 – Elastic Container Service for Kubernetes (EKS) [Source]

Let’s look at some highlights from the announcement of the EKS service: 

  • VPC Networking – Previously, running Kubernetes on AWS often required an overlay network like Flannel or Weave to give each pod its own unique IP address, which made working with VPC resources challenging. EKS addresses this with an open-sourced CNI plugin that creates ENIs and assigns VPC IP addresses as needed, giving pods native VPC networking capabilities. 
  • Kubectl + IAM – EKS provides a Kubernetes API that authenticates via IAM. EKS then delegates authorization to the RBAC capabilities that Kubernetes users are already familiar with. 


Figure 6 – Kubectl + IAM [Source]

  • Additional Integrations: 
    • EKS can use an IAM service role to manage ELB resources for your applications.
    • EKS will send logs from the master to CloudWatch logs. 
    • Support for enabling kube-dns and dashboard on the EKS master 

There are a few things to be aware of regarding EKS. First, it is still in preview, so you will need to request access to have it enabled in your AWS account. Also, there are currently no CloudFormation resources for defining EKS clusters. Finally, support for running EKS workloads on AWS Fargate has been announced and is expected to become available in 2018.


AWS Cloud9 is a cloud IDE for writing, running, and debugging code. With Cloud9, you can code with only your web browser. There is no need to download software, configure your environment, or set up your IDE on each of your computers. You can collaborate in the same environment with other developers while seeing each other’s changes in real time, which I think raises the bar when it comes to pair programming and code reviews. You can also code in nearly any programming language you choose, and it supports key bindings from popular editors such as Vim, Emacs, and Sublime Text.

You can get a new environment up and running in less than one minute by going to the Cloud9 Console and clicking Create environment. While you can use the pre-selected defaults, you can also configure your EC2 instance type, auto-hibernate settings, IAM role, VPC, and subnet – as shown in Figure 7.


Figure 7 – Cloud9 Configuration

In less than one minute, it launches your environment and you can immediately start writing code (as shown in Figure 8). You can also configure your environment settings such as your color scheme, key bindings, AWS, project, and user settings. 


Figure 8 – Cloud9 IDE    

From the perspective of speeding up feedback loops, there are a couple of key benefits to Cloud9. The first is real-time collaboration: being able to work with other developers in real time on the same code. The second is that most tools are already installed and configured (e.g., Git and the AWS CLI), which saves a lot of time, and you can move from computer to computer and access the same environments and settings. All of this eliminates unnecessary waiting and lets developers focus on developing features for customers.

For Cloud9, AWS charges you for the EC2 instance and the EBS storage. Since you can choose different EC2 instance types and the amount of time you use Cloud9 may vary, prices are variable. Fortunately, AWS provides a typical example pricing chart (as shown in Figure 9):


Figure 9 – Cloud9 Pricing Example [Source]    

A few things to keep in mind:  

  • Cloud9 runs on an EC2 instance; you’re still responsible for updating that instance 
  • There are no automatic backups, so make sure you’re storing your code in a version-control repository 
  • Cloud9 only runs while connected to the Internet; there is no local mode 


AWS CodeBuild is a managed service for building, testing, and packaging software. It provides capabilities similar to Jenkins without requiring you to manage the infrastructure, configuration, patching, backups, and scaling. A week before re:Invent, it was announced that AWS CodeBuild can now access resources in your VPC. This was previously a significant limitation that often required running Jenkins on EC2 instances inside the VPC as a workaround. Now you can use AWS CodeBuild in place of Jenkins in your pipeline and thereby decrease the administration time and cost of the pipeline. Let’s look at some use cases for AWS CodeBuild now that VPC connectivity is possible:

  • AMI Baking – Baking an AMI with a tool like Packer requires the ability to SSH into a temporary EC2 instance to perform configuration before taking a snapshot of the instance for the new AMI.  Now AWS CodeBuild can be used to create AMIs via EC2 instances running in your VPC. 
  • Database Configuration – A common use case we see is the need to run tools like Liquibase against a database to perform database schema and data migrations as a part of a continuous delivery pipeline.  Generally, these databases are on private subnets in the VPC.  AWS CodeBuild can now be used to perform these database configurations. 
  • Automated Testing – A critical step of a continuous delivery pipeline is the implementation of automated testing against the new software. Oftentimes, the software is provisioned on infrastructure running on private subnets within a VPC and therefore requires testing from inside the VPC. AWS CodeBuild can now be configured to drive these types of tests against the new software and infrastructure as a part of a continuous delivery pipeline. 
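Enabling these use cases comes down to adding a `VpcConfig` block to the build project. The following CloudFormation sketch is illustrative only; the project name, role, build image, and VPC/subnet/security group IDs are placeholders.

```yaml
# Hypothetical sketch of a VPC-connected CodeBuild project.
BuildProject:
  Type: AWS::CodeBuild::Project
  Properties:
    Name: db-migration                      # placeholder name
    ServiceRole: !GetAtt CodeBuildRole.Arn  # role defined elsewhere
    Source:
      Type: CODEPIPELINE
    Artifacts:
      Type: CODEPIPELINE
    Environment:
      Type: LINUX_CONTAINER
      ComputeType: BUILD_GENERAL1_SMALL
      Image: aws/codebuild/java:openjdk-8   # placeholder build image
    VpcConfig:                              # new: run builds inside your VPC
      VpcId: vpc-0123456789
      Subnets:
        - subnet-0123456789                 # private subnet with a NAT route out
      SecurityGroupIds:
        - sg-0123456789
```

With this in place, the build container can reach private resources such as an RDS instance or an internal ELB, just as a Jenkins agent in the VPC could.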

The other significant announcement for AWS CodeBuild was support for caching. Building software generally requires downloading a set of tools and dependencies needed to successfully build, test, and package the software. Examples of these dependencies include JAR files resolved for a Maven project or Ruby gems from a Gemfile. Resolving and downloading these dependencies can consume a significant amount of time in a software build. Now with AWS CodeBuild, you can configure caching of artifacts from your build for future builds to reuse. This can significantly decrease your build times and thereby reduce the lead time of your pipelines. Using this feature requires configuring two things:

  • Declare the paths to be cached in your buildspec.yml 


Figure 10 – CodeBuild buildspec.yml [Source]

  • Specify the S3 bucket and path prefix to use for storing the cached artifacts within the CodeBuild project definition 



Figure 11 – CodeBuild Enable Caching in S3 [Source]

Both of these capabilities are available now in CloudFormation on the AWS::CodeBuild::Project resource. 
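Putting the two together, a sketch for a Maven project might look like the following. The bucket name is a placeholder, and the cache path assumes builds run as root (the CodeBuild default), so the Maven local repository lives under `/root/.m2`.

```yaml
# buildspec.yml – declare which paths to cache between builds
version: 0.2
phases:
  build:
    commands:
      - mvn package
cache:
  paths:
    - '/root/.m2/**/*'    # Maven local repository

# In the AWS::CodeBuild::Project resource, point the cache at S3
# (shown here as a commented fragment so this file stays valid YAML):
#   Cache:
#     Type: S3
#     Location: my-artifact-bucket/codebuild-cache
```

On the first run, CodeBuild uploads the declared paths to the S3 location; subsequent builds download them before the build phase, so Maven resolves most dependencies locally.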


You can now use the Serverless Application Model (SAM) to deploy Lambda functions using Canary, Linear or All-at-Once deployment patterns.


Figure 12 – SAM template for deploying Serverless applications [Source]

AWS CodeDeploy is built into SAM to provide gradual code deployments, enabling the following features:

  • “Deploy new versions of your Lambda function and automatically create aliases that point to the new version.
  • Gradually shift customer traffic to the new version until you are satisfied it is working as expected or roll back the update.
  • Define pre-traffic and post-traffic test functions to verify the newly deployed code is configured correctly and your application operates as expected.
  • Roll back the deployment if CloudWatch alarms are triggered.” [Source]


Figure 13 – Deployment Preference Type for Deploying Lambda via CodeDeploy [Source]

While you can deploy Lambda-based applications directly from the AWS CLI, using AWS CodeDeploy gives you all of these added features.
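For illustration, a minimal SAM function using these features might look like the following sketch. The handler, code path, alarm, and hook functions are hypothetical resources you would define elsewhere in the template.

```yaml
# Hypothetical sketch of a SAM function with a gradual deployment preference.
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: index.handler
    Runtime: nodejs6.10
    CodeUri: ./src                      # placeholder code location
    AutoPublishAlias: live              # publish a version and shift the alias to it
    DeploymentPreference:
      Type: Canary10Percent5Minutes     # or Linear10PercentEvery1Minute, AllAtOnce
      Alarms:
        - !Ref ErrorAlarm               # roll back if this CloudWatch alarm fires
      Hooks:
        PreTraffic: !Ref PreTrafficHookFunction    # validate before traffic shifts
        PostTraffic: !Ref PostTrafficHookFunction  # validate after traffic shifts
```

With `Canary10Percent5Minutes`, CodeDeploy sends 10% of traffic to the new version, waits five minutes while watching the alarms, and then shifts the remaining 90% or rolls back.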

Why it Matters

While powerful tools exist, serverless applications have felt like second-class citizens when it comes to deployments, and most approaches have seemed like workarounds. Being able to deploy Lambda functions with a service like CodeDeploy should help reduce many of the previous challenges of deploying serverless applications.


During one of the breakout sessions, DEV 317 – Deep Dive on AWS CloudFormation, it was announced that support for drift detection will be available in 2018. This feature allows you to check for differences between the intended state of your AWS resources, as described in a CloudFormation stack, and their actual state. It provides a mechanism for detecting manual changes that may have occurred to resources created by CloudFormation. Drift detection not only detects that a change occurred, but also provides details about which attributes have changed.

For organizations that embrace immutable infrastructure or are using CloudFormation as a mechanism to perform governance of the resources that teams are creating in the AWS account, automated drift detection can provide a means to identify resources that may now be out of compliance. 


Figure 14 – CloudFormation Drift Detection [Source]

One additional announcement made the week before re:Invent was the ability to supply parameters for CloudFormation stacks via Systems Manager (SSM) Parameter Store. By using SSM parameter types, you can maintain configuration data separately from your CloudFormation stacks.


Figure 15 – CloudFormation Parameter Store parameters [Source]

Let’s look at a few use cases for SSM parameters in CloudFormation: 

  • Image ID – Store the latest ID of the image to use for EC2 instances in an SSM parameter from an AMI pipeline, and have CloudFormation look up the ID when creating or updating the stack. 
  • Endpoints – Store the endpoint of a database instance in an SSM parameter to be looked up and passed to an ECS service via environment variables. 
  • Credentials – Store database passwords in an SSM parameter to be looked up and passed on to an RDS DB instance. WARNING: currently SecureString parameter types are unsupported so this approach is not recommended until that limitation is addressed. 
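For example, the Image ID use case can be sketched as follows, assuming a hypothetical SSM parameter named `/myapp/ami-id` that your AMI pipeline keeps up to date:

```yaml
# Hypothetical sketch of resolving an AMI ID from SSM Parameter Store.
Parameters:
  LatestAmiId:
    Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>
    Default: /myapp/ami-id        # name of the SSM parameter to resolve

Resources:
  Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref LatestAmiId   # resolved from Parameter Store at create/update time
      InstanceType: t2.micro
```

Each stack update re-resolves the parameter, so the stack picks up the newest AMI without the template itself ever changing.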

Honorable Mentions 

  • Amazon Aurora Serverless Preview – With Amazon Aurora Serverless, you can now scale your database infrastructure based on load.  This allows you to only pay for the memory and compute resources you need.  Additionally, this provides a much faster startup time for database instances.  These benefits come at a cost of about a 50% premium over traditional Amazon Aurora instances in an always-on configuration.  However, for infrequently used databases or for supporting continuous delivery pipelines, the pay-as-you-go model could help save significant cost as well as improve the lead time of the pipelines. 
  • AWS Serverless Application Repository – a collection of serverless applications published by developers, companies, and partners in the serverless community. You can find everything from code samples and components for building web and mobile applications to back-end processing services and complete applications. 
  • API Gateway VPC endpoints – Previously, all endpoints that API Gateway routed to were required to be public. This limitation made it very difficult to expose APIs running behind ELBs on EC2 instances or on ECS services. With the announcement that API Gateway supports VPC endpoints, you can now target internal ELBs for the APIs configured in API Gateway. 
  • Amazon GuardDuty – A managed threat detection service that monitors your CloudTrail and VPC Flow Logs for malicious or unauthorized behavior in your account. This provides a simple and cost-efficient way to improve the security posture of your AWS account. Support is available now for configuring this via CloudFormation. 
  • Managed Rules for AWS WAF – Leverage WAF rules that are managed by third parties such as Alert Logic and Trend Micro. Simply subscribe to WAF rules from AWS Marketplace and associate them with your AWS WAF web ACL. Since the rules are managed by a vendor, you don’t have to worry about keeping them up to date with the latest threats. Pricing is pay-as-you-go without any subscription commitments. 
  • AWS PrivateLink for Customer and Partner Services – “With AWS PrivateLink, you can now make services available to other accounts and Virtual Private Cloud (VPC) networks that are accessed securely as private endpoints.” [Source]


From a DevOps perspective, we see AWS fleshing out its offerings, providing services and tools that help speed up feedback loops. AWS’ capabilities continue to move up the stack from primitive building blocks to managed services, and as Werner Vogels, the CTO of Amazon, said in his keynote, “(The) true premise of (the) cloud is just around the corner – soon you’ll only be writing business logic.”

The big DevOps announcements were related to making it easier to:

  • deploy to containers with Fargate and EKS
  • write and debug code via AWS Cloud9
  • build code via AWS CodeBuild enhancements such as VPC integration and caching
  • deploy serverless components via AWS CodeDeploy
  • use new CloudFormation features to automatically provision your infrastructure

We expect that all of these features will help speed up feedback loops between customers and developers. We’re excited to get to work with our customers in reducing all of the waiting!
