
AWS re:Invent 2017 DevOps re:Cap

Recently, our Chief Architect, Casey Lee, and I – along with 48 of our colleagues at Stelligent – were at the AWS re:Invent 2017 conference in Las Vegas, NV. We were particularly proud, as Stelligent was announced as an AWS Premier Partner along with several other significant company milestones.
There were over 75 significant product announcements at the conference. Of these, roughly 20 were new DevOps-related features announced at or prior to re:Invent. Instead of providing a broad brush across all of these announcements, Casey and I decided to choose the top 5, from our perspective, and provide a bit more detail on why we think they matter to the DevOps community.

Figure 1 – CEO of AWS, Andy Jassy at the AWS re:Invent 2017 keynote in Las Vegas, NV on November 29, 2017 

On Wednesday, toward the end of his keynote, Andy Jassy (CEO of AWS) quoted a line from an old Tom Petty song: “The waiting is the hardest part”. Andy was referring to IoT, but it got me thinking more about DevOps and how a primary reason for DevOps is reducing the waiting for customers. In my view, the key purpose of DevOps is to reduce that waiting by increasing the speed of feedback between customers and developers.
 

Figure 2 – DevOps Speeds up Feedback [Source]

DevOps is a portmanteau of “Development” and “Operations”, but it’s really about representing the entire value stream. What you see in Figure 2 is similar to what AWS shares in some of its DevOps talks: relating DevOps to the software development lifecycle. On one side you have customers, and on the other, developers. A developer comes up with an idea for a new feature, implements it, and then puts it through a process of building, testing, and releasing until it gets delivered to production, where your customers actually start using it. It’s only once it gets into the hands of your customers that you start to learn from it. You can get usage data, get direct feedback from customers, or start to make informed decisions on what to work on next. Based on this, you might decide to update or improve the feature, or even develop a new feature. And this is where the feedback loop starts again.
There are two key points to consider: 

Therefore, you want to increase the time you spend developing high-quality features and decrease the time you spend on the process of building, testing, and releasing software systems. Any efficiency you can push into the middle of this process to tighten the feedback loop while delivering high-quality software is DevOps. That could mean changes to culture, organization, process, or tooling. Improving anything in this feedback loop is the essence of DevOps.
Consequently, the AWS re:Invent announcements we will be focusing on in this post are the DevOps-related features that best help speed up these feedback loops.  

ECS  

The first set of exciting announcements from re:Invent relates to the Amazon Elastic Container Service (Amazon ECS). Amazon ECS is a container orchestration service for running Docker containers. Previously, the orchestration service itself was completely managed by AWS, but you had to provision and scale your own EC2 instances for running the container workloads. At re:Invent, a new service, AWS Fargate, was announced that provides a managed solution for running your container workloads with Amazon ECS, thereby removing the requirement to manage your own EC2 instances.
As shown in Figure 3, configuration of the ECS clusters, services, and task definitions doesn’t change with AWS Fargate. The only change is that when launching your services and tasks you choose which launch type to use: FARGATE or EC2. A minimal CloudFormation sketch of a Fargate-launched service follows Figure 3.

Figure 3 –  AWS Fargate [Source]  
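
To make that concrete, here is a minimal, hedged CloudFormation sketch of a service using the new FARGATE launch type; the cluster reference, container image, subnet IDs, and task sizing are placeholder/example values, and it assumes CloudFormation support for the new launch type in your region:

Resources:
  WebTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      RequiresCompatibilities: [FARGATE]    # run on Fargate-managed capacity instead of your own EC2 instances
      NetworkMode: awsvpc                   # required network mode for the Fargate launch type
      Cpu: '256'                            # task-level CPU units (example value)
      Memory: '512'                         # task-level memory in MiB (example value)
      ContainerDefinitions:
        - Name: web
          Image: nginx:latest               # placeholder container image
          PortMappings:
            - ContainerPort: 80
  WebService:
    Type: AWS::ECS::Service
    Properties:
      Cluster: !Ref EcsCluster              # placeholder reference to an existing ECS cluster
      LaunchType: FARGATE                   # the only change from an EC2-backed service definition
      TaskDefinition: !Ref WebTaskDefinition
      DesiredCount: 2
      NetworkConfiguration:
        AwsvpcConfiguration:
          Subnets:
            - subnet-aaaa1111               # placeholder subnet IDs in your VPC
            - subnet-bbbb2222

Depending on your setup, you may also need an ExecutionRoleArn (for pulling from Amazon ECR or shipping logs) and security groups in the AwsvpcConfiguration.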

Let’s review what makes AWS Fargate so significant: 

 

Figure 4 – Fargate/ECS Price Comparison

The next exciting announcement related to Amazon ECS is Amazon Elastic Container Service for Kubernetes (Amazon EKS). With Amazon EKS, you get a managed cluster of Kubernetes masters running in multiple Availability Zones. This removes the need for you to manage the configuration of the cluster, handle software upgrades, schedule backups of the masters, and manage etcd. The entire control plane is managed by the EKS service. You are still responsible for provisioning and scaling the worker EC2 instances, as well as installing and configuring kubelet for your EKS cluster. There is, however, an EKS-optimized AMI based on Amazon Linux coming soon to simplify this process.
 

Figure 5 – Elastic Container Service for Kubernetes (EKS) [Source]

Let’s look at some highlights from the announcement of the EKS service: 

 

Figure 6 – Kubectl + IAM [Source]

There are a few things to be aware of regarding EKS. First, it is still in preview, so you will need to request access to have it enabled in your AWS account. Also, there are currently no CloudFormation resources for defining your EKS clusters. Finally, support for using AWS Fargate to run your EKS workloads has been announced and is expected to become available in 2018.
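
Because EKS exposes a standard, upstream Kubernetes control plane, existing kubectl workflows and manifests should carry over unchanged. As a minimal sketch, assuming you already have a kubectl context configured against an EKS cluster (authenticating via IAM as shown in Figure 6), a plain Deployment manifest like the following works as-is; the names and image are arbitrary examples:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web                # arbitrary example name
spec:
  replicas: 2                    # Kubernetes schedules these pods onto your worker EC2 instances
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:latest    # placeholder container image
          ports:
            - containerPort: 80

You would apply it with kubectl apply -f deployment.yaml, exactly as you would against any other Kubernetes cluster.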

Cloud9 

AWS Cloud9 is a cloud IDE for writing, running, and debugging code. With Cloud9, you can code with only your web browser. There is no need to download software, configure your environments, or set up your IDE and its configuration on each of your computers. You can collaborate in the same environment with other developers while seeing each other’s changes in real time. This is something that I think raises the bar when it comes to pair programming and code reviews. You can also code in nearly any programming language you choose, and it supports key bindings from popular editors such as Vim, Emacs, and Sublime Text.
You can get a new environment up and running in less than one minute by going to the Cloud9 Console and clicking Create environment. While you can use the pre-selected defaults, you can also configure your EC2 instance type, auto-hibernate settings, IAM role, VPC, and subnet – as shown in Figure 7.

Figure 7 – Cloud9 Configuration
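
If you would rather script this than click through the console, many of the options shown in Figure 7 can also be expressed in a template. A minimal sketch, assuming the AWS::Cloud9::EnvironmentEC2 CloudFormation resource type is available in your region; the name, instance type, and subnet are placeholders:

Resources:
  TeamIde:
    Type: AWS::Cloud9::EnvironmentEC2
    Properties:
      Name: team-dev-environment        # placeholder environment name
      Description: Shared Cloud9 IDE for the team
      InstanceType: t2.micro            # EC2 instance type that backs the IDE
      SubnetId: subnet-aaaa1111         # placeholder subnet (this determines the VPC)
      AutomaticStopTimeMinutes: 30      # auto-hibernate the instance after 30 idle minutes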

In less than one minute, it launches your environment and you can immediately start writing code (as shown in Figure 8). You can also configure your environment settings such as your color scheme, key bindings, AWS, project, and user settings. 

Figure 8 – Cloud9 IDE    

From the perspective of speeding up feedback loops, there are a couple of key benefits to Cloud9. The first is real-time collaboration: being able to work with other developers on the same code at the same time. Second, the fact that most tools are already installed and configured (e.g. git and the AWS CLI) saves a lot of time, and you can move from computer to computer and access the same environments and settings. All of this eliminates unnecessary waiting and allows developers to focus on developing features for customers.
For Cloud9, AWS charges you for the EC2 instance and the EBS storage. Since you can choose different EC2 instance types and the amount of time you use Cloud9 may vary, prices are variable. Fortunately, AWS provides a typical example pricing chart (as shown in Figure 9):

Figure 9 – Cloud9 Pricing Example [Source]    

A few things to keep in mind:  

CodeBuild 

AWS CodeBuild is a managed service for building, testing, and packaging software. It provides similar capabilities to Jenkins without requiring you to manage the infrastructure, configuration, patching, backups, and scaling. A week before re:Invent, it was announced that AWS CodeBuild can now access resources in your VPC. This was previously a significant limitation of AWS CodeBuild, which teams often worked around by running Jenkins on EC2 instances inside the VPC. Now you can leverage AWS CodeBuild to replace Jenkins in your pipeline and thereby decrease the administration time and cost of the pipeline. Let’s look at some use cases for AWS CodeBuild now that VPC connectivity is possible:

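Whatever the use case, enabling VPC access comes down to adding a VpcConfig block to the build project. A minimal CloudFormation sketch; the project name, role, build image, and the VPC, subnet, and security group IDs are placeholders:

Resources:
  AppBuild:
    Type: AWS::CodeBuild::Project
    Properties:
      Name: app-build                       # placeholder project name
      ServiceRole: !Ref CodeBuildRole       # placeholder IAM role for the build
      Source:
        Type: CODEPIPELINE
      Artifacts:
        Type: CODEPIPELINE
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/java:openjdk-8 # example curated build image
      VpcConfig:
        VpcId: vpc-0a1b2c3d                 # VPC containing the resources the build needs to reach
        Subnets:
          - subnet-aaaa1111                 # private subnet for the build's network interfaces
        SecurityGroupIds:
          - sg-00112233                     # security group granting access to databases, internal services, etc.
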
The other significant announcement for AWS CodeBuild was support for caching. Building software generally requires downloading a set of tools and dependencies needed to successfully build, test, and package the software. Examples of these dependencies include JAR files resolved for a Maven project or Ruby gems from a Gemfile. Resolving and downloading these dependencies can consume a significant amount of a build’s time. Now with AWS CodeBuild, you can configure artifacts from one build to be cached for future builds to reuse. This can significantly decrease your build times, thereby also reducing the lead time of your pipelines. Use of this feature requires configuring two things:

 

Figure 10 – CodeBuild buildspec.yml [Source]
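
As a rough sketch of the first piece, the cache section of buildspec.yml lists the paths to persist between builds; here assuming a Maven project whose local repository lives under /root/.m2:

version: 0.2
phases:
  build:
    commands:
      - mvn package              # dependencies resolve into the local Maven repository
cache:
  paths:
    - '/root/.m2/**/*'           # persist the Maven repository so later builds skip the downloads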


 

Figure 11 – CodeBuild Enable Caching in S3 [Source]

Both of these capabilities are available now in CloudFormation on the AWS::CodeBuild::Project resource. 
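
For the second piece, telling CodeBuild where to store the cache, here is a minimal sketch of the relevant CloudFormation property; the bucket name and prefix are placeholders:

Resources:
  AppBuild:
    Type: AWS::CodeBuild::Project
    Properties:
      # ...source, artifacts, environment, and service role as usual...
      Cache:
        Type: S3                                   # store cached artifacts in S3
        Location: my-build-cache-bucket/app-build  # placeholder bucket/prefix for the cache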

CodeDeploy 

You can now use the Serverless Application Model (SAM) to deploy Lambda functions using Canary, Linear, or All-at-Once deployment patterns.

Figure 12 – SAM template for deploying Serverless applications [Source]

AWS CodeDeploy is built into SAM to provide the ability to perform gradual code deployments, enabling the following features:

Figure 13 – Deployment Preference Type for Deploying Lambda via CodeDeploy [Source]
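
As a minimal sketch of what this looks like in a SAM template, the DeploymentPreference block below shifts 10% of traffic to the new version and waits five minutes before shifting the rest; the code location, alarm, and pre-traffic hook function are placeholders:

Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs6.10
      CodeUri: s3://my-bucket/function.zip         # placeholder location of the function package
      AutoPublishAlias: live                       # publishes a version and alias so traffic can shift between versions
      DeploymentPreference:
        Type: Canary10Percent5Minutes              # canary: 10% of traffic first, the remainder after 5 minutes
        Alarms:
          - !Ref CanaryErrorsAlarm                 # placeholder CloudWatch alarm that triggers an automatic rollback
        Hooks:
          PreTraffic: !Ref PreTrafficCheckFunction # placeholder validation Lambda run before traffic shifts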

While you can deploy Lambda-based applications directly from the AWS CLI, by using AWS CodeDeploy, you get all of these added features.
Why it Matters
While there have been powerful tools for building them, serverless applications have felt like second-class citizens when it comes to deployments. Most approaches have seemed like workarounds, so being able to deploy Lambda functions with a service like CodeDeploy should help reduce many of the previous challenges of deploying serverless applications.

CloudFormation 

During one of the breakout sessions, DEV 317 – Deep Dive on AWS CloudFormation, it was announced that support for drift detection would be available in 2018. This feature will allow you to compare the intended state of your AWS resources, as described in a CloudFormation stack, against their actual state. It will provide a mechanism for detecting manual changes that may have occurred to resources created by CloudFormation. Drift detection not only detects that a change occurred, but also provides details about which attributes have changed.
For organizations that embrace immutable infrastructure, or that use CloudFormation to govern the resources teams create in an AWS account, automated drift detection can provide a means to identify resources that have fallen out of compliance.

Figure 14 – CloudFormation Drift Detection [Source]

One additional announcement made the week before re:Invent was the ability to specify parameters for CloudFormation stacks via Systems Manager Parameter Store. By using SSM parameter types, you can maintain configuration data separately from your CloudFormation stacks.

Figure 15 – CloudFormation Parameter Store parameters [Source]
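
A minimal sketch of how this looks in a template: the parameter’s default value is the name of a Parameter Store entry (a placeholder here), and CloudFormation resolves the actual value at stack create or update time:

Parameters:
  WebServerAmi:
    Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>
    Default: /myapp/prod/ami-id          # placeholder Parameter Store key holding the AMI ID
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref WebServerAmi         # resolves to the value stored in Parameter Store
      InstanceType: t2.micro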

Let’s look at a few use cases for SSM parameters in CloudFormation: 

Honorable Mentions 

Conclusion 

From a DevOps perspective, we see AWS fleshing out its offering of services and tools to help speed up feedback loops. AWS’ capabilities continue to move up the stack from primitive building blocks to managed services, and as Werner Vogels, the CTO of Amazon, said in his keynote, “(The) true premise of (the) cloud is just around the corner – soon you’ll only be writing business logic”.
The big DevOps announcements were related to making it easier to:

We expect that all of these features will help speed up feedback loops between customers and developers. We’re excited to get to work with our customers in reducing all of the waiting!

Additional Resources 
