We had over 40 people from Mphasis and Mphasis Stelligent at the AWS re:Invent 2019 conference in Las Vegas, NV. There were 77 product launches, feature releases, and services announced at the conference (and many more at “pre:Invent” in November). Among these were several DevOps-related features announced at re:Invent or during pre:Invent.

The theme of this year’s re:Invent was around transformation and how AWS has the services and tools to help enterprises and others completely transform their business models. During his keynote, Andy Jassy, CEO of AWS, described four points to this transformation:

  1. Senior leadership team conviction and alignment.
  2. Aggressive top-down goals (faster than they would otherwise do organically).
  3. Train your Builders.
  4. Don’t let paralysis stop you before you start.

As John Furrier at SiliconANGLE states, “As the cloud market changes industry structures, it is our view that many of the next-generation leaders will be new companies either born in the cloud or existing enterprises reborn in the cloud.” He went on to say that the rest will likely go out of business. Furrier adds that “Although the cloud is changing everything, it still only represents just 5.6% of total IT spending, which Gartner projects will hit $3.8 trillion this year.”

Instead of providing a broad brush across all of the announcements from re:Invent 2019, we decided to choose the top 5 that we believe will have a transformative effect on the DevOps community. Here are the top 5 (in alphabetical order):

  1. Amazon CloudWatch Synthetics – runs “canary” tests that continually exercise your application in production, even when there is no customer traffic, helping you discover issues before customers do.
  2. Amazon CodeGuru – automate code reviews and identify your most expensive lines of code.
  3. Amazon Detective – analyze and visualize security data to rapidly get to the root cause of potential security issues.
  4. Amazon EKS on AWS Fargate – serverless compute for Kubernetes.
  5. AWS IAM Access Analyzer – generates comprehensive findings that identify resources that can be accessed from outside an AWS account.

Before diving more deeply into the details of these services and features, I’d like to take a few steps back and describe what DevOps means to us. From our perspective, DevOps is about accelerating the speed of feedback between customers and builders (i.e. engineers, developers, and so forth). This notion is illustrated below.

If you think of a traditional development process, you build, test, and release software to customers. On these traditional teams, it’s usually a slow and arduous process of getting feedback from customers. This is often the result of organizational, process, cultural, and tooling barriers that throttle this feedback. This isn’t necessarily intentional but more the result of organizational inertia that has built up over time in which there isn’t a focus on speeding up effective feedback between customers and developers.  

When effectively applying DevOps practices, you compress the time by which developers get this feedback from customers while increasing the quality. You do this by breaking down organizational silos, treating everything as code, and creating fully-automated workflows that build, test, deploy, and release software to production whenever there’s a business need to do so. By getting regular, effective feedback from customers, developers are more likely to build features that customers want.

There are two key points to consider:

  • How fast you’re able to get through this feedback loop determines how responsive you can be to customers and how innovative you are.
  • From your customer’s perspective, you are only delivering value when you’re spending time on developing high-quality features.

Ultimately, DevOps is any organization, process, culture, or tooling changes that help speed up these effective feedback loops between customers and developers.

In the Developer Tools on AWS breakout session, the general manager of AWS Developer Tools, Ken Exner, illustrated a pipeline of stages and actions on a typical Amazon or AWS team. These pipelines perform actions like getting the latest code, running automated tests and static analysis, and running synthetic monitoring/tests in production – to name a few. He also described time windows, enterprise and team pipeline policies, code reviews, security scans, and other techniques they use.

A typical pipeline at Amazon and AWS

AWS provides a suite of services and tools that support authoring, sourcing, building, testing, deploying, monitoring, and releasing software features. These services and tools include AWS CodePipeline, AWS Cloud9 and IDE toolkits, AWS CodeCommit, AWS CodeBuild, Amazon CodeGuru, AWS CodeDeploy, AWS X-Ray, Amazon CloudWatch, AWS CloudFormation, AWS CDK, and the AWS Serverless Application Model. Of course, they also provide support and integrations for all the third-party tools you might be using.

AWS Developer Tools for modern software delivery

Our focus at Mphasis Stelligent is in helping customers increase the speed and safety by which they deliver their software to production. Consequently, the AWS re:Invent announcements we will be focusing on in this post are the DevOps-related features that best help safely accelerate these feedback loops.  

Amazon CloudWatch Synthetics

Amazon has been using automated synthetic testing for many years. Netflix engineering open sourced an automated canary testing tool called Kayenta as part of the Spinnaker project. The purpose of this technique is to run tests and analysis in production to learn of potential problems before end users do.

For example, you can run these automated tests to ensure a key part of your system (e.g. ordering) continues to work. If there are any errors or degradation, you learn of it as soon as possible and, in many cases, even before your end users.

CloudWatch Synthetics makes it possible to run these tests and monitor them through the CloudWatch dashboard. What’s more, you can integrate Synthetics with AWS X-Ray to accelerate your debugging process.

CloudWatch Synthetics makes it easy to:

  • Run web tests
  • Monitor APIs
  • Get screenshots of behavior
  • Get alerted through Alarms

Amazon CloudWatch Synthetics is available in preview in the following public AWS Regions: US East (N. Virginia), US East (Ohio), and EU (Ireland).

Running a Canary Test

To get started, go to the Amazon CloudWatch console, then click on Synthetics on the left panel. The page displayed should look similar to the one below. From here, you will click on the Create canary button.

Create a canary test from the CloudWatch Synthetics Console

Next, you will go through the steps of creating a canary by either selecting a blueprint from AWS, uploading a script, or importing from S3.

Choose the type of canary test you wish to run

Then, you will give your canary a name, enter the application or endpoint URL, and modify the provided script as necessary. You can also define whether you only want the canary to run once or on a continuous schedule.

Choose the name, endpoint, and test script to run

Finally, you can define the data retention of the canary data, any alarm thresholds, and access permissions. Then, you click the Create canary button to begin running your test.

Configure data retention, alarm thresholds, and permissions

Because it’s AWS, you can set up the same with the CLI or SDK.
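As a sketch of what the CLI path might look like, here is a hypothetical `create-canary` invocation. The bucket names, role ARN, and script location are placeholders, and the preview CLI’s exact flags may differ, so treat this as an illustration rather than a recipe:

```shell
# Hypothetical sketch: create a canary from the AWS CLI (preview-era API).
# The S3 buckets, handler, and role ARN below are placeholders.
aws synthetics create-canary \
  --name demo-canary \
  --code Handler=pageLoadBlueprint.handler,S3Bucket=my-canary-scripts,S3Key=canary.zip \
  --artifact-s3-location s3://my-canary-artifacts/ \
  --execution-role-arn arn:aws:iam::123456789012:role/my-canary-role \
  --runtime-version syn-1.0 \
  --schedule Expression="rate(5 minutes)"
```

The schedule expression controls whether the canary runs once or continuously, mirroring the console option described above.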

CloudFormation Support

None yet, but you can write your own CloudFormation resource provider.


Pricing

The pricing for Amazon CloudWatch Synthetics is straightforward. You are charged $0.0012 per canary run. Using the CloudWatch pricing calculator – as shown below – if you run 100,000 canary runs in a given month, you will pay $120.
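The arithmetic is simple enough to check in a few lines of shell (using integer math in hundredths of a cent to avoid floating point):

```shell
# Verify the Synthetics pricing scenario: $0.0012 per canary run.
# 0.0012 USD = 12 hundredths of a cent; 1 USD = 10,000 hundredths of a cent.
runs_per_month=100000
total_hundredth_cents=$(( runs_per_month * 12 ))
echo "USD $(( total_hundredth_cents / 10000 ))"   # 100,000 runs -> USD 120
```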

CloudWatch Synthetics Pricing Scenario

Amazon CodeGuru

Amazon CodeGuru is a machine learning service for automated code reviews and application performance recommendations. It helps you find the most expensive lines of code that hurt application performance and keep you up all night troubleshooting, then gives you specific recommendations to fix or improve your code. As of December 2019, it is in preview.

CodeGuru is trained on decades of experience and knowledge at Amazon, and it evolves with user feedback. It automatically inspects code for hard-to-find defects and provides actionable recommendations to fix the issues it identifies. The Profiler searches for optimizations continuously, even in production, and helps you find the most promising methods for optimization in your running application.
CodeGuru currently supports Java code.
As AWS states, “It is like having a distinguished engineer on call 24×7” [Source].
CodeGuru does not persist customer code – it analyzes the code and then deletes its copy. It does not perform security analysis or testing at this time.

CodeGuru Reviewer

CodeGuru Reviewer provides these features:

  • Automated code review comments
  • Integrates with GitHub and CodeCommit
  • Leverages pull request-based code review workflow

An illustration of how CodeGuru Reviewer works is shown below.

Amazon CodeGuru Reviewer – How it Works [Source]

To get started with CodeGuru Reviewer, you go to the CodeGuru console and associate a repository as shown in the image below.

Amazon CodeGuru Reviewer – Associate Repository [Source]

After associating a GitHub or CodeCommit repository, CodeGuru will analyze code changes during pull request workflows and provide detailed and targeted recommendations on how to improve the code – as shown below.
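Repository association can also be scripted. As a hedged sketch, with a placeholder CodeCommit repository name (associating a GitHub repository takes different arguments):

```shell
# Hypothetical sketch: associate a CodeCommit repository with CodeGuru Reviewer.
# "my-repo" is a placeholder repository name.
aws codeguru-reviewer associate-repository \
  --repository 'CodeCommit={Name=my-repo}'

# List associations to confirm the repository was picked up.
aws codeguru-reviewer list-repository-associations
```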

Amazon CodeGuru Reviewer – Recommendations [Source]

CodeGuru Profiler

“CodeGuru Profiler provides specific recommendations so you can take action immediately on issues such as excessive recreation of expensive objects, expensive deserialization, usage of inefficient libraries, and excessive logging.” [Source]

For example, between Amazon Prime Day 2017 and 2018, the technology behind CodeGuru Profiler helped drive a 325% increase in CPU utilization and a 39% reduction in cost.

As of December 2019, CodeGuru is supported in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Sydney).

Let’s get started by having a look at how CodeGuru Profiler works. First, you need to install a very small agent in your application. The agent collects runtime data from your application as your application runs. It runs in a separate thread within your application.

Then, it begins to produce visualizations of your application performance using heuristics and machine learning. It also provides actionable recommendations that can save time and money.
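One common way to attach a Java profiling agent is via the JVM’s `-javaagent` flag. The sketch below is an assumption: the jar file name, the `profilingGroupName` option syntax, and the application jar are all placeholders, so check the CodeGuru documentation for the exact artifact and syntax:

```shell
# Hypothetical sketch: start a Java application with the CodeGuru Profiler agent
# attached. The agent jar name and option string are assumptions.
java \
  -javaagent:codeguru-profiler-java-agent-standalone.jar="profilingGroupName:MyProfilingGroup" \
  -jar my-application.jar
```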

CodeGuru Profiler How it Works [Source]

An example CodeGuru Profiler visualization for a web application is shown in the figure below.

CodeGuru Profiler Visualizations [Source]

You can click on sections in the visualization to display detailed recommendations on how to improve application performance.

CodeGuru Profiler Recommendations [Source]

Language Support

Both CodeGuru Reviewer and Profiler support Java at this point. Expect support for other commonly used programming languages in the future.

CloudFormation Support

None yet, but you can write your own CloudFormation resource provider.


Pricing

  • Code scan (pull requests)
    • $0.75 per 100 lines of code scanned per month
  • Application profiling on Amazon EC2 Instances and Amazon ECS, EKS, and AWS Fargate Containers
    • $0.005 per sampling hour for the first 36,000 sampling hours per application profile per month. No additional charge beyond 36,000 sampling hours per application profile.

Amazon Detective

Amazon Detective helps you to quickly detect, analyze, and investigate the root cause of a security-related issue. With Amazon Detective, you can:

  • Perform faster and more effective investigations
  • Save time and effort with continuous data updates
  • Access easy-to-use visualizations

As of December 2019, this service is in preview.

Amazon Detective – How it Works [Source]

In a typical security incident response scenario, AWS has many services to enhance investigations. These include AWS Systems Manager, AWS Config, and Amazon CloudWatch to identify security incidents; AWS Shield, AWS Secrets Manager, AWS WAF, and AWS IAM to protect access to resources; Amazon Inspector, Amazon Macie, Amazon GuardDuty, and AWS Security Hub to protect against and detect security incidents; and Amazon CloudWatch and AWS Lambda to investigate and respond. To automate the detection and response, you can use AWS Lambda. Finally, to recover, you can use things like S3 snapshots and Glacier archives to support disaster recovery scenarios.

AWS Services that Enhance Security Investigations [Source]

While AWS provides a lot of services, there’s also a lot of complexity and noise in getting access to the right data to appropriately respond to a security incident. What’s more, there’s a skills shortage of security analysts and, as a result of the people and complexity involved, significant costs associated with responding to these incidents.

Investigation Challenges [Source]

This is why AWS announced the preview release of Amazon Detective which provides this built-in data collection, automated analysis augmented by machine learning, and visual insights.

Amazon Detective – Key Features [Source]

There are three example use cases that AWS shared at re:Invent: Alert triage, incident investigation, and threat hunting.

Alert Triage Use Case [Source]

For Alert triage, you need to get quick answers to questions like whether it’s abnormal traffic, what happened right before, whether it’s a common failure, and how much data was sent.

Incident Investigation Use Case [Source]

For Incident investigation, you want to know things like what other EC2 instances communicated with the IP address, do the calls indicate that someone is poking around looking for holes, and if there were other principal IDs being used.

Threat Hunting Use Case [Source]

Finally, for threat hunting, you might want to know whether the suspicious user agent made any API calls and whether the same IP Address communicated with any of your instances over the past year.

Let’s walk through an example using Amazon GuardDuty, which already has deep integration with Amazon Detective. The GuardDuty findings page is shown below. From here, you can click on Actions and then Investigate, which opens the Amazon Detective console for the selected finding.

Access Amazon Detective from Amazon GuardDuty [Source]

From here, you can delve into the details in the Amazon Detective console – as shown below.

Amazon Detective CloudTrail Details [Source]

And you can delve into even more detail to help triage, investigate, and resolve the issue.

CloudFormation Support

None yet, but you can write your own CloudFormation resource provider.


Pricing

For more information, see Amazon Detective pricing.

  • First 1,000 GB/month is $2.00 per GB
  • Next 4,000 GB/month is $1.00 per GB
  • Next 5,000 GB/month is $0.50 per GB
  • Over 10,000 GB/month is $0.25 per GB
  • This includes
    • Data Sources
      • VPC Flow Logs
      • CloudTrail management events
      • GuardDuty findings
    • 1 year of security graph data

AWS Fargate Support for Amazon EKS

With AWS Fargate Support for Amazon EKS, you no longer need to provision and manage your own EC2 cluster for your EKS nodes. This significantly reduces the operational burden as AWS manages the underlying compute resources for you.

AWS also released AWS Fargate Spot at re:Invent which can help you save up to 70% on your use of AWS Fargate.

With EKS on Fargate you can bring any existing pods that you have created elsewhere. Each pod runs in an isolated environment across multiple availability zones. You only pay for the resources you need to run your pods. With EKS on Fargate, AWS includes native integrations for networking and security.


EKS on Fargate – Key Features [Source]

Getting Started

To get started, install the AWS CLI and eksctl. For more information on using eksctl, see Getting Started with eksctl.

# Install or upgrade the AWS CLI
pip install awscli --upgrade --user
# Download and install eksctl, then verify it
curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
eksctl version

Now, you can use eksctl to create an EKS Cluster.

eksctl create cluster --name demo-mphasis --region us-east-1 --fargate

After the cluster is created (which might take up to 10 minutes or so), you should see an active cluster – as shown below:

Next, you need to create a Fargate profile. To do so, go to the Amazon EKS console and select the demo-mphasis cluster. On the details pane under Fargate profiles, choose Add Fargate profile.

On the profile configuration page, enter demo-mphasis as the name and select the FargatePodExecutionRole generated by the eksctl tool.

Now, you will configure the pod as shown below:

Review and create the Fargate profile.
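The same profile can also be created with eksctl in a single command. A sketch, where the profile name and namespace are illustrative choices rather than required values:

```shell
# Create a Fargate profile so pods in the selected namespace are
# scheduled onto Fargate. The namespace "default" is an example.
eksctl create fargateprofile \
  --cluster demo-mphasis \
  --name demo-mphasis-profile \
  --namespace default
```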

With EKS on Fargate you no longer need to:

  • Manage Kubernetes worker nodes
  • Pay for unused capacity
  • Use K8s Cluster Autoscaler (CA)

What you get “out of the box”:

  • VM isolation at the pod level
  • Pod-level billing
  • Easy chargeback in multi-tenant scenarios

You cannot:

  • Deploy DaemonSets
  • Use service type LoadBalancer (CLB/NLB)
  • Run privileged containers
  • Run stateful workloads


Key Aspects of EKS on Fargate [Source]

Fargate for EKS is generally available. As of December 2019, Fargate support for EKS is available in the following public AWS Regions: US East (N. Virginia), US East (Ohio), AP Northeast (Tokyo), and EU (Ireland).

CloudFormation Support

None yet, but you can write your own CloudFormation resource provider.


Pricing

  • Each Amazon EKS cluster costs $0.20 per hour, and pods running on Fargate are billed at the same rates as AWS Fargate for ECS (per vCPU and per GB of memory consumed).

AWS IAM Access Analyzer

IAM Access Analyzer informs you which resources in your account you are sharing with external principals, using logic-based reasoning to analyze resource-based policies in your AWS environment. This feature is generally available.

Access Analyzer uses automated reasoning to analyze a combinatorial number of scenarios. With Access Analyzer, you can:

  • quickly analyze thousands of resource policies across your account
  • continuously monitor impact of policy changes on access to your resources
  • comprehensively achieve the highest levels of security through automated reasoning

While these are some fantastic benefits to the current set of features in IAM Access Analyzer, I’m probably more excited about the possibilities of applying automated reasoning to resource-based policies. In particular, I’m looking forward to the time when AWS can help us define least-privilege permissions on IAM policies based on usage.

Until then, let’s learn the basics of IAM Access Analyzer.

To get started, you go to your IAM console and click on Access analyzer on the left pane. The Analyzer analyzes the resource-based policies for IAM Roles, S3 Buckets, Lambda functions, KMS Keys, and SQS Queues. It then generates findings on who has access to what.

Next, click the Create analyzer button as shown in the figure below.

Create an Analyzer from the IAM Access Analyzer Console

The name of the analyzer is pre-populated for you but you can modify the name as well. You can add any tags and then click the Create analyzer button.

Provide a name and tag for your Analyzer

Once you create the analyzer, the findings generally don’t immediately display so you’ll need to give it some time. Once the findings are generated, they’ll look similar to what you see in the figure below.

Findings generated by IAM Access Analyzer [Source]

IAM Access Analyzer currently works at the account level, but AWS announced that support for AWS Organizations – which will let you run Access Analyzer across multiple AWS accounts – is coming soon.
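Analyzer creation and findings queries can also be scripted from the CLI. As a sketch, with a placeholder account ID and analyzer ARN:

```shell
# Create an account-level analyzer.
aws accessanalyzer create-analyzer \
  --analyzer-name MyAccountAnalyzer \
  --type ACCOUNT

# Findings take a little while to generate; the ARN below is a placeholder.
aws accessanalyzer list-findings \
  --analyzer-arn arn:aws:access-analyzer:us-east-1:123456789012:analyzer/MyAccountAnalyzer
```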

Below is an illustration of a solution from Millennium Management that was described in one of the breakout sessions at re:Invent. In the solution, they’re using an AWS Step Functions workflow to analyze IAM Access Analyzer findings, determine a risk level, automatically remediate through AWS Lambda, and then notify team members.

Automated Remediation Solution from Millennium Management shown at re:Invent [Source]

CloudFormation Support

Yes, using AWS::AccessAnalyzer::Analyzer.

Here’s an example CloudFormation template:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  Analyzer:
    Type: 'AWS::AccessAnalyzer::Analyzer'
    Properties:
      AnalyzerName: MyAccountAnalyzer
      Type: ACCOUNT
      Tags:
        - Key: Kind
          Value: Dev
      ArchiveRules:
        # Archive findings for a trusted AWS account
        - RuleName: ArchiveTrustedAccountAccess
          Filter:
            - Property: 'principal.AWS'
              Eq:
                - '123456789012'
        # Archive findings for known public S3 buckets
        - RuleName: ArchivePublicS3BucketsAccess
          Filter:
            - Property: 'resource'
              Eq:
                - 'arn:aws:s3:::docs-bucket'
                - 'arn:aws:s3:::clients-bucket'



Honorable Mentions

While they didn’t make Stelligent’s DevOps top 5, we thought the following announcements were worthy of honorable mention and we expect these services to be very useful for our customers.  

  • Amazon ECS CLI v2 – provides a workflow to develop, test, deploy, operate, and observe containerized applications, all without extensive prior knowledge of AWS.
  • AWS KMS Asymmetric keys – Use different keys for encrypting and decrypting data
  • AWS CloudFormation Registry and CloudFormation CLI – The CloudFormation Registry provides a per-account, per-region storage for your resource providers. You can use the open source CloudFormation CLI to create resource providers in a safe and systematic way.
  • The Amazon Builders’ Library – detailed knowledge base of resources describing how Amazon builds and operates software.
  • AWS CodeBuild – Test Reports with AWS CodeBuild – integrate the reports generated by functional or integration tests into the CodeBuild console.
  • EC2 Image Builder – simplifies the creation, maintenance, validation, sharing, and deployment of Linux or Windows Server images for use with Amazon EC2 and on-premises.
  • Code* Notifications – receive notifications about events in repositories, build projects, deployments, and pipelines when you use AWS CodeCommit, AWS CodeBuild, AWS CodeDeploy, and/or AWS CodePipeline.


Here’s a summary of some of the recent CloudFormation changes related to DevOps and security.

What’s Next?

These services and features accelerate the speed and safety of feedback by ensuring your code is performant, your resources are more secure, and that you have the means to mitigate any potential security incidents.

What’s most exciting to see is how AWS is leveraging accelerant technologies such as machine learning and automated reasoning in the software delivery lifecycle (and, yes, this includes production).

All of these capabilities continue to move up the stack from primitive building blocks to managed services. As Werner Vogels, the CTO of Amazon, said in his 2018 re:Invent keynote, “The only thing we want to do is to build business logic” – and AWS is helping teams achieve this.

(2018 re:Invent keynote) The future is that builders will be able to focus on business logic [Source]

The big DevOps announcements helped make it easier to:

  • discover production errors before your customers do.
  • learn and fix hard to find defects and expensive lines of code that hurt application performance.
  • rapidly get to the root cause of potential security issues.
  • reduce the complexity in deploying and managing Kubernetes.
  • identify resources that can be accessed from outside an AWS account.

We expect that all of these features will help speed up feedback loops between customers and developers. We’re excited to get to work with our customers in leveraging these accelerants.
